Authors: Xin Liu, Yiu-ming Cheung
DOI: 10.1109/TIFS.2013.2293025
Keywords:
Abstract: This paper proposes a concept of lip motion password (simply called lip-password hereinafter), which is composed of a password embedded in the lip movement and the underlying characteristics of that motion. It provides double security to a visual speaker verification system, where the claimed speaker is verified by both the private password information and the behavioral biometrics of lip motions simultaneously. Accordingly, a target speaker saying the wrong password, or an impostor who knows the correct password, will be detected and rejected. To this end, we present a multi-boosted Hidden Markov model (HMM) learning approach to such a system. Initially, we extract a group of representative visual features to characterize each lip frame. Then, an effective lip motion segmentation algorithm is addressed to segment the lip-password sequence into a small set of distinguishable subunits. Subsequently, we integrate HMMs with a boosting learning framework, associated with a random subspace method and a data sharing scheme, to formulate a precise decision boundary for the verification of these subunits, featuring high discrimination power. Finally, the lip-password, whether or not it is spoken with the pre-registered password, is identified based on all the subunit verification results learned from the multi-boosted HMMs. The experimental results show that the proposed approach performs favorably compared with state-of-the-art methods.
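The abstract describes a two-stage decision: each segmented subunit is verified by a boosted ensemble of HMMs, and the utterance is accepted only if every subunit passes. The following is a minimal sketch of that combination logic only; the HMM scoring itself is stubbed out as precomputed log-likelihood ratios, and all function names, weights, and thresholds are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of subunit-level score fusion in the spirit of the
# multi-boosted HMM approach. Real per-frame features, segmentation, and
# HMM training are omitted; scores below are assumed precomputed.

def boosted_subunit_decision(log_likelihood_ratios, alphas, threshold=0.0):
    """AdaBoost-style weighted vote over weak HMM verifiers for one subunit.

    log_likelihood_ratios: each weak verifier's client-vs-background score.
    alphas: the weak verifiers' boosting weights.
    Returns True if the subunit is accepted for the claimed speaker/password.
    """
    vote = sum(a * (1 if llr > 0 else -1)
               for a, llr in zip(alphas, log_likelihood_ratios))
    return vote > threshold

def verify_lip_password(subunit_scores, subunit_alphas):
    """Accept only if every segmented subunit passes, so either a wrong
    password element or an impostor's lip motion causes rejection."""
    return all(boosted_subunit_decision(scores, alphas)
               for scores, alphas in zip(subunit_scores, subunit_alphas))

# Example: two subunits, each scored by three boosted weak HMM verifiers.
alphas = [[0.5, 0.3, 0.2], [0.6, 0.2, 0.2]]
genuine = [[1.2, 0.4, -0.1], [0.8, 0.5, 0.3]]    # mostly positive LLRs
impostor = [[1.0, 0.2, 0.1], [-0.9, -0.4, 0.2]]  # second subunit fails
print(verify_lip_password(genuine, alphas))   # True
print(verify_lip_password(impostor, alphas))  # False
```

The all-subunits-must-pass rule reflects the double-security idea in the abstract: both the password content and the speaker's motion characteristics have to match for acceptance.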