Authors: T. Wark, S. Sridharan
DOI: 10.1109/ICASSP.1998.679685
Keywords:
Abstract: This paper presents a novel technique for the tracking and extraction of features from the lips for the purpose of speaker identification. In noisy or other adverse conditions, identification performance via the speech signal can degrade significantly, hence additional information which can complement the speech signal is of particular interest. In our system, syntactic information is derived from chromatic information in the lip region. A model of the lip contour is formed directly from this information, with no minimization procedure required to refine the estimates. Colour features are then extracted from profiles taken around the contour. Further improvement is obtained through linear discriminant analysis (LDA). Speaker models are built based on Gaussian mixture models (GMM). Identification experiments are performed on the M2VTS database, with encouraging results.
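To make the classification back-end described in the abstract concrete, the sketch below shows a minimal LDA + GMM speaker-identification pipeline. It is not the authors' implementation: the scikit-learn classes, the synthetic placeholder features (standing in for the colour-profile features extracted around the lip contour), and parameter choices such as 8 mixture components are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code) of the abstract's back-end:
# LDA projection of lip-colour features, one GMM per speaker, and
# identification by the highest average log-likelihood.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

n_speakers, frames_per_speaker, feat_dim = 4, 200, 12
# Hypothetical training data: one feature matrix of colour-profile
# vectors per speaker (placeholder for features from the lip contour).
train_feats = [rng.normal(loc=i, size=(frames_per_speaker, feat_dim))
               for i in range(n_speakers)]

X = np.vstack(train_feats)
y = np.repeat(np.arange(n_speakers), frames_per_speaker)

# LDA projection to improve class separability, as suggested in the abstract.
lda = LinearDiscriminantAnalysis(n_components=n_speakers - 1).fit(X, y)

# One GMM speaker model trained on that speaker's projected features.
models = [GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(lda.transform(f))
          for f in train_feats]

def identify(test_frames: np.ndarray) -> int:
    """Return the speaker index whose GMM gives the highest mean log-likelihood."""
    z = lda.transform(test_frames)
    scores = [m.score(z) for m in models]  # mean log-likelihood per frame
    return int(np.argmax(scores))

# Example: frames drawn near speaker 2's distribution should map to speaker 2.
test = rng.normal(loc=2, size=(50, feat_dim))
print("identified speaker:", identify(test))
```

The per-speaker GMM with maximum-likelihood scoring mirrors standard GMM-based speaker identification; the actual feature extraction from chromatic lip-region information is outside the scope of this sketch.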