Authors: Pierre Baldi, Yves Chauvin
DOI: 10.1162/NECO.1994.6.2.307
Keywords: Algorithm, Mathematics, Path (graph theory), Viterbi algorithm, Markov model, Approximation algorithm, Probabilistic analysis of algorithms, Representation (mathematics), Hidden Markov model, Weighted Majority Algorithm
Abstract: A simple learning algorithm for Hidden Markov Models (HMMs) is presented together with a number of variations. Unlike other classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, without the usual Viterbi most-likely-path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in an appendix.
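The abstract describes the key ingredients concretely enough to sketch: HMM parameters kept in normalized-exponential (softmax) form, with a smooth gradient-style update on the log-likelihood after each example, computed from forward-backward expected counts rather than a Viterbi path. The following is a minimal illustrative sketch under those assumptions; the function names (`online_step`, `forward_backward`), the learning rate, and the exact update scaling are choices made for illustration, not the paper's precise formulation.

```python
import numpy as np

def softmax(w):
    # Normalized-exponential representation: each row of logits w
    # maps to a stochastic row (positive entries summing to 1).
    e = np.exp(w - w.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward_backward(A, B, pi, obs):
    # Scaled forward-backward pass for a discrete-emission HMM.
    # Returns state posteriors gamma, expected transition counts xi,
    # and the log-likelihood of the observation sequence.
    T, n = len(obs), A.shape[0]
    alpha = np.zeros((T, n)); beta = np.zeros((T, n)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]
    gamma = alpha * beta
    xi = np.zeros((n, n))
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / c[t + 1]
    return gamma, xi, np.log(c).sum()

def online_step(W, V, pi, obs, lr=0.05):
    # One smooth on-line update after a single example (hypothetical helper).
    # Gradient of the log-likelihood w.r.t. the softmax logits:
    #   dL/dW[i, j] = E[n_ij] - E[n_i] * A[i, j]   (transitions)
    #   dL/dV[i, k] = E[m_ik] - E[m_i] * B[i, k]   (emissions)
    # where the expectations are the usual forward-backward counts.
    A, B = softmax(W), softmax(V)
    gamma, xi, ll = forward_backward(A, B, pi, obs)
    W += lr * (xi - xi.sum(axis=1, keepdims=True) * A)
    em = np.zeros_like(V)
    for t, o in enumerate(obs):
        em[:, o] += gamma[t]
    V += lr * (em - gamma.sum(axis=0)[:, None] * B)
    return ll

# Usage: fit a 3-state HMM to a random symbol sequence.
rng = np.random.default_rng(0)
n_states, n_symbols = 3, 4
W = rng.normal(size=(n_states, n_states))   # transition logits
V = rng.normal(size=(n_states, n_symbols))  # emission logits
pi = np.full(n_states, 1.0 / n_states)      # fixed uniform initial distribution
seq = rng.integers(0, n_symbols, size=50)
for _ in range(200):
    ll = online_step(W, V, pi, seq)
print("log-likelihood:", ll)                # increases across passes
```

Because the updates act on unconstrained logits and the softmax keeps the parameters on the probability simplex, each step stays smooth, in contrast to hard Viterbi-based reestimation.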