Improving Efficiency in Parameter Estimation Using the Hamiltonian Monte Carlo Algorithm

Author: Mohammed Alfaki

Abstract: Faculty of Mathematics and Natural Sciences, Department of Informatics, Master of Science thesis by Mohammed Alfaki. The Hamiltonian Monte Carlo algorithm, alternately called the hybrid Monte Carlo algorithm, is a Markov chain Monte Carlo technique that combines a Gibbs sampling update with the Metropolis acceptance-rejection rule. The algorithm simulates the target distribution using Hamiltonian dynamics, which uses gradient information to explore the parameter space and therefore has better convergence properties than simple Metropolis–Hastings algorithms. It suffers, however, from random walk behaviour in generating the momentum, and from an additional discretisation error when the dynamics are simulated with a constant step size. This thesis investigates three approaches to improve the performance of the algorithm. The first approach suppresses the random walk in the momentum by ordered over-relaxation. The second simulates the dynamics with an adaptive step size to reduce the simulation error. The third proposes to combine the two versions into one algorithm.
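The recipe described in the abstract (a Gibbs-style momentum refresh, gradient-driven Hamiltonian dynamics simulated by a leapfrog integrator with a constant step size, and a Metropolis accept/reject correction) can be illustrated with a minimal sketch. The code below is not the thesis implementation; it assumes a user-supplied log-density and its gradient, and the names hmc_step, log_prob, grad_log_prob, step_size and n_steps are illustrative only.

# Minimal sketch of one Hamiltonian Monte Carlo transition, under the
# assumptions stated above (illustrative names, not the thesis code).
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, step_size=0.1, n_steps=20, rng=None):
    """Gibbs momentum refresh, leapfrog simulation, Metropolis correction."""
    rng = np.random.default_rng() if rng is None else rng

    # Gibbs update: draw a fresh momentum from a standard normal.
    p = rng.standard_normal(q.shape)

    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration with a constant step size (the source of the
    # discretisation error the thesis aims to reduce).
    p_new += 0.5 * step_size * grad_log_prob(q_new)
    for _ in range(n_steps - 1):
        q_new += step_size * p_new
        p_new += step_size * grad_log_prob(q_new)
    q_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(q_new)

    # Metropolis acceptance-rejection rule on the total energy
    # (negative log joint density of position and momentum).
    current_h = -log_prob(q) + 0.5 * np.dot(p, p)
    proposed_h = -log_prob(q_new) + 0.5 * np.dot(p_new, p_new)
    if np.log(rng.uniform()) < current_h - proposed_h:
        return q_new   # accept the proposed state
    return q           # reject and keep the current state

if __name__ == "__main__":
    # Toy usage: sample from a 2-D standard normal distribution.
    log_prob = lambda q: -0.5 * np.dot(q, q)
    grad_log_prob = lambda q: -q
    q = np.zeros(2)
    samples = []
    for _ in range(1000):
        q = hmc_step(q, log_prob, grad_log_prob)
        samples.append(q)
    print("sample mean:", np.mean(samples, axis=0))

Because the leapfrog integrator is volume-preserving and time-reversible, this acceptance rule keeps the target distribution invariant despite the constant step-size error, which is the property the adaptive step-size approach in the thesis seeks to exploit more efficiently.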
