Authors: Lawrence Carin, Changyou Chen, Ruiyi Zhang, Jianyi Zhang
DOI:
Keywords:
Abstract: Particle-optimization-based sampling (POS) is a recently developed, effective sampling technique that interactively updates a set of particles. A representative algorithm is the Stein variational gradient descent (SVGD). We prove that, under certain conditions, SVGD experiences a theoretical pitfall, {\it i.e.}, particles tend to collapse. As a remedy, we generalize POS to a stochastic setting by injecting random noise into particle updates, thus yielding stochastic particle-optimization sampling (SPOS). Notably, for the first time, we develop a {\em non-asymptotic convergence theory} for the SPOS framework (related to SVGD), characterizing convergence in terms of the 1-Wasserstein distance w.r.t.\! the number of particles and iterations. Somewhat surprisingly, with the same number of updates (not too large) for each particle, our theory suggests that adopting more particles does not necessarily lead to a better approximation of the target distribution, due to a limited computational budget and numerical errors. This phenomenon is also observed and verified via an experiment on synthetic data. Extensive experimental results verify our theory and demonstrate the effectiveness of the proposed framework.
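To make the idea concrete, here is a minimal sketch of one particle update of the kind the abstract describes: an SVGD-style drift (kernelized score plus repulsion) with injected Gaussian noise, as in SPOS. This is an illustrative simplification, not the paper's exact algorithm; the RBF bandwidth `h`, step size `eps`, and noise scale `beta_inv` are placeholder choices.

```python
import numpy as np

def rbf_kernel_and_grad(X, h=1.0):
    """Pairwise RBF kernel k(x_j, x_i) and its gradient w.r.t. x_j."""
    diff = X[:, None, :] - X[None, :, :]      # (n, n, d): x_j - x_i
    sq = np.sum(diff ** 2, axis=-1)           # (n, n) squared distances
    K = np.exp(-sq / (2.0 * h))               # K[j, i] = k(x_j, x_i)
    gradK = -(diff / h) * K[:, :, None]       # grad_{x_j} k(x_j, x_i)
    return K, gradK

def spos_step(X, grad_logp, eps=0.05, beta_inv=0.1, rng=None):
    """One stochastic particle-optimization update (illustrative).

    Drift:  phi(x_i) = (1/n) sum_j [k(x_j, x_i) grad log p(x_j)
                                    + grad_{x_j} k(x_j, x_i)]
    Noise:  sqrt(2 * eps / beta) * N(0, I), injected into each particle.
    """
    rng = np.random.default_rng() if rng is None else rng
    K, gradK = rbf_kernel_and_grad(X)
    score = grad_logp(X)                      # (n, d): grad log p at each x_j
    # Average over j (axis 0) for every target particle i.
    phi = (K[:, :, None] * score[:, None, :] + gradK).mean(axis=0)
    noise = rng.standard_normal(X.shape)
    return X + eps * phi + np.sqrt(2.0 * eps * beta_inv) * noise
```

Setting `beta_inv=0` recovers a plain deterministic SVGD-style update; the added noise term is what distinguishes the stochastic (SPOS) variant discussed in the abstract.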