GPU-accelerated clique tree propagation for pouch latent tree models

Author: Leonard K. M. Poon

DOI: 10.1007/978-3-030-05677-3_8

Keywords:

Abstract: Pouch latent tree models (PLTMs) are a class of probabilistic graphical models that generalize Gaussian mixture models (GMMs). PLTMs produce multiple clusterings simultaneously and have been shown to outperform GMMs for cluster analysis in previous studies. However, due to the considerably higher number of possible structures, training PLTMs is more time-demanding than training GMMs. This has thus limited the application of PLTMs to only small data sets. In this paper, we consider using GPUs to exploit two parallelism opportunities, namely data parallelism and element-wise parallelism, for PLTMs. We focus on clique tree propagation, since this exact inference procedure is a strenuous task that is recurrently called for each data sample and each model structure during PLTM training. Our experiments with real-world data sets show that the GPU-accelerated implementation can achieve up to 52x speedup over the sequential implementation running on CPUs. The experiment results signify promising potential for further improvement on the full training of PLTMs with GPUs.
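
To make the two parallelism opportunities concrete, the sketch below shows a minimal CUDA kernel that multiplies two clique potential tables entry by entry, with one GPU thread per table entry (element-wise parallelism) and a batch dimension over data samples (data parallelism). The kernel name, table size, and batch size are illustrative assumptions for this sketch, not the implementation described in the paper.

```cuda
// Illustrative sketch only: element-wise product of two clique potential
// tables for a batch of data samples. Names and sizes are hypothetical.
#include <cstdio>
#include <cuda_runtime.h>

// One thread per entry: element-wise parallelism within each potential table,
// data parallelism across the `batch` samples laid out contiguously.
__global__ void multiplyPotentials(const float* a, const float* b,
                                   float* out, int tableSize, int batch) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int total = tableSize * batch;
    if (idx < total) {
        out[idx] = a[idx] * b[idx];
    }
}

int main() {
    const int tableSize = 1024;   // hypothetical clique table size
    const int batch = 256;        // hypothetical number of data samples
    const int total = tableSize * batch;

    float *a, *b, *out;
    cudaMallocManaged(&a, total * sizeof(float));
    cudaMallocManaged(&b, total * sizeof(float));
    cudaMallocManaged(&out, total * sizeof(float));
    for (int i = 0; i < total; ++i) { a[i] = 0.5f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (total + threads - 1) / threads;
    multiplyPotentials<<<blocks, threads>>>(a, b, out, tableSize, batch);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 1.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

In an actual clique tree propagation step, such element-wise products (together with sums for marginalization) would be applied to the potentials of neighbouring cliques during message passing; the sketch only illustrates how the work maps onto GPU threads.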
