DOI: 10.1007/978-3-030-05677-3_8
Abstract: Pouch latent tree models (PLTMs) are a class of probabilistic graphical models that generalizes Gaussian mixture models (GMMs). PLTMs produce multiple clusterings simultaneously and have been shown to perform better than GMMs for cluster analysis in previous studies. However, due to the considerably higher number of possible structures, the training of PLTMs is more time-demanding than that of GMMs. This has thus far limited the application of PLTMs to only small data sets. In this paper, we consider using GPUs to exploit two parallelism opportunities for PLTMs, namely data parallelism and element-wise parallelism. We focus on clique tree propagation, since this exact inference procedure is a strenuous task and is recurrently called for each data sample and each model structure during PLTM training. Our experiments with real-world data sets show that the GPU-accelerated implementation can achieve up to 52x speedup over the sequential implementation running on CPUs. The experiment results signify the promising potential for further improvement on the full training of PLTMs with GPUs.
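As a rough illustration of how the two parallelism opportunities could be combined, the CUDA sketch below maps one element-wise step of clique tree propagation (multiplying a clique potential table by an incoming message) onto a 2-D GPU grid: the x dimension covers table entries (element-wise parallelism) and the y dimension covers data samples (data parallelism). This is a minimal sketch under assumed data layouts; the kernel name, table sizes, and flat row-major layout are hypothetical and not taken from the paper's implementation.

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel: one element-wise step of clique tree propagation.
// Each sample has its own copy of a clique potential table with
// `num_states` entries, stored row-major as potentials[n * num_states + s].
__global__ void multiplyMessageKernel(float *potentials, const float *messages,
                                      int num_states, int num_samples) {
    int s = blockIdx.x * blockDim.x + threadIdx.x;  // table entry (element-wise)
    int n = blockIdx.y;                             // data sample (data parallel)
    if (s < num_states && n < num_samples) {
        int idx = n * num_states + s;
        potentials[idx] *= messages[idx];  // element-wise product with message
    }
}

int main() {
    const int num_states = 1024, num_samples = 256;  // assumed sizes
    size_t bytes = (size_t)num_states * num_samples * sizeof(float);

    float *d_pot, *d_msg;
    cudaMalloc(&d_pot, bytes);
    cudaMalloc(&d_msg, bytes);
    // ... fill d_pot and d_msg with clique potentials and messages ...

    // x covers table entries, y covers samples, so all samples'
    // propagations for this step run concurrently on the GPU.
    dim3 block(256);
    dim3 grid((num_states + block.x - 1) / block.x, num_samples);
    multiplyMessageKernel<<<grid, block>>>(d_pot, d_msg, num_states, num_samples);
    cudaDeviceSynchronize();

    cudaFree(d_pot);
    cudaFree(d_msg);
    return 0;
}
```

The design point the abstract hints at is visible here: because propagation is recurrently called for each data sample, batching all samples along one grid dimension keeps the GPU saturated even when a single clique table is too small to fill the device on its own.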