Authors: Bo Thiesson, Chong Wang
DOI:
Keywords:
Abstract: Described are variational Expectation Maximization (EM) embodiments for learning a mixture model using component-dependent data partitions, in which the E-step is sub-linear in the sample size while the algorithm still maintains provable convergence guarantees. The component-dependent partitions of data items into blocks are constructed according to a hierarchical data structure comprised of nodes, where each node corresponds to one block and stores statistics computed from the data items in the corresponding block. A modified EM algorithm computes initial parameter estimates, and an R-step updates the partitions; this process is repeated until convergence. Component membership probabilities are constrained so that all data items belonging to a particular block of a partition behave in the same way. The E-step can therefore consider blocks, or chunks, of data via their representative statistics, rather than considering individual data items.
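To make the block-level E-step concrete, below is a minimal sketch (not the patented method itself) of how responsibilities can be computed once per block from stored sufficient statistics for a one-dimensional Gaussian mixture, so the cost scales with the number of blocks rather than the number of data items. All function and variable names here are illustrative assumptions, not taken from the source.

```python
# Illustrative sketch: E-step over blocks of data represented by their
# sufficient statistics (count, sum, sum of squares). Every item in a block
# is constrained to share the same component membership probabilities.
import numpy as np

def block_e_step(blocks, means, variances, weights):
    """blocks: list of dicts with keys 'n', 'sum', 'sumsq' (1-D data).
    Returns per-block responsibilities, shape (num_blocks, num_components)."""
    K = len(means)
    resp = np.zeros((len(blocks), K))
    for b, blk in enumerate(blocks):
        n, s, ss = blk["n"], blk["sum"], blk["sumsq"]
        for k in range(K):
            # Total log-likelihood of the block under component k, computed
            # from the stored statistics alone:
            #   sum_i (x_i - mu_k)^2 = sumsq - 2*mu_k*sum + n*mu_k^2
            quad = ss - 2.0 * means[k] * s + n * means[k] ** 2
            log_lik = (-0.5 * n * np.log(2.0 * np.pi * variances[k])
                       - 0.5 * quad / variances[k])
            # Per-item average log-likelihood plus log mixture weight.
            resp[b, k] = np.log(weights[k]) + log_lik / n
        resp[b] -= np.max(resp[b])          # numerical stabilization
        resp[b] = np.exp(resp[b])
        resp[b] /= resp[b].sum()            # shared by all items in the block
    return resp
```

In a full algorithm following the abstract, these per-block responsibilities would feed an M-step weighted by the block statistics, and an R-step would refine or coarsen the component-dependent partitions before the next iteration; that surrounding loop is omitted here.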