Abstract: This work presents a decentralized, approximate method for performing variational inference over a network of learning agents. The key difficulty with decentralization is that most Bayesian models require the use of approximate inference algorithms, but such approximations destroy symmetry and dependencies in the model that are crucial to properly combining the local posteriors from each individual agent. The paper first investigates how approximate inference schemes break these models. Using insights gained from this investigation, an optimization problem is proposed whose solution accounts for those broken dependencies when combining the local posteriors. Experiments on synthetic and real data demonstrate that the method provides advantages in computational performance and predictive test likelihood over previous centralized and distributed methods.
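As a brief illustration of the combination step the abstract refers to, the sketch below shows the standard exact rule for fusing local posteriors when the agents' data are conditionally independent given the parameters: p(θ | y_1..K) ∝ p(θ)^(1−K) ∏_k p(θ | y_k), specialized to Gaussian posteriors where it reduces to adding natural parameters. This is a minimal sketch under that standard assumption, not the paper's proposed method (whose point is precisely that approximate local posteriors break the dependencies this rule relies on); all function and variable names are hypothetical.

```python
import numpy as np

def combine_gaussian_posteriors(local_means, local_precisions,
                                prior_mean, prior_precision):
    """Fuse K local Gaussian posteriors by multiplying their densities
    and dividing out the shared prior K-1 times (exact under conditional
    independence of each agent's data given the parameters)."""
    K = len(local_means)
    # Natural parameters add under density multiplication;
    # subtracting (K-1) copies of the prior's parameters divides it out.
    prec = sum(local_precisions) - (K - 1) * prior_precision
    lin = (sum(p * m for p, m in zip(local_precisions, local_means))
           - (K - 1) * prior_precision * prior_mean)
    return lin / prec, prec  # combined posterior mean and precision

# Usage: three agents, each with a local Gaussian posterior over a scalar.
mean, prec = combine_gaussian_posteriors(
    local_means=[0.9, 1.1, 1.0], local_precisions=[4.0, 5.0, 3.0],
    prior_mean=0.0, prior_precision=1.0)
```

When the local posteriors come from approximate (e.g., mean-field) inference rather than exact conditioning, this naive fusion can be badly wrong, which is the failure mode the paper's optimization-based correction targets.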