Authors: Jun Zhu, Bo Zhang, Chongxuan Li
DOI:
Keywords:
Abstract: Deep generative models (DGMs) are effective at learning multilayered representations of complex data and performing inference on input data by exploiting their generative ability. However, relatively little has been done to empower the discriminative ability of DGMs for making accurate predictions. This paper presents max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs), which explore the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining their generative capability. In semi-supervised learning, we use the predictions of a max-margin classifier as the missing labels, instead of performing full posterior inference, for efficiency; we also introduce additional max-margin and label-balance regularization terms on unlabeled data for effectiveness. We develop an efficient doubly stochastic subgradient algorithm for the piecewise linear objectives in different settings. Empirical results on various datasets demonstrate that: (1) max-margin learning can significantly improve the prediction performance of DGMs while retaining their generative ability; (2) mmDGMs are competitive with the best fully discriminative networks when employing convolutional neural networks as the generative and recognition models; and (3) mmDCGMs perform efficient inference and achieve state-of-the-art classification results on several benchmarks.
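To illustrate the piecewise linear max-margin objective mentioned in the abstract, below is a minimal sketch (not the authors' implementation) of a stochastic subgradient step on the hinge loss for a linear classifier. The toy data, dimensions, learning rate, and regularization constant `C` are all illustrative assumptions; the paper's doubly stochastic algorithm additionally samples latent variables of the deep generative model, which is omitted here.

```python
import random

def hinge_subgrad_step(w, x, y, lr=0.1, C=1.0):
    """One subgradient step on the piecewise linear loss C * max(0, 1 - y * <w, x>)."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    if margin < 1.0:           # inside the margin: subgradient of the loss is -C * y * x
        return [wi + lr * C * y * xi for wi, xi in zip(w, x)]
    return list(w)             # on the correct side of the margin: zero subgradient

# Toy linearly separable data: the label is the sign of the first coordinate.
data = [([1.0, 0.3], 1), ([-1.0, 0.2], -1), ([0.8, -0.5], 1), ([-0.9, -0.1], -1)]
w = [0.0, 0.0]
random.seed(0)
for _ in range(200):
    x, y = random.choice(data)  # stochastic over data; the paper is also stochastic over latents
    w = hinge_subgrad_step(w, x, y)

accuracy = sum(
    1 for x, y in data
    if (1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1) == y
) / len(data)
print(accuracy)
```

The hinge loss is piecewise linear in `w`, so it is non-differentiable at the margin boundary; subgradients sidestep this, which is why the paper develops a (doubly) stochastic subgradient method rather than plain gradient descent.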