Authors: M. Korzeń, S. Jaroszewicz, P. Klęsk
DOI: 10.1016/J.CSDA.2013.03.013
Keywords: Generalization, Laplace operator, Applied mathematics, Elastic net regularization, Logistic regression, Feature selection, Hyperparameter, Mathematics, Statistics, Prior probability, Gaussian
Abstract: A generalization of the commonly used Maximum Likelihood based learning algorithm for the logistic regression model is considered. It is well known that using a Laplace prior (L^1 penalty) on the coefficients leads to a variable selection effect, where most coefficients vanish. It is argued that this is not always desirable; it is often better to group correlated variables together and assign equal weights to them. Two new kinds of a priori distributions over the coefficients are investigated: Gaussian Extremal Mixture (GEM) and Laplacian Extremal Mixture (LEM), which enforce grouping in a manner analogous to L^1 and L^2 regularization. An efficient learning algorithm is presented, which simultaneously finds the model coefficients and the hyperparameters of those priors. Examples are shown in the experimental part where the proposed priors outperform Gauss priors as well as other methods that take coefficient grouping into account, such as the elastic net. Theoretical results on parameter shrinkage and sample complexity are also included.
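To illustrate the contrast the abstract draws, here is a minimal sketch (not the authors' code; the GEM and LEM priors are not available in standard libraries) showing how an L^1 penalty tends to select a single feature from a correlated group, while the elastic net spreads weight across the group — the grouping behaviour the proposed priors enforce more directly. The data generation and all parameter values are illustrative assumptions; it uses scikit-learn's LogisticRegression.

```python
# Sketch: variable selection (L^1 / Laplace prior) vs. coefficient grouping
# (elastic net) in logistic regression. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Three strongly correlated informative features plus five noise features.
z = rng.normal(size=n)
X = np.column_stack(
    [z + 0.1 * rng.normal(size=n) for _ in range(3)]
    + [rng.normal(size=n) for _ in range(5)]
)
y = (z + 0.5 * rng.normal(size=n) > 0).astype(int)

# L^1 penalty (Laplace prior): typically keeps one of the correlated
# features and drives the others to exactly zero.
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

# Elastic net (mix of L^1 and L^2): tends to assign similar nonzero
# weights to all three correlated features.
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.1, max_iter=5000).fit(X, y)

print("L1 coefficients:         ", np.round(l1.coef_.ravel(), 3))
print("Elastic net coefficients:", np.round(enet.coef_.ravel(), 3))
```

Comparing the two printed coefficient vectors makes the grouping effect visible: the first three entries are near-equal under the elastic net but mostly zeroed under the pure L^1 penalty.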