Authors: Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, Emmanuel J. Candès
DOI: 10.1214/15-AOAS842
Keywords: Quantile, Combinatorics, Normal distribution, Linear model, Lasso (statistics), Linear regression, Convex optimization, False discovery rate, Estimator, Mathematics, Statistics
Abstract: We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to

    minimize over b ∈ R^p:  (1/2) ||y − Xb||² + λ₁|b|₍₁₎ + λ₂|b|₍₂₎ + ⋯ + λ_p|b|₍_p₎,

where λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_p ≥ 0 and |b|₍₁₎ ≥ |b|₍₂₎ ≥ ⋯ ≥ |b|₍_p₎ are the decreasing absolute values of the entries of b. This is a convex program, and we demonstrate an algorithm whose computational complexity is roughly comparable to that of classical ℓ₁ procedures such as the Lasso. Here, the regularizer is a sorted ℓ₁ norm, which penalizes the regression coefficients according to their rank: the higher the rank, that is, the stronger the signal, the larger the penalty. This is similar to the Benjamini-Hochberg procedure (BH) [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300], which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λᵢ} is given by the BH critical values λ_BH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the α-quantile of the standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λ_BH provably controls the FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs, while having substantial power, as demonstrated in a series of experiments on both simulated and real data.
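The two ingredients described in the abstract, the decreasing BH-style critical values λ_BH(i) and the sorted ℓ₁ penalty that pairs larger coefficients with larger weights, can be sketched in a few lines. This is a minimal illustration (function names are my own, not from the paper's software), not the full SLOPE solver:

```python
from statistics import NormalDist

def bh_lambdas(p, q):
    """BH critical values lambda_i = z(1 - i*q/(2p)), i = 1..p.

    z(alpha) is the alpha-quantile of the standard normal, so the
    sequence is strictly decreasing in i, as SLOPE requires.
    """
    z = NormalDist().inv_cdf
    return [z(1 - i * q / (2 * p)) for i in range(1, p + 1)]

def sorted_l1_penalty(b, lambdas):
    """Sorted-l1 norm: sum of lambda_i * |b|_(i).

    The absolute entries of b are sorted in decreasing order, so the
    largest coefficient (rank 1) receives the largest weight lambda_1.
    """
    mags = sorted((abs(x) for x in b), reverse=True)
    return sum(lam * m for lam, m in zip(lambdas, mags))
```

For example, with p = 5 and q = 0.1 the weights run from z(0.99) ≈ 2.33 down to z(0.95) ≈ 1.64, so a stronger signal is penalized more heavily, mirroring how BH tests the most significant p-value against the most stringent threshold.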