Scaled minimax optimality in high-dimensional linear regression: A non-convex algorithmic regularization approach.

Author: Mohamed Ndaoud

DOI:

Keywords:

Abstract: The question of fast convergence in the classical problem of high dimensional linear regression has been extensively studied. Arguably, one of the fastest procedures in practice is Iterative Hard Thresholding (IHT). Still, IHT relies strongly on knowledge of the true sparsity parameter $s$. In this paper, we present a novel procedure for estimation in high dimensional linear regression. Taking advantage of the interplay between estimation, support recovery and optimization, we achieve both optimal statistical accuracy and fast convergence. The main advantage of our procedure is that it is fully adaptive, making it more practical than state of the art methods. Our procedure achieves optimal statistical accuracy faster than, for instance, classical algorithms for the Lasso. Moreover, we establish sharp results for support recovery. As a consequence, we obtain a new iterative hard thresholding algorithm that is scaled minimax optimal (it achieves the estimation error of the oracle that knows the sparsity pattern, whenever possible), fast and adaptive.
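For context, the classical IHT baseline the abstract contrasts against alternates a gradient step on the least-squares loss with a projection onto $s$-sparse vectors, and therefore needs the true sparsity $s$ as input. The following is a minimal sketch of that standard baseline (not the paper's adaptive procedure); the function names, step-size choice, and iteration count are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest entries of x in absolute value, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(X, y, s, step=None, n_iter=200):
    """Classical Iterative Hard Thresholding for sparse linear regression.

    Iterates beta <- H_s(beta + step * X^T (y - X beta)), where H_s is the
    top-s hard-thresholding projection. Note: unlike the adaptive procedure
    described in the abstract, this baseline requires the true sparsity s.
    """
    n, p = X.shape
    if step is None:
        # Conservative step size based on the spectral norm of X (an assumption,
        # chosen here only so the illustrative iteration is stable).
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        beta = hard_threshold(beta + step * X.T @ (y - X @ beta), s)
    return beta

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
n, p, s = 100, 500, 5
X = rng.normal(size=(n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[:s] = 5.0
y = X @ beta_true + 0.1 * rng.normal(size=n)
beta_hat = iht(X, y, s)
```

The sketch makes the abstract's point concrete: the projection `hard_threshold(., s)` hard-codes $s$, which is exactly the dependence the paper's fully adaptive procedure is designed to remove.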
