Abstract: Sparse Bayesian learning, and specifically relevance vector machines, have received much attention as a means of achieving parsimonious representations of signals in the context of regression and classification. We provide a simplified derivation of this paradigm from an evidence perspective and apply it to the problem of basis selection from overcomplete dictionaries. Furthermore, we prove that the stable fixed points of the resulting algorithm are necessarily sparse, providing a solid theoretical justification for adapting the methodology to such tasks. We then include simulation studies comparing sparse Bayesian learning with basis pursuit and the more recent FOCUSS class of algorithms, empirically demonstrating superior performance in terms of average sparsity and success rate in recovering generative bases.
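To make the basis-selection setting concrete, the following is a minimal sketch of sparse Bayesian learning via evidence maximization, assuming the standard Gaussian likelihood y = Φw + ε with independent zero-mean Gaussian priors w_i ~ N(0, γ_i) and the classical EM update for the hyperparameters γ. The function name, the fixed noise variance `sigma2`, and the iteration/tolerance parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def sbl_basis_selection(Phi, y, sigma2=1e-4, n_iters=500, tol=1e-8):
    """Sketch of SBL basis selection: y ≈ Phi @ w with w_i ~ N(0, gamma_i).

    Maximizing the evidence over gamma drives most gamma_i toward zero,
    pruning the corresponding dictionary columns and yielding a sparse w.
    """
    n, m = Phi.shape
    gamma = np.ones(m)  # prior variances (hyperparameters)
    for _ in range(n_iters):
        # Work with B = sigma2*I + Phi Gamma Phi^T (n x n) so we never
        # invert Gamma, which becomes near-singular as gammas shrink.
        B = sigma2 * np.eye(n) + (Phi * gamma) @ Phi.T
        Binv = np.linalg.inv(B)
        # Posterior mean and marginal posterior variances of the weights
        mu = gamma * (Phi.T @ (Binv @ y))
        Sigma_diag = gamma - gamma**2 * np.diag(Phi.T @ Binv @ Phi)
        # EM hyperparameter update: gamma_i <- E[w_i^2] = mu_i^2 + Sigma_ii
        gamma_new = mu**2 + Sigma_diag
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return mu, gamma

# Toy usage: recover 3 active columns of an overcomplete dictionary (n < m).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
w_true = np.zeros(50)
w_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = Phi @ w_true + 0.01 * rng.standard_normal(20)
w_hat, gamma = sbl_basis_selection(Phi, y)
```

At convergence the nonzero entries of `gamma` index the selected basis vectors; the paper's theoretical result concerns exactly these fixed points, showing the stable ones are necessarily sparse.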