Abstract: There has been significant interest and progress recently in algorithms that solve regression problems involving tall and thin matrices in input sparsity time. Given an n × d matrix where n ≫ d, these algorithms find an approximation with fewer rows, allowing one to solve a poly(d)-sized problem instead. In practice, the best performances are often obtained by invoking these routines in an iterative fashion. We show that these methods can be adapted to give theoretical guarantees comparable to and better than the current state of the art. Our approaches are based on computing the importances of the rows, known as leverage scores, in an amortized manner. Alternating between short cycles of estimating and finding more accurate approximate leverage scores leads to a series of geometrically smaller instances. This gives an algorithm whose runtime is input sparsity plus an overhead comparable to the cost of solving the problem on the smaller approximation. Our results build upon the close connection between randomized matrix algorithms, iterative methods, and graph sparsification.
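To make the central notion concrete, the following is a minimal sketch of leverage-score row sampling. It is not the paper's iterative, amortized algorithm: it computes exact leverage scores via a dense QR factorization (the expensive step the paper's alternating scheme avoids), then samples rows with probability proportional to those scores and rescales them so the smaller matrix spectrally approximates the original. All function names and parameters here are illustrative assumptions, not taken from the source.

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores of the rows of A: the squared row norms of an
    orthonormal basis Q for the column space of A. They sum to rank(A)."""
    Q, _ = np.linalg.qr(A)  # reduced QR; Q is n x d with orthonormal columns
    return np.sum(Q * Q, axis=1)

def leverage_sample(A, num_samples, rng):
    """Sample rows of A with probability proportional to leverage scores,
    rescaling each kept row by 1/sqrt(num_samples * p_i) so that
    B^T B matches A^T A in expectation."""
    n, _ = A.shape
    scores = leverage_scores(A)
    probs = scores / scores.sum()
    idx = rng.choice(n, size=num_samples, p=probs)
    return A[idx] / np.sqrt(num_samples * probs[idx, None])

# Tall, thin instance: many more rows than columns.
rng = np.random.default_rng(0)
n, d = 20000, 10
A = rng.standard_normal((n, d))

# Keep only 50*d rows instead of n.
B = leverage_sample(A, 50 * d, rng)

# Relative spectral error of B^T B as an approximation of A^T A.
err = (np.linalg.norm(B.T @ B - A.T @ A, 2)
       / np.linalg.norm(A.T @ A, 2))
```

Solving least squares against B instead of A then yields an approximate solution to the original poly(d)-sized problem; the abstract's contribution is obtaining accurate scores without ever paying for the exact factorization used above.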