Authors: Suleyman S. Kozat, Burak C. Civek
DOI:
Keywords: Algorithm, Quadratic equation, Newton's method, Gradient descent, Computer science, Feature vector, Computational complexity theory, Mean squared error
Abstract: We investigate the problem of sequential linear data prediction for real life big data applications. Second order algorithms, i.e., Newton-Raphson methods, asymptotically achieve the performance of the "best" possible predictor much faster than first order algorithms, e.g., Online Gradient Descent. However, the implementation of these second order methods is usually not feasible in big data applications because of their extremely high computational needs. A regular implementation of the Newton-Raphson methods requires a computational complexity of $O(M^2)$ for an $M$-dimensional feature vector, while first order algorithms need only $O(M)$. To eliminate this gap, we introduce a highly efficient implementation that reduces the computational complexity of the Newton-Raphson methods from quadratic to linear scale. The presented algorithm provides the well-known merits of second order methods while offering a computational complexity of $O(M)$. We utilize the shifted nature of consecutive feature vectors and do not rely on any statistical assumptions. Therefore, both the regular and the fast implementations achieve the same performance in the mean square error sense. We demonstrate the computational efficiency of our algorithm on real life sequential big datasets, and we also illustrate that it is numerically stable.
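The abstract contrasts two complexity classes without giving pseudocode, so the following is a minimal Python sketch of that contrast only, not the authors' fast algorithm: an $O(M)$ Online Gradient Descent update and a standard $O(M^2)$ Newton-Raphson/RLS-style update, both run on the shifted feature vectors $u_t = [x_{t-1}, \dots, x_{t-M}]$ the paper exploits. The function names, the step size eta, the regularizer delta, and the synthetic AR(2) stream are all illustrative assumptions; the paper's contribution is precisely reducing the second order update below the $O(M^2)$ cost shown here.

```python
import numpy as np

def run_predictors(x, M=4, eta=0.05, delta=1.0):
    """Sequential linear prediction of a scalar stream x.

    Compares a first order update (Online Gradient Descent, O(M) per step)
    with a second order, Newton-Raphson/RLS-style update (O(M^2) per step)
    on the shifted feature vectors u_t = [x_{t-1}, ..., x_{t-M}].
    """
    w_ogd = np.zeros(M)         # first order weight vector
    w_nr = np.zeros(M)          # second order weight vector
    P = np.eye(M) / delta       # inverse Hessian estimate (regularized)
    err_ogd, err_nr = [], []

    for t in range(M, len(x)):
        u = x[t - M:t][::-1]    # shifted feature vector, most recent sample first
        d = x[t]                # desired next sample

        # First order: Online Gradient Descent on the squared error, O(M) work.
        e = d - w_ogd @ u
        w_ogd = w_ogd + eta * e * u
        err_ogd.append(e * e)

        # Second order: rank-one Sherman-Morrison update of P, the inverse of
        # the (regularized) correlation matrix, giving a Newton-Raphson style
        # step at O(M^2) cost per sample.
        e = d - w_nr @ u
        Pu = P @ u
        P = P - np.outer(Pu, Pu) / (1.0 + u @ Pu)
        w_nr = w_nr + (P @ u) * e
        err_nr.append(e * e)

    return np.mean(err_ogd), np.mean(err_nr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic AR(2) stream as a stand-in for a real life sequential dataset.
    x = np.zeros(5000)
    for t in range(2, len(x)):
        x[t] = 0.8 * x[t - 1] - 0.2 * x[t - 2] + 0.1 * rng.standard_normal()
    mse_ogd, mse_nr = run_predictors(x)
    print(f"MSE, first order OGD,   O(M)   per step: {mse_ogd:.6f}")
    print(f"MSE, second order step, O(M^2) per step: {mse_nr:.6f}")
```

On a stream like this, the second order predictor typically converges to the best linear predictor in far fewer samples, which is the gap in per-step cost versus convergence speed that the paper's linear-time implementation is designed to close.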