Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency

Authors: Jingfeng Wu, Peter L. Bartlett, Matus Telgarsky, Bin Yu

Abstract: We consider gradient descent (GD) with a constant stepsize applied to logistic regression with linearly separable data, where the constant stepsize $\eta$ is so large that the loss initially oscillates. We show that GD exits this initial oscillatory phase rapidly -- in $O(\eta)$ steps -- and subsequently achieves an $\widetilde{O}(1/(\eta t))$ convergence rate after $t$ additional steps. Our results imply that, given a budget of $T$ steps, GD can achieve an accelerated loss of $\widetilde{O}(1/T^2)$ with an aggressive stepsize $\eta := \Theta(T)$, without any use of momentum or variable stepsize schedulers. Our proof technique is versatile and also handles general classification loss functions (where exponential tails are needed for the $\widetilde{O}(1/T^2)$ acceleration), nonlinear predictors in the neural tangent kernel regime, and online stochastic gradient descent (SGD) with a large stepsize, under suitable separability conditions.
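The setup described in the abstract is straightforward to simulate. The sketch below (a minimal illustration on synthetic separable data with illustrative constants, not the authors' experimental setup) runs constant-stepsize GD on the logistic loss with a stepsize scaling as $\Theta(T)$ and records the loss trajectory; the early iterates typically show the non-monotone oscillation before the loss settles into its fast decay.

```python
import numpy as np
from scipy.special import expit

def logistic_loss(w, X, y):
    """Average logistic loss: (1/n) * sum_i log(1 + exp(-y_i <w, x_i>))."""
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss with respect to w."""
    margins = y * (X @ w)
    coeff = -y * expit(-margins)          # -y_i * sigmoid(-y_i <w, x_i>)
    return (coeff[:, None] * X).mean(axis=0)

# Synthetic linearly separable data (hypothetical; not from the paper).
rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] >= 0, 1.0, -1.0)
X[:, 0] += 0.5 * y                        # push points away from the decision boundary: margin >= 0.5

T = 2000                                  # step budget
eta = T / 20                              # large constant stepsize, eta = Theta(T) (illustrative constant)
w = np.zeros(d)

losses = []
for _ in range(T):
    losses.append(logistic_loss(w, X, y))
    w = w - eta * logistic_grad(w, X, y)  # plain GD: no momentum, no stepsize schedule

# Early iterates typically oscillate (the loss is non-monotone), then decay.
print("first losses:", np.round(losses[:5], 4))
print("final loss  :", losses[-1])
```

The loss and gradient use `np.logaddexp` and `expit` so that the very large margins produced by the aggressive stepsize do not overflow; the qualitative behavior (initial oscillation, then monotone decrease) is what the abstract's two-phase analysis predicts, though the exact burn-in length depends on the data margin and the constant in the stepsize.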
