Network Newton-Part II: Convergence Rate and Implementation
Aryan Mokhtari, Alejandro Ribeiro, Qing Ling
arXiv: Optimization and Control, 2015. Cited by 25.

Network Newton-Part I: Algorithm and Convergence
Aryan Mokhtari, Alejandro Ribeiro, Qing Ling
arXiv: Optimization and Control, 2015. Cited by 29.

Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization
Geert Leus, Aryan Mokhtari, Alejandro Ribeiro, Alec Koppel
arXiv: Optimization and Control, 2016. Cited by 1.

Network Newton
Aryan Mokhtari, Alejandro Ribeiro, Qing Ling
Asilomar Conference on Signals, Systems and Computers, 2014. Cited by 11.

Online Optimization in Dynamic Environments: Improved Regret Rates for Strongly Convex Problems
Ali Jadbabaie, Shahin Shahrampour, Aryan Mokhtari, Alejandro Ribeiro
arXiv: Learning, 2016. Cited by 15.

Doubly Random Parallel Stochastic Methods for Large Scale Learning
Aryan Mokhtari, Alejandro Ribeiro, Alec Koppel
arXiv: Learning, 2016. Cited by 2.

A Decentralized Quasi-Newton Method for Dual Formulations of Consensus Optimization
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro
arXiv: Optimization and Control, 2016. Cited by 2.

A Decentralized Second-Order Method for Dynamic Optimization
Aryan Mokhtari, Alejandro Ribeiro, Qing Ling, Wei Shi
arXiv: Optimization and Control, 2016.

Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy
Aryan Mokhtari, Alejandro Ribeiro
arXiv: Learning, 2016. Cited by 1.

A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
Aryan Mokhtari, Alejandro Ribeiro, Alec Koppel
arXiv: Learning, 2016. Cited by 10.

Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Aryan Mokhtari, Alejandro Ribeiro, Mert Gürbüzbalaban
arXiv: Optimization and Control, 2016. Cited by 3.

IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro
arXiv: Optimization and Control, 2017. Cited by 6.

A Second Order Method for Nonconvex Optimization
Santiago Paternain, Aryan Mokhtari, Alejandro Ribeiro
2017. Cited by 2.

Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap
Aryan Mokhtari, Amin Karbasi, Hamed Hassani
arXiv: Optimization and Control, 2017. Cited by 5.

Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
Aryan Mokhtari, Amin Karbasi, Hamed Hassani
arXiv: Optimization and Control, 2018. Cited by 9.

Direct Runge-Kutta Discretization Achieves Acceleration
Ali Jadbabaie, Aryan Mokhtari, Suvrit Sra, Jingzhao Zhang
arXiv: Optimization and Control, 2018. Cited by 11.

Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Aryan Mokhtari, Peilin Zhao, Hui Qian, Tengfei Zhou
arXiv: Machine Learning, 2018. Cited by 5.