Ahmed Khaled
Verified email at princeton.edu - Homepage
Title · Cited by · Year
Tighter Theory for Local SGD on Identical and Heterogeneous Data
A Khaled, K Mishchenko, P Richtárik
AISTATS 2020 (arXiv:1909.04746), 2020
Cited by 408 · 2020
First analysis of local GD on heterogeneous data
A Khaled, K Mishchenko, P Richtárik
arXiv preprint arXiv:1909.04715, 2019
Cited by 169 · 2019
Better theory for SGD in the nonconvex world
A Khaled, P Richtárik
TMLR, 2020
Cited by 146 · 2020
Random Reshuffling: Simple Analysis with Vast Improvements
K Mishchenko, A Khaled, P Richtárik
NeurIPS 2020 (arXiv:2006.05988), 2020
Cited by 120 · 2020
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
A Khaled, O Sebbouh, N Loizou, RM Gower, P Richtárik
JOTA, 2020
Cited by 37 · 2020
Better Communication Complexity for Local SGD
A Khaled, K Mishchenko, P Richtárik
arXiv preprint arXiv:1909.04746v1, 2019
Cited by 28 · 2019
Proximal and federated random reshuffling
K Mishchenko, A Khaled, P Richtárik
International Conference on Machine Learning, 15718-15749, 2022
Cited by 27 · 2022
Gradient descent with compressed iterates
A Khaled, P Richtárik
arXiv preprint arXiv:1909.04716, 2019
Cited by 27 · 2019
Distributed fixed point methods with compressed iterates
S Chraibi, A Khaled, D Kovalev, P Richtárik, A Salim, M Takáč
arXiv preprint arXiv:1912.09925, 2019
Cited by 24 · 2019
Federated optimization algorithms with random reshuffling and gradient compression
A Sadiev, G Malinovsky, E Gorbunov, I Sokolov, A Khaled, K Burlachenko, ...
arXiv preprint arXiv:2206.07021, 2022
Cited by 18 · 2022
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
E Gasanov, A Khaled, S Horváth, P Richtárik
AISTATS 2022 (arXiv:2111.11556), 2021
Cited by 13 · 2021
Applying fast matrix multiplication to neural networks
A Khaled, AF Atiya, AH Abdel-Gawad
Proceedings of the 35th Annual ACM Symposium on Applied Computing, 1034-1037, 2020
Cited by 9 · 2020
DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
A Khaled, K Mishchenko, C Jin
NeurIPS 2023, 2023
Cited by 4 · 2023
Faster federated optimization under second-order similarity
A Khaled, C Jin
ICLR 2023, 2022
Cited by 3 · 2022
Directional Smoothness and Gradient Methods: Convergence and Adaptivity
A Mishkin, A Khaled, Y Wang, A Defazio, RM Gower
arXiv preprint arXiv:2403.04081, 2024
2024
Tuning-Free Stochastic Optimization
A Khaled, C Jin
arXiv preprint arXiv:2402.07793, 2024
2024
A novel analysis of gradient descent under directional smoothness
A Mishkin, A Khaled, A Defazio, RM Gower
OPT 2023: Optimization for Machine Learning, 2023
2023
Articles 1–17