Grigory Malinovsky
Cited by
From local SGD to local fixed-point methods for federated learning
G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtarik
International Conference on Machine Learning, 6692-6701, 2020
ProxSkip: Yes! Local gradient steps provably lead to communication acceleration! Finally!
K Mishchenko, G Malinovsky, S Stich, P Richtárik
International Conference on Machine Learning, 15750-15769, 2022
Variance reduced ProxSkip: Algorithm, theory and application to federated learning
G Malinovsky, K Yi, P Richtárik
Advances in Neural Information Processing Systems 35, 15176-15189, 2022
Distributed proximal splitting algorithms with rates and acceleration
L Condat, G Malinovsky, P Richtárik
Frontiers in Signal Processing 1, 776825, 2022
Federated optimization algorithms with random reshuffling and gradient compression
A Sadiev, G Malinovsky, E Gorbunov, I Sokolov, A Khaled, K Burlachenko, ...
arXiv preprint arXiv:2206.07021, 2022
Server-side stepsizes and sampling without replacement provably help in federated optimization
G Malinovsky, K Mishchenko, P Richtárik
Proceedings of the 4th International Workshop on Distributed Machine …, 2023
Random reshuffling with variance reduction: New analysis and better rates
G Malinovsky, A Sailanbayev, P Richtárik
Uncertainty in Artificial Intelligence, 1347-1357, 2023
Can 5th generation local training methods support client sampling? yes!
M Grudzień, G Malinovsky, P Richtárik
International Conference on Artificial Intelligence and Statistics, 1055-1092, 2023
A guide through the zoo of biased SGD
Y Demidovich, G Malinovsky, I Sokolov, P Richtárik
Advances in Neural Information Processing Systems 36, 2024
TAMUNA: Accelerated federated learning with local training and partial participation
LP Condat, G Malinovsky, P Richtarik
arXiv, 2023
Federated learning with regularized client participation
G Malinovsky, S Horváth, K Burlachenko, P Richtárik
arXiv preprint arXiv:2302.03662, 2023
Improving accelerated federated learning with compression and importance sampling
M Grudzień, G Malinovsky, P Richtárik
arXiv preprint arXiv:2306.03240, 2023
Federated random reshuffling with compression and variance reduction
G Malinovsky, P Richtárik
arXiv preprint arXiv:2205.03914, 2022
Averaged heavy-ball method
MY Danilova, GS Malinovsky
Izhevsk Institute of Computer Science, 2022
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
G Malinovsky, P Richtárik, S Horváth, E Gorbunov
arXiv preprint arXiv:2311.14127, 2023
An optimal algorithm for strongly convex min-min optimization
A Gasnikov, D Kovalev, G Malinovsky
arXiv preprint arXiv:2212.14439, 2022
Minibatch stochastic three points method for unconstrained smooth minimization
S Boucherouite, G Malinovsky, P Richtárik, EH Bergou
Proceedings of the AAAI Conference on Artificial Intelligence 38 (18), 20344 …, 2024
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction
Y Demidovich, G Malinovsky, P Richtárik
arXiv preprint arXiv:2403.06677, 2024
MAST: Model-Agnostic Sparsified Training
Y Demidovich, G Malinovsky, E Shulgin, P Richtárik
arXiv preprint arXiv:2311.16086, 2023