| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Highly accurate model for prediction of lung nodule malignancy with CT scans | JL Causey, J Zhang, S Ma, B Jiang, JA Qualls, DG Politte, F Prior, ... | Scientific Reports 8 (1), 9286 | 208 | 2018 |
| Variational policy gradient method for reinforcement learning with general utilities | J Zhang, A Koppel, AS Bedi, C Szepesvari, M Wang | Advances in Neural Information Processing Systems 33, 4572-4583 | 141 | 2020 |
| On lower iteration complexity bounds for the convex concave saddle point problems | J Zhang, M Hong, S Zhang | Mathematical Programming 194 (1), 901-935 | 122 | 2022 |
| A stochastic composite gradient method with incremental variance reduction | J Zhang, L Xiao | Advances in Neural Information Processing Systems 32 | 70 | 2019 |
| On the convergence and sample efficiency of variance-reduced policy gradient method | J Zhang, C Ni, Z Yu, C Szepesvari, M Wang | Advances in Neural Information Processing Systems 34, 2228-2240 | 66 | 2021 |
| Multilevel composite stochastic optimization via nested variance reduction | J Zhang, L Xiao | SIAM Journal on Optimization 31 (2), 1131-1157 | 63 | 2021 |
| From low probability to high confidence in stochastic convex optimization | D Davis, D Drusvyatskiy, L Xiao, J Zhang | Journal of Machine Learning Research 22 (49), 1-38 | 43* | 2021 |
| Generalization bounds for stochastic saddle point problems | J Zhang, M Hong, M Wang, S Zhang | International Conference on Artificial Intelligence and Statistics, 568-576 | 40 | 2021 |
| Primal-Dual Optimization Algorithms over Riemannian Manifolds: an Iteration Complexity Analysis | J Zhang, S Ma, S Zhang | Mathematical Programming 184, 445-490 | 39 | 2019 |
| A composite randomized incremental gradient method | J Zhang, L Xiao | International Conference on Machine Learning, 7454-7462 | 37 | 2019 |
| A cubic regularized Newton's method over Riemannian manifolds | J Zhang, S Zhang | arXiv preprint arXiv:1805.05565 | 34 | 2018 |
| Cautious Reinforcement Learning via Distributional Risk in the Dual Domain | J Zhang, AS Bedi, M Wang, A Koppel | IEEE Journal on Selected Areas in Information Theory | 32 | 2021 |
| FFT-based gradient sparsification for the distributed training of deep neural networks | L Wang, W Wu, J Zhang, H Liu, G Bosilca, M Herlihy, R Fonseca | Proceedings of the 29th International Symposium on High-Performance Parallel ... | 29 | 2020 |
| Cubic regularized Newton method for the saddle point models: A global and local convergence analysis | K Huang, J Zhang, S Zhang | Journal of Scientific Computing 91 (2), 60 | 26 | 2022 |
| Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization | J Zhang, L Xiao | Mathematical Programming, 1-43 | 24 | 2022 |
| Adaptive stochastic variance reduction for subsampled Newton method with cubic regularization | J Zhang, L Xiao, S Zhang | INFORMS Journal on Optimization 4 (1), 45-64 | 19 | 2022 |
| First-order algorithms without Lipschitz gradient: A sequential local optimization approach | J Zhang, M Hong | arXiv preprint arXiv:2010.03194 | 15* | 2020 |
| A sparse completely positive relaxation of the modularity maximization for community detection | J Zhang, H Liu, Z Wen, S Zhang | SIAM Journal on Scientific Computing 40 (5), A3091-A3120 | 15 | 2018 |
| On the sample complexity and metastability of heavy-tailed policy search in continuous control | AS Bedi, A Parayil, J Zhang, M Wang, A Koppel | Journal of Machine Learning Research 25 (39), 1-58 | 13 | 2024 |
| On the divergence of decentralized nonconvex optimization | M Hong, S Zeng, J Zhang, H Sun | SIAM Journal on Optimization 32 (4), 2879-2908 | 12 | 2022 |