Yaodong Yu
Theoretically Principled Trade-off between Robustness and Accuracy
H Zhang, Y Yu, J Jiao, EP Xing, LE Ghaoui, MI Jordan
International Conference on Machine Learning (ICML 2019), 7472-7482, 2019
Rethinking Bias-Variance Trade-off for Generalization of Neural Networks
Z Yang, Y Yu, C You, J Steinhardt, Y Ma
International Conference on Machine Learning (ICML 2020), 10767-10777, 2020
Learning One-hidden-layer ReLU Networks via Gradient Descent
X Zhang, Y Yu, L Wang, Q Gu
International Conference on Artificial Intelligence and Statistics (AISTATS …, 2019
Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction
Y Yu, KHR Chan, C You, C Song, Y Ma
Advances in Neural Information Processing Systems (NeurIPS 2020), 9422-9434, 2020
Adversarial vision challenge
W Brendel, J Rauber, A Kurakin, N Papernot, B Veliqi, SP Mohanty, ...
The NeurIPS'18 Competition, 129-153, 2020
ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction
KHR Chan, Y Yu, C You, H Qi, J Wright, Y Ma
Journal of Machine Learning Research 23, 1-103, 2022
Data Poisoning Attacks on Multi-Task Relationship Learning
M Zhao, B An, Y Yu, S Liu, SJ Pan
AAAI Conference on Artificial Intelligence (AAAI 2018), 2628-2635, 2018
A primal-dual analysis of global optimality in nonconvex low-rank matrix recovery
X Zhang, L Wang, Y Yu, Q Gu
International Conference on Machine Learning (ICML 2018), 5857-5866, 2018
Boundary thickness and robustness in learning models
Y Yang, R Khanna, Y Yu, A Gholami, K Keutzer, JE Gonzalez, ...
Advances in Neural Information Processing Systems (NeurIPS 2020), 6223-6234, 2020
Third-order Smoothness Helps: Faster Stochastic Optimization Algorithms for Finding Local Minima
Y Yu, P Xu, Q Gu
Advances in Neural Information Processing Systems (NeurIPS 2018), 4530-4540, 2018
Saving gradient and negative curvature computations: Finding local minima more efficiently
Y Yu, D Zou, Q Gu
arXiv preprint arXiv:1712.03950, 2017
Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization
Y Yu, T Lin, E Mazumdar, MI Jordan
International Conference on Artificial Intelligence and Statistics (AISTATS …, 2022
Adversarial robustness of stabilized neural ODEs might be from obfuscated gradients
Y Huang, Y Yu, H Zhang, Y Ma, Y Yao
Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, 2021
CTRL: Closed-loop transcription to an LDR via minimaxing rate reduction
X Dai, S Tong, M Li, Z Wu, M Psenka, KHR Chan, P Zhai, Y Yu, X Yuan, ...
Entropy 24 (4), 456, 2022
On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging
CJ Li, Y Yu, N Loizou, G Gidel, Y Ma, NL Roux, MI Jordan
Proceedings of The 25th International Conference on Artificial Intelligence …, 2022
Understanding generalization in adversarial training via the bias-variance decomposition
Y Yu, Z Yang, E Dobriban, J Steinhardt, Y Ma
arXiv preprint arXiv:2103.09947, 2021
An empirical study of pre-trained vision models on out-of-distribution generalization
Y Yu, H Jiang, D Bahri, H Mobahi, S Kim, AS Rawat, A Veit, Y Ma
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
What you see is what you get: Distributional generalization for algorithm design in deep learning
B Kulynych, YY Yang, Y Yu, J Błasiok, P Nakkiran
arXiv preprint arXiv:2204.03230, 2022
TCT: Convexifying federated learning using bootstrapped neural tangent kernels
Y Yu, A Wei, SP Karimireddy, Y Ma, MI Jordan
arXiv preprint arXiv:2207.06343, 2022
Predicting Out-of-Distribution Error with the Projection Norm
Y Yu, Z Yang, A Wei, Y Ma, J Steinhardt
International Conference on Machine Learning (ICML 2022), 25721-25746, 2022