Xuezhi Wang
Research Scientist, Google DeepMind
Verified email at google.com
Title / Cited by / Year
Chain-of-thought prompting elicits reasoning in large language models
J Wei, X Wang, D Schuurmans, M Bosma, E Chi, Q Le, D Zhou
Neural Information Processing Systems (NeurIPS), 2022
Cited by 10038* (2022)
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research (JMLR), 2023
Cited by 5235 (2023)
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research (JMLR), 2024
Cited by 3100 (2024)
Self-consistency improves chain of thought reasoning in language models
X Wang, J Wei, D Schuurmans, Q Le, E Chi, S Narang, A Chowdhery, ...
ICLR, 2023
Cited by 2445* (2023)
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2192 (2023)
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by 1435 (2023)
Least-to-most prompting enables complex reasoning in large language models
D Zhou, N Schärli, L Hou, J Wei, N Scales, X Wang, D Schuurmans, ...
ICLR, 2023
Cited by 1084 (2023)
Underspecification presents challenges for credibility in modern machine learning
A D'Amour, K Heller, D Moldovan, B Adlam, B Alipanahi, A Beutel, ...
Journal of Machine Learning Research 23 (226), 1-61, 2022
Cited by 820 (2022)
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 689 (2024)
Large language models as optimizers
C Yang, X Wang, Y Lu, H Liu, QV Le, D Zhou, X Chen
ICLR, 2024
Cited by 525* (2024)
Large language models can self-improve
J Huang, SS Gu, L Hou, Y Wu, X Wang, H Yu, J Han
EMNLP, 2023
Cited by 441 (2023)
Fairness without demographics through adversarially reweighted learning
P Lahoti, A Beutel, J Chen, K Lee, F Prost, N Thain, X Wang, EH Chi
Neural Information Processing Systems (NeurIPS), 2020
Cited by 364 (2020)
ToTTo: A Controlled Table-To-Text Generation Dataset
AP Parikh, X Wang, S Gehrmann, M Faruqui, B Dhingra, D Yang, D Das
EMNLP, 2020
Cited by 362 (2020)
Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR, 2023
Cited by 281* (2023)
ESCAPES: evacuation simulation with children, authorities, parents, emotions, and social comparison.
J Tsai, N Fridman, E Bowring, M Brown, S Epstein, GA Kaminka, ...
AAMAS 11, 457-464, 2011
Cited by 257 (2011)
Language models are multilingual chain-of-thought reasoners
F Shi, M Suzgun, M Freitag, X Wang, S Srivats, S Vosoughi, HW Chung, ...
ICLR, 2023
Cited by 238 (2023)
Measuring and reducing gendered correlations in pre-trained models
K Webster, X Wang, I Tenney, A Beutel, E Pitler, E Pavlick, J Chen, E Chi, ...
arXiv preprint arXiv:2010.06032, 2020
Cited by 151 (2020)
Large language models as tool makers
T Cai, X Wang, T Ma, X Chen, D Zhou
ICLR, 2024
Cited by 139 (2024)
Measure and Improve Robustness in NLP Models: A Survey
X Wang, H Wang, D Yang
NAACL, 2022
Cited by 123 (2022)
FreshLLMs: Refreshing large language models with search engine augmentation
T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ...
Findings of ACL, 2024
Cited by 121 (2024)
Articles 1–20