Hyung Won Chung
Title · Cited by · Year
PaLM: Scaling Language Modeling with Pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
arXiv preprint arXiv:2204.02311, 2022
Cited by 2112 · 2022
Scaling Instruction-Finetuned Language Models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
Cited by 789 · 2022
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
arXiv preprint arXiv:2211.05100, 2022
Cited by 713 · 2022
Adversarial Attacks Against Medical Deep Learning Systems
SG Finlayson, HW Chung, IS Kohane, AL Beam
arXiv preprint arXiv:1804.05296, 2018
Cited by 244 · 2018
Energy consumption in desalinating produced water from shale oil and gas extraction
GP Thiel, EW Tow, LD Banchik, HW Chung, JH Lienhard
Desalination 366, 94-112, 2015
Cited by 239 · 2015
Large Language Models Encode Clinical Knowledge
K Singhal, S Azizi, T Tu, SS Mahdavi, J Wei, HW Chung, N Scales, ...
arXiv preprint arXiv:2212.13138, 2022
Cited by 220 · 2022
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
arXiv preprint arXiv:2301.13688, 2023
Cited by 164 · 2023
Energy efficiency of permeate gap and novel conductive gap membrane distillation
J Swaminathan, HW Chung, DM Warsinger, FA AlMarzooqi, HA Arafat
Journal of Membrane Science 502, 171-178, 2016
Cited by 160 · 2016
Energy efficiency of membrane distillation up to high salinity: Evaluating critical system size and optimal membrane thickness
J Swaminathan, HW Chung, DM Warsinger
Applied Energy 211, 715-734, 2018
Cited by 152 · 2018
Membrane distillation model based on heat exchanger theory and configuration comparison
J Swaminathan, HW Chung, DM Warsinger
Applied Energy 184, 491-505, 2016
Cited by 119 · 2016
Multistage vacuum membrane distillation (MSVMD) systems for high salinity applications
HW Chung, J Swaminathan, DM Warsinger
Journal of Membrane Science 497, 128-141, 2016
Cited by 117 · 2016
Combining air recharging and membrane superhydrophobicity for fouling prevention in membrane distillation
DM Warsinger, A Servi, S Van Belleghem, J Gonzalez, J Swaminathan, ...
Journal of Membrane Science 505, 241-252, 2016
Cited by 108 · 2016
Entropy Generation of Desalination Powered by Variable Temperature Waste Heat
DM Warsinger, KH Mistry, KG Nayar, HW Chung, JH Lienhard V
Entropy 17 (11), 7530-7566, 2015
Cited by 108 · 2015
Rethinking embedding coupling in pre-trained language models
HW Chung, T Févry, H Tsai, M Johnson, S Ruder
arXiv preprint arXiv:2010.12821, 2020
Cited by 107 · 2020
UL2: Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, ...
arXiv preprint arXiv:2205.05131, 2022
Cited by 96* · 2022
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization
Y Tay, VQ Tran, S Ruder, J Gupta, HW Chung, D Bahri, Z Qin, ...
arXiv preprint arXiv:2106.12672, 2021
Cited by 93 · 2021
Scaling Up Models and Data with t5x and seqio
A Roberts, HW Chung, A Levskaya, G Mishra, J Bradbury, D Andor, ...
arXiv preprint arXiv:2203.17189, 2022
Cited by 82 · 2022
Do Transformer Modifications Transfer Across Implementations and Applications?
S Narang, HW Chung, Y Tay, W Fedus, T Fevry, M Matena, K Malkan, ...
arXiv preprint arXiv:2102.11972, 2021
Cited by 73 · 2021
UL2: Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
The Eleventh International Conference on Learning Representations, 2022
Cited by 71 · 2022
Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
Y Tay, M Dehghani, J Rao, W Fedus, S Abnar, HW Chung, S Narang, ...
arXiv preprint arXiv:2109.10686, 2021
Cited by 70 · 2021