ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning. V Aribandi, Y Tay, T Schuster, J Rao, HS Zheng, SV Mehta, H Zhuang, ... ICLR 2022. Cited by 198.

Are Pre-trained Convolutions Better than Pre-trained Transformers? Y Tay, M Dehghani, J Gupta, D Bahri, V Aribandi, Z Qin, D Metzler. ACL 2021. Cited by 84.

HyperPrompt: Prompt-based Task-Conditioning of Transformers. Y He, HS Zheng, Y Tay, J Gupta, Y Du, V Aribandi, Z Zhao, YG Li, Z Chen, ... ICML 2022. Cited by 69.

Characterization of Time-variant and Time-invariant Assessment of Suicidality on Reddit using C-SSRS. M Gaur, V Aribandi, A Alambo, U Kursuncu, K Thirunarayan, J Beich, ... PLoS ONE 16 (5), e0250448, 2021. Cited by 36.

OmniNet: Omnidirectional Representations from Transformers. Y Tay, M Dehghani, V Aribandi, J Gupta, P Pham, Z Qin, D Bahri, DC Juan, ... ICML 2021. Cited by 35.

Knowledge-infused Abstractive Summarization of Clinical Diagnostic Interviews: Framework Development Study. G Manas, V Aribandi, U Kursuncu, A Alambo, VL Shalin, K Thirunarayan, ... JMIR Mental Health 8 (5), e20865, 2021. Cited by 26.

How Reliable are Model Diagnostics? V Aribandi, Y Tay, D Metzler. ACL Findings 2021. Cited by 24.

Prediction of Refactoring-Prone Classes Using Ensemble Learning. VK Aribandi, L Kumar, L Bhanu Murthy Neti, A Krishna. Neural Information Processing: 26th International Conference, ICONIP 2019 …. Cited by 2.