Qihuang Zhong
Title · Cited by · Year
Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT
Q Zhong, L Ding, J Liu, B Du, D Tao
Technical report. arXiv preprint arXiv:2302.10198, 2023
Cited by 182* · 2023
Towards Making the Most of ChatGPT for Machine Translation
K Peng, L Ding, Q Zhong, L Shen, X Liu, M Zhang, Y Ouyang, D Tao
Findings of EMNLP 2023, 2023
Cited by 120 · 2023
Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-Based Sentiment Analysis
Q Zhong, L Ding, J Liu, B Du, H Jin, D Tao
IEEE TKDE, 2022
Cited by 62 · 2022
A Contrastive Cross-Channel Data Augmentation Framework for Aspect-Based Sentiment Analysis
B Wang, L Ding, Q Zhong, X Li, D Tao
COLING2022, 2022
Cited by 46 · 2022
Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models
Q Zhong, L Ding, L Shen, P Mi, J Liu, B Du, D Tao
Findings of EMNLP 2022, 2022
Cited by 40 · 2022
SemiText: Scene Text Detection with Semi-Supervised Learning
J Liu, Q Zhong, Y Yuan, H Su, B Du
Neurocomputing 407, 343-353, 2020
Cited by 29 · 2020
Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
Q Zhong, L Ding, Y Zhan, Y Qiao, Y Wen, L Shen, J Liu, B Yu, B Du, ...
Technical report. arXiv preprint arXiv:2212.01853, 2022
Cited by 24 · 2022
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
Q Zhong, L Ding, J Liu, B Du, D Tao
IEEE TKDE, 2024
Cited by 22 · 2024
Token-Level Self-Evolution Training for Sequence-to-Sequence Learning
K Peng, L Ding, Q Zhong, Y Ouyang, W Rong, Z Xiong, D Tao
ACL 2023 Main, 841-850, 2023
Cited by 18 · 2023
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks
H Sun, L Shen, Q Zhong, L Ding, S Chen, J Sun, J Li, G Sun, D Tao
Neural Networks, 2023
Cited by 16 · 2023
Unified Instance and Knowledge Alignment Pretraining for Aspect-Based Sentiment Analysis
J Liu, Q Zhong, L Ding, H Jin, B Du, D Tao
IEEE TASLP, Co-first author, 2021
Cited by 16 · 2021
E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation
Q Zhong, L Ding, J Liu, B Du, D Tao
IEEE TKDE, 2022
Cited by 15 · 2022
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE
Q Zhong, L Ding, K Peng, J Liu, B Du, L Shen, Y Zhan, D Tao
Technical report. arXiv preprint arXiv:2302.09268, 2023
Cited by 8 · 2023
Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Q Zhong, L Ding, J Liu, X Liu, M Zhang, B Du, D Tao
ACL 2023 Main, 2023
Cited by 7 · 2023
Self-Evolution Learning for Discriminative Language Model Pretraining
Q Zhong, L Ding, J Liu, B Du, D Tao
ACL 2023 Findings, 2023
Cited by 6 · 2023
Joint Image and Feature Adaptative Attention-Aware Networks for Cross-Modality Semantic Segmentation
Q Zhong, F Zeng, F Liao, J Liu, B Du, JS Shang
Neural Computing and Applications 35 (5), 3665-3676, 2023
Cited by 5 · 2023
Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models
M Zhu, Q Zhong, L Shen, L Ding, J Liu, B Du, D Tao
EMNLP 2023, Co-first author, 2023
Cited by 2 · 2023
ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding
Q Zhong, L Ding, J Liu, B Du, D Tao
arXiv preprint arXiv:2402.11889, 2024
Cited by 1 · 2024
Revisiting Knowledge Distillation for Autoregressive Language Models
Q Zhong, L Ding, L Shen, J Liu, B Du, D Tao
arXiv preprint arXiv:2402.11890, 2024
Cited by 1 · 2024
Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks
H Zheng, Q Zhong, L Ding, Z Tian, X Niu, D Li, D Tao
EMNLP 2023, Co-first author, 2023
Cited by 1 · 2023