Shengding Hu
Verified email at mails.tsinghua.edu.cn
Title · Cited by · Year
Graph neural networks: A review of methods and applications
J Zhou, G Cui, S Hu, Z Zhang, C Yang, Z Liu, L Wang, C Li, M Sun
AI Open 1, 57-81, 2020
Cited by 5794 · 2020
Parameter-efficient fine-tuning of large-scale pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
Nature Machine Intelligence 5 (3), 220-235, 2023
Cited by 315 · 2023
Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification
S Hu, N Ding, H Wang, Z Liu, J Li, M Sun
Cited by 310 · 2021
OpenPrompt: An open-source framework for prompt-learning
N Ding, S Hu, W Zhao, Y Chen, Z Liu, HT Zheng, M Sun
arXiv preprint arXiv:2111.01998, 2021
Cited by 242 · 2021
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904, 2022
Cited by 184 · 2022
Enhancing chat language models by scaling high-quality instructional conversations
N Ding, Y Chen, B Xu, Y Qin, Z Zheng, S Hu, Z Liu, M Sun, B Zhou
arXiv preprint arXiv:2305.14233, 2023
Cited by 179 · 2023
Prototypical verbalizer for prompt-based few-shot tuning
G Cui, S Hu, N Ding, L Huang, Z Liu
arXiv preprint arXiv:2203.09770, 2022
Cited by 83 · 2022
Graph Policy Network for Transferable Active Learning on Graphs
S Hu, Z Xiong, M Qu, X Yuan, MA Côté, Z Liu, J Tang
NeurIPS'20, 2020
Cited by 62 · 2020
MiniCPM: Unveiling the potential of small language models with scalable training strategies
S Hu, Y Tu, X Han, C He, G Cui, X Long, Z Zheng, Y Fang, Y Huang, ...
arXiv preprint arXiv:2404.06395, 2024
Cited by 30 · 2024
COPEN: Probing conceptual knowledge in pre-trained language models
H Peng, X Wang, S Hu, H Jin, L Hou, J Li, Z Liu, Q Liu
arXiv preprint arXiv:2211.04079, 2022
Cited by 28 · 2022
Decoder-only or encoder-decoder? Interpreting language model as a regularized encoder-decoder
Z Fu, W Lam, Q Yu, AMC So, S Hu, Z Liu, N Collier
arXiv preprint arXiv:2304.04052, 2023
Cited by 20 · 2023
OlympiadBench: A challenging benchmark for promoting AGI with Olympiad-level bilingual multimodal scientific problems
C He, R Luo, Y Bai, S Hu, ZL Thai, J Shen, J Hu, X Han, Y Huang, ...
arXiv preprint arXiv:2402.14008, 2024
Cited by 10 · 2024
Won't get fooled again: Answering questions with false premises
S Hu, Y Luo, H Wang, X Cheng, Z Liu, M Sun
arXiv preprint arXiv:2307.02394, 2023
Cited by 10 · 2023
Sparse structure search for delta tuning
S Hu, Z Zhang, N Ding, Y Wang, Y Wang, Z Liu, M Sun
Advances in Neural Information Processing Systems 35, 9853-9865, 2022
Cited by 9 · 2022
OpenDelta: A plug-and-play library for parameter-efficient adaptation of pre-trained models
S Hu, N Ding, W Zhao, X Lv, Z Zhang, Z Liu, M Sun
arXiv preprint arXiv:2307.03084, 2023
Cited by 8 · 2023
Sparse structure search for parameter-efficient tuning
S Hu, Z Zhang, N Ding, Y Wang, Y Wang, Z Liu, M Sun
arXiv preprint arXiv:2206.07382, 2022
Cited by 8 · 2022
Exploring lottery prompts for pre-trained language models
Y Chen, N Ding, X Wang, S Hu, HT Zheng, Z Liu, P Xie
arXiv preprint arXiv:2305.19500, 2023
Cited by 5 · 2023
Unified view of grokking, double descent and emergent abilities: A perspective from circuits competition
Y Huang, S Hu, X Han, Z Liu, M Sun
arXiv preprint arXiv:2402.15175, 2024
Cited by 4 · 2024
ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models
C Song, X Han, Z Zhang, S Hu, X Shi, K Li, C Chen, Z Liu, G Li, T Yang, ...
arXiv preprint arXiv:2402.13516, 2024
Cited by 4 · 2024
KACC: A multi-task benchmark for knowledge abstraction, concretization and completion
J Zhou, S Hu, X Lv, C Yang, Z Liu, W Xu, J Jiang, J Li, M Sun
arXiv preprint arXiv:2004.13631, 2020
Cited by 4 · 2020