Colossal-AI: A unified deep learning system for large-scale parallel training. S Li, H Liu, Z Bian, J Fang, H Huang, Y Liu, B Wang, Y You. Proceedings of the 52nd International Conference on Parallel Processing, 766-775, 2023. Cited by 105.
Maximizing parallelism in distributed training for huge neural networks. Z Bian, Q Xu, B Wang, Y You. arXiv preprint arXiv:2105.14450, 2021. Cited by 36.
Tesseract: Parallelize the Tensor Parallelism Efficiently. B Wang, Q Xu, Z Bian, Y You. Proceedings of the 51st International Conference on Parallel Processing, 1-11, 2022. Cited by 24.
2.5-dimensional distributed model training. B Wang, Q Xu, Z Bian, Y You. arXiv preprint arXiv:2105.14500, 2021. Cited by 10.
Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations. Z Ma, N Zhao, L Bieske, B Bullwinkel, Y Zhang, Z Luo, S Li, G Liao, ... arXiv preprint arXiv:2405.20195, 2024.