Shiwei Liu
Verified email at tue.nl - Homepage
Title · Cited by · Year
Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware
S Liu, DC Mocanu, ARR Matavalam, Y Pei, M Pechenizkiy
Neural Computing and Applications 33 (7), 2589-2604, 2021
Cited by 43 · 2021
Do we actually need dense over-parameterization? In-time over-parameterization in sparse training
S Liu, L Yin, DC Mocanu, M Pechenizkiy
International Conference on Machine Learning (ICML 2021), 2021
Cited by 18* · 2021
Efficient and effective training of sparse recurrent neural networks
S Liu, I Ni’mah, V Menkovski, DC Mocanu, M Pechenizkiy
Neural Computing and Applications, 1-12, 2021
Cited by 18* · 2021
Selfish sparse RNN training
S Liu, DC Mocanu, Y Pei, M Pechenizkiy
International Conference on Machine Learning (ICML 2021), 2021
Cited by 16 · 2021
Topological Insights into Sparse Neural Networks
S Liu, T Van der Lee, A Yaman, Z Atashgahi, D Ferraro, G Sokar, ...
European Conference on Machine Learning (ECML 2020), 2020
Cited by 13* · 2020
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
Advances in Neural Information Processing Systems (NeurIPS 2021), 2021
Cited by 11 · 2021
A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
Z Atashgahi, J Pieterse, S Liu, DC Mocanu, R Veldhuis, M Pechenizkiy
arXiv preprint arXiv:1903.07138, 2019
Cited by 7* · 2019
Learning Sparse Neural Networks for Better Generalization
S Liu
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Cited by 4 · 2020
On improving deep learning generalization with adaptive sparse connectivity
S Liu, DC Mocanu, M Pechenizkiy
ICML 2019 Workshop on Understanding and Improving Generalization in Deep …, 2019
Cited by 4 · 2019
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
S Liu, T Chen, X Chen, L Shen, DC Mocanu, Z Wang, M Pechenizkiy
The International Conference on Learning Representations (ICLR 2022), 2022
Cited by 3 · 2022
Hierarchical Semantic Segmentation using Psychometric Learning
L Yin, V Menkovski, S Liu, M Pechenizkiy
arXiv preprint arXiv:2107.03212, 2021
Cited by 2 · 2021
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
The International Conference on Learning Representations (ICLR 2022), 2021
Cited by 2* · 2021
Sparse Neural Network Training with In-Time Over-Parameterization
S Liu
PhD thesis, 2022
2022
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
S Liu, Y Tian, T Chen, L Shen
arXiv preprint arXiv:2203.02770, 2022
2022
Achieving Personalized Federated Learning with Sparse Local Models
T Huang, S Liu, L Shen, F He, W Lin, D Tao
arXiv preprint arXiv:2201.11380, 2022
2022
Sparse Unbalanced GAN Training with In-Time Over-Parameterization
S Liu, Y Tian, T Chen, L Shen
2021
FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity (Poster)
S Liu, T Chen, Z Atashgahi, X Chen, GAZN Sokar, E Mocanu, ...
Sparsity in Neural Networks: Advancing Understanding and Practice 2021, 2021
2021
Network Performance Optimization with Real Time Traffic Prediction in Data Center Network
F Yan, S Liu, N Calabretta
2020 European Conference on Optical Communications (ECOC), 1-4, 2020
2020