Daquan Zhou
Verified email at u.nus.edu
Title · Cited by · Year
Coordinate attention for efficient mobile network design
Q Hou, D Zhou, J Feng
CVPR 2021, 2021
Cited by 1251 · 2021
PANet: Few-shot image semantic segmentation with prototype alignment
K Wang, JH Liew, Y Zou, D Zhou, J Feng
proceedings of the IEEE/CVF international conference on computer vision …, 2019
Cited by 636 · 2019
DeepViT: Towards deeper vision transformer
D Zhou, B Kang, X Jin, L Yang, X Lian, Z Jiang, Q Hou, J Feng
arXiv preprint arXiv:2103.11886, 2021
Cited by 311 · 2021
Rethinking bottleneck structure for efficient mobile network design
D Zhou, Q Hou, Y Chen, J Feng, S Yan
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
Cited by 128 · 2020
ConvBERT: Improving BERT with span-based dynamic convolution
Z Jiang, W Yu, D Zhou, Y Chen, J Feng, S Yan
NeurIPS 2020, 2020
Cited by 117 · 2020
All tokens matter: Token labeling for training better vision transformers
ZH Jiang, Q Hou, L Yuan, D Zhou, Y Shi, X Jin, A Wang, J Feng
Advances in neural information processing systems 34, 18590-18602, 2021
Cited by 104 · 2021
Understanding The Robustness in Vision Transformers
D Zhou, Z Yu, E Xie, C Xiao, A Anandkumar, J Feng, JM Alvarez
ICML 2022 (preprint version), 2022
Cited by 65 · 2022
Shunted self-attention via multi-scale token aggregation
S Ren, D Zhou, S He, J Feng, X Wang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 54 · 2022
Progressive tandem learning for pattern recognition with deep spiking neural networks
J Wu, C Xu, X Han, D Zhou, M Zhang, H Li, KC Tan
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (11), 7824 …, 2021
Cited by 54 · 2021
M^2BEV: Multi-camera joint 3D detection and segmentation with unified bird's-eye view representation
E Xie, Z Yu, D Zhou, J Philion, A Anandkumar, S Fidler, P Luo, JM Alvarez
arXiv preprint arXiv:2204.05088, 2022
Cited by 46 · 2022
Token labeling: Training a 85.5% top-1 accuracy vision transformer with 56M parameters on ImageNet
Z Jiang, Q Hou, L Yuan, D Zhou, X Jin, A Wang, J Feng
arXiv preprint arXiv:2104.10858 3 (6), 7, 2021
Cited by 41 · 2021
Refiner: Refining self-attention for vision transformers
D Zhou, Y Shi, B Kang, W Yu, Z Jiang, Y Li, X Jin, Q Hou, J Feng
arXiv preprint arXiv:2106.03714, 2021
Cited by 38 · 2021
Deep Model Reassembly
X Yang, D Zhou, S Liu, J Ye, X Wang
NeurIPS 2022, 2022
Cited by 30 · 2022
Sharpness-aware training for free
J Du, D Zhou, J Feng, V Tan, JT Zhou
Advances in Neural Information Processing Systems 35, 23439-23451, 2022
Cited by 26 · 2022
Coordinate attention for efficient mobile network design. arXiv 2021
Q Hou, D Zhou, J Feng
arXiv preprint arXiv:2103.02907, 2021
Cited by 26 · 2021
MagicVideo: Efficient video generation with latent diffusion models
D Zhou, W Wang, H Yan, W Lv, Y Zhu, J Feng
arXiv preprint arXiv:2211.11018, 2022
Cited by 20 · 2022
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
D Lian, D Zhou, J Feng, X Wang
NeurIPS 2022, 2022
Cited by 18 · 2022
Neural epitome search for architecture-agnostic network compression
D Zhou, X Jin, Q Hou, K Wang, J Yang, J Feng
ICLR 2020, 2019
Cited by 15* · 2019
MagicMix: Semantic Mixing with Diffusion Models
JH Liew, H Yan, D Zhou, J Feng
arXiv preprint arXiv:2210.16056, 2022
Cited by 12 · 2022
AutoSpace: Neural Architecture Search with Less Human Interference
D Zhou, X Jin, X Lian, L Yang, Y Xue, Q Hou, J Feng
ICCV 2021, 2021
Cited by 5 · 2021