An EEG-based multi-modal emotion database with both posed and authentic facial actions for emotion analysis X Li, X Zhang, H Yang, W Duan, W Dai, L Yin 2020 15th IEEE International Conference on Automatic Face and Gesture …, 2020 | 22 | 2020 |
Multi-modal learning for AU detection based on multi-head fused transformers X Zhang, L Yin 2021 16th IEEE International Conference on Automatic Face and Gesture …, 2021 | 8 | 2021 |
Disagreement matters: Exploring internal diversification for redundant attention in generic facial action analysis X Li, Z Zhang, X Zhang, T Wang, Z Li, H Yang, U Ciftci, Q Ji, J Cohn, L Yin IEEE Transactions on Affective Computing, 2023 | 3 | 2023 |
Multimodal Channel-Mixing: Channel and Spatial Masked AutoEncoder on Facial Action Unit Detection X Zhang, H Yang, T Wang, X Li, L Yin WACV 2024; arXiv preprint arXiv:2209.12244, 2022 | 3 | 2022 |
Knowledge-Spreader: Learning Semi-Supervised Facial Action Dynamics by Consistifying Knowledge Granularity X Li, X Zhang, T Wang, L Yin Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023 | 2 | 2023 |
Weakly-Supervised Text-driven Contrastive Learning for Facial Behavior Understanding X Zhang, T Wang, X Li, H Yang, L Yin Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023 | 1 | 2023 |
ReactioNet: Learning High-order Facial Behavior from Universal Stimulus-Reaction by Dyadic Relation Reasoning X Li, T Wang, G Zhao, X Zhang, X Kang, L Yin Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023 | | 2023 |