MotionSqueeze: Neural motion feature learning for video understanding. H Kwon, M Kim, S Kwak, M Cho. Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020. Cited by 153.
Future transformer for long-term action anticipation. D Gong, J Lee, M Kim, SJ Ha, M Cho. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 63.
Learning self-similarity in space and time as generalized motion for video action recognition. H Kwon, M Kim, S Kwak, M Cho. Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021. Cited by 46.
Relational self-attention: What's missing in attention for video understanding. M Kim, H Kwon, C Wang, S Kwak, M Cho. Advances in Neural Information Processing Systems 34, 8046-8059, 2021. Cited by 41.
Learning correlation structures for vision transformers. M Kim, PH Seo, C Schmid, M Cho. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024. Cited by 5.
Electronic device and method with spatio-temporal self-similarity consideration. M Cho, H Kwon, M Kim, S Kwak. US Patent App. 17/977,449, 2024.
Learning Correlation Structures for Vision Transformers: Supplementary Material. M Kim, PH Seo, C Schmid, M Cho.
Relational Self-Attention: What's Missing in Attention for Video Understanding: Supplementary Material. M Kim, H Kwon, C Wang, S Kwak, M Cho.
StructViT: Learning Correlation Structures for Vision Transformers. M Kim, PH Seo, C Schmid, M Cho.
Learning Self-Similarity in Space and Time as Generalized Motion for Video Action Recognition: Supplementary Material. H Kwon, M Kim, S Kwak, M Cho.