Fabian Caba Heilbron
Verified email at kaust.edu.sa
ActivityNet: A large-scale video benchmark for human activity understanding
F Caba Heilbron, V Escorcia, B Ghanem, J Carlos Niebles
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2015
Cited by 2646 · 2015
DAPS: Deep action proposals for action understanding
V Escorcia, F Caba Heilbron, JC Niebles, B Ghanem
Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The …, 2016
Cited by 479 · 2016
Fast temporal activity proposals for efficient detection of human actions in untrimmed videos
FC Heilbron, JC Niebles, B Ghanem
Proceedings of the IEEE conference on computer vision and pattern …, 2016
Cited by 334 · 2016
Temporally distributed networks for fast video semantic segmentation
P Hu, F Caba, O Wang, Z Lin, S Sclaroff, F Perazzi
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 201 · 2020
Real-time semantic segmentation with fast attention
P Hu, F Perazzi, FC Heilbron, O Wang, Z Lin, K Saenko, S Sclaroff
IEEE Robotics and Automation Letters 6 (1), 263-270, 2020
Cited by 124 · 2020
Diagnosing error in temporal action detectors
H Alwassel, FC Heilbron, V Escorcia, B Ghanem
Proceedings of the European conference on computer vision (ECCV), 256-272, 2018
Cited by 115 · 2018
SCC: Semantic context cascade for efficient action detection
FC Heilbron, W Barrios, V Escorcia, B Ghanem
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3175 …, 2017
Cited by 110 · 2017
Action search: Spotting actions in videos and its application to temporal action localization
H Alwassel, FC Heilbron, B Ghanem
Proceedings of the European Conference on Computer Vision (ECCV), 251-266, 2018
Cited by 107 · 2018
Active speakers in context
JL Alcázar, F Caba, L Mai, F Perazzi, JY Lee, P Arbeláez, B Ghanem
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 78 · 2020
MAD: A scalable dataset for language grounding in videos from movie audio descriptions
M Soldan, A Pardo, JL Alcázar, F Caba, C Zhao, S Giancola, B Ghanem
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 71 · 2022
The ActivityNet large-scale activity recognition challenge 2018 summary
B Ghanem, JC Niebles, C Snoek, FC Heilbron, H Alwassel, V Escorcia, ...
arXiv preprint arXiv:1808.03766, 2018
Cited by 65 · 2018
ActivityNet challenge 2017 summary
B Ghanem, JC Niebles, C Snoek, FC Heilbron, H Alwassel, R Khrisna, ...
arXiv preprint arXiv:1710.08011, 2017
Cited by 59 · 2017
RefineLoc: Iterative refinement for weakly-supervised action localization
A Pardo, H Alwassel, F Caba, A Thabet, B Ghanem
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2021
Cited by 56 · 2021
MAAS: Multi-modal assignation for active speaker detection
JL Alcázar, F Caba, AK Thabet, B Ghanem
Proceedings of the IEEE/CVF International Conference on Computer Vision, 265-274, 2021
Cited by 48 · 2021
What do I annotate next? An empirical study of active learning for action localization
FC Heilbron, JY Lee, H Jin, B Ghanem
Proceedings of the European Conference on Computer Vision (ECCV), 199-216, 2018
Cited by 48 · 2018
Robust Manhattan frame estimation from a single RGB-D image
B Ghanem, A Thabet, J Carlos Niebles, F Caba Heilbron
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2015
Cited by 43 · 2015
Collecting and annotating human activities in web videos
F Caba Heilbron, JC Niebles
Proceedings of International Conference on Multimedia Retrieval, 377, 2014
Cited by 40 · 2014
PIVOT: Prompting for video continual learning
A Villa, JL Alcázar, M Alfarra, K Alhamoud, J Hurtado, FC Heilbron, A Soto, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 29 · 2023
MovieCuts: A new dataset and benchmark for cut type recognition
A Pardo, FC Heilbron, JL Alcázar, A Thabet, B Ghanem
European Conference on Computer Vision, 668-685, 2022
Cited by 25 · 2022
vCLIMB: A novel video class incremental learning benchmark
A Villa, K Alhamoud, V Escorcia, F Caba, JL Alcázar, B Ghanem
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 23 · 2022