Laurence Devillers
Professor, Paris-Sorbonne (Paris IV); LIMSI-CNRS
The Geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing
F Eyben, KR Scherer, BW Schuller, J Sundberg, E André, C Busso, ...
IEEE Transactions on Affective Computing 7 (2), 190-202, 2015
The INTERSPEECH 2010 paralinguistic challenge
B Schuller, S Steidl, A Batliner, F Burkhardt, L Devillers, C Müller, ...
Proc. INTERSPEECH 2010, Makuhari, Japan, 2794-2797, 2010
Challenges in real-life emotion annotation and machine learning based detection
L Devillers, L Vidrascu, L Lamel
Neural Networks 18 (4), 407-422, 2005
The HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data
E Douglas-Cowie, R Cowie, I Sneddon, C Cox, O Lowry, M Mcrorie, ...
Affective Computing and Intelligent Interaction: Second International …, 2007
Paralinguistics in speech and language—state-of-the-art and the challenge
B Schuller, S Steidl, A Batliner, F Burkhardt, L Devillers, C Müller, ...
Computer Speech & Language 27 (1), 4-39, 2013
The relevance of feature type for the automatic classification of emotional user states: low level descriptors and functionals
B Schuller, A Batliner, D Seppi, S Steidl, T Vogt, J Wagner, L Devillers, ...
Fear-type emotion recognition for future audio-based surveillance systems
C Clavel, I Vasilescu, L Devillers, G Richard, T Ehrette
Speech Communication 50 (6), 487-503, 2008
Real-life emotions detection with lexical and paralinguistic cues on human-human call center dialogs.
L Devillers, L Vidrascu
Interspeech, 2006
Whodunnit – searching for the most important feature types signalling emotion-related user states in speech
A Batliner, S Steidl, B Schuller, D Seppi, T Vogt, J Wagner, L Devillers, ...
Computer Speech & Language 25 (1), 4-28, 2011
Combining efforts for improving automatic classification of emotional user states
A Batliner, S Steidl, B Schuller, D Seppi, K Laskowski, T Vogt, L Devillers, ...
Detection of real-life emotions in call centers.
L Vidrascu, L Devillers
Interspeech 2005 (10), 1841-1844, 2005
CNN+LSTM architecture for speech emotion recognition with data augmentation
C Etienne, G Fidanza, A Petrovskii, L Devillers, B Schmauch
arXiv preprint arXiv:1802.05630, 2018
Towards a small set of robust acoustic features for emotion recognition: challenges
M Tahon, L Devillers
IEEE/ACM Transactions on Audio, Speech, and Language Processing 24 (1), 16-28, 2015
Emotion detection in task-oriented spoken dialogues
L Devillers, L Lamel, I Vasilescu
2003 International Conference on Multimedia and Expo. ICME'03. Proceedings …, 2003
EmoTV1: Annotation of real-life emotions for the specification of multimodal affective interfaces
S Abrilian, L Devillers, S Buisine, JC Martin
HCI International 401, 407-408, 2005
Dialog in the RAILTEL telephone-based system
S Bennacef, L Devillers, S Rosset, L Lamel
Proceeding of Fourth International Conference on Spoken Language Processing …, 1996
Multimodal databases of everyday emotion: facing up to complexity.
E Douglas-Cowie, L Devillers, JC Martin, R Cowie, S Savvidou, S Abrilian, ...
Interspeech, 813-816, 2005
Multimodal complex emotions: Gesture expressivity and blended facial expressions
JC Martin, R Niewiadomski, L Devillers, S Buisine, C Pelachaud
International Journal of Humanoid Robotics 3 (03), 269-291, 2006
The automatic recognition of emotions in speech
A Batliner, B Schuller, D Seppi, S Steidl, L Devillers, L Vidrascu, T Vogt, ...
Emotion-Oriented Systems: The Humaine Handbook, 71-99, 2011
Real life emotions in French and English TV video clips: an integrated annotation protocol combining continuous and discrete approaches.
L Devillers, R Cowie, JC Martin, E Douglas-Cowie, S Abrilian, M McRorie
LREC, 1105-1110, 2006