Équipe AIMAC (Artificial Intelligence for Multimodal Affective Computing)

In the area of affective computing, we analyze, synthesize, and track emotions. We develop artificial intelligence methods for multimodal data processing (image, voice, text).
Real-time expression tracking in the emotion space


Keywords: Emotion analysis and synthesis; image, speech, and text processing; deep generative modeling; latent representation learning; mental images; mental workload; stress analysis; micro-expression spotting and analysis.


The team proposes a new representation of emotions that makes it possible to track a person's emotional state over time. This real-time analysis of a user's emotions is multimodal: it exploits voice, context, gestures, and facial expressions. We develop methods to learn latent emotional representations using deep learning techniques, which are applied in a medical context. The research work of the AIMAC team has led to the creation of four startups: Dynamixyz (performance capture), 3D Sound Labs (binaural reproduction), Immersive Therapy (tinnitus app), and Emobot (diagnosis and continuous monitoring of mood disorders).


Awards

2019 First place in the Micro-Expression Grand Challenge (IEEE Automatic Face & Gesture Recognition, FG 2019) for Jingting Li (PhD student, 2019).

2016 Fourth place in the Depression Challenge (International Workshop on Audio/Visual Emotion Challenge, AVEC 2016) for Raphaël Weber (PhD student, 2016).

2016 Honorable Mention at Eurographics 2016, as part of the Günter Enderle Award, for the paper by Vincent Barrielle (PhD student, 2016).

Research Projects


ANR collaborative research project (2024-2028)

aiMotions aims to develop a new generation of artificial intelligence (AI) methods for the analysis of food-related emotions.

Leader: Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB)

Partners: IETR (UMR CNRS 6164), Centre des Sciences du Goût et de l’Alimentation (CSGA)



ANR young researcher (JCJC) project (2024-2027)

DEGREASE aims to develop speech enhancement methods that can leverage real unlabeled noisy and reverberant speech recordings at training time and that can adapt to new unseen acoustic conditions at test time. At the crossroads of audio signal processing, probabilistic graphical modeling, and deep learning, the DEGREASE project proposes a methodology based on deep generative and inference models specifically designed for the processing of multi-microphone speech signals.

Leader: IETR (UMR CNRS 6164)



Écoles Universitaires de Recherche Digital Sport Sciences

Digisport's objective is to create an international graduate school, the first of its kind, for training and research in sports and digital sciences. Dedicated chairs will host recognized international researchers who will work with students, especially at summer schools.

Leader: Benoit Bideau

Partners: Universities of Rennes 1 and 2, ENS Rennes, INSA Rennes, CentraleSupélec, ENSAI, CNRS joint laboratories (IRISA, IETR, IRMAR, CREST)



Brittany Region collaborative project "Au Croisement des Filières" (2020-2022)

This project seeks to offer a virtual reality experience to customers of water parks. Equipped with a fully waterproof virtual reality headset, the visitor boards a buoy and embarks on an adventure where the real and the virtual mix.

Leader: Polymorph

Partners: Cimtech, CentraleSupélec



National Collaborative Generic Project ANR 2018-2021

Project REFLETS builds novel health technology able to channel the psychological mechanisms of facial and vocal emotional feedback, for clinical application to post-traumatic stress disorder (PTSD) as well as well-being applications in the general population.

Leader: CentraleSupélec

Partners: IRCAM, Dynamixyz, HumanEvo, Chanel, CognacG



Teaching and Research Chair 2018-2023

Led by Céline Hudelot, this chair aims to put artificial intelligence at the service of recruitment. It addresses three challenges relating to the world of work: (1) represent the candidate in a more complete and precise manner, (2) design and optimize the matching between candidate and offer while ensuring algorithmic non-discrimination, and (3) analyze the candidate's emotions, in particular stress, to help them present themselves better on camera. The FAST team is responsible for this third area of work.

Leader: Céline Hudelot (MICS team)

Partners: Randstad, Illuin Technology, CentraleSupélec



Fondation de l'Avenir 2019-2020

Intraoperative localization of the neurons involved in emotion processing. Using Dynamixyz tools, the FAST team produces animations of expressive faces and analyzes in real time the direction of the patient's gaze while viewing the animations in a virtual reality headset.

Leader: Professor Menei (CHU Angers)

Partners: CentraleSupélec