Introduction
The RECOLA multimodal database was designed by a collaborative group of researchers in informatics (Document, Image and Voice Analysis group) and psychology (Cognitive Ergonomics and Work Psychology group) at the Université de Fribourg, Switzerland. It was created within the framework of the IM2 National Centre of Competence in Research (NCCR), with the objective of developing a computer-mediated communication tool that exploits automatic sensing of spontaneous human behaviors to augment interaction with affective feedback: the EmotiBoard project.
Overview
The database consists of 9.5 hours of audio, visual, and physiological (electrocardiogram and electrodermal activity) recordings of online dyadic interactions between 46 French-speaking participants who were collaboratively solving a task. The affective and social behaviors naturally expressed by the participants were self-reported at different steps of the study, and were also rated by six French-speaking assistants using the web-based annotation tool ANNEMO, which provides time- and value-continuous annotations, for the first five minutes of each interaction; this yields 3.8 hours of annotated audiovisual data and 2.9 hours of annotated multimodal data.
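Such time-continuous ratings are typically fused into a single gold-standard trace before a recognition system is trained or evaluated. The sketch below illustrates one simple approach, averaging the six annotator traces; the file names and column layout here are hypothetical rather than the actual RECOLA release format, and published baselines often use more elaborate fusion schemes (e.g., evaluator-weighted estimation).

```python
import pandas as pd

# Hypothetical layout: one CSV per annotator, each with a 'time' column
# (seconds) and an 'arousal' column sampled at a fixed rate over the
# first five minutes of an interaction.
ANNOTATOR_FILES = [f"arousal_annotator_{i}.csv" for i in range(1, 7)]

def load_gold_standard(files):
    """Fuse several time-continuous traces into one gold standard."""
    traces = [pd.read_csv(f).set_index("time")["arousal"] for f in files]
    ratings = pd.concat(traces, axis=1)  # one column per annotator
    return ratings.mean(axis=1)          # simple mean as the gold standard

# gold = load_gold_standard(ANNOTATOR_FILES)
```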
Amongst the 46 participants, 34 gave their consent to share their data outside of the consortium. This dataset was used for benchmarking emotion recognition systems in several editions of the Audio/Visual Emotion Challenge (AVEC): AV+EC'15, AVEC'16, and AVEC'18. Data from 23 subjects (the training and development partitions) are publicly available, whereas the annotations of the participants in the test partition are not, and will not be made, publicly available; performance evaluation on the test partition can be provided instead by following the AVEC guidelines.
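For reference, the AVEC benchmarks built on RECOLA score time-continuous predictions of emotional dimensions against the gold standard with the concordance correlation coefficient (CCC), which penalizes both poor correlation and bias in mean or scale. A minimal NumPy sketch of the metric follows.

```python
import numpy as np

def concordance_cc(prediction: np.ndarray, gold: np.ndarray) -> float:
    """Concordance correlation coefficient between a predicted and a
    gold-standard rating trace; 1 means perfect agreement, 0 none."""
    mean_p, mean_g = prediction.mean(), gold.mean()
    covariance = np.mean((prediction - mean_p) * (gold - mean_g))
    return 2.0 * covariance / (
        prediction.var() + gold.var() + (mean_p - mean_g) ** 2
    )
```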
The RECOLA database is the first of its kind and should be of interest to any research group working on the automatic sensing of social and affective behaviors expressed by humans in real-life conditions from multimodal cues.