Academics and non-profit organisations
The RECOLA Database is available for free to academics and non-profit organisations under the terms of the End User License Agreement (EULA). To obtain a user account, complete and send all items of the form below along with a signed EULA. Requests must come from a permanent researcher (Associate Professor or full Professor) working at a state university or similar institution, using an institutional email address.
Form to send as plain text (in the body of your email) along with the EULA:
- First name: Fabien
- Last name: Ringeval
- Institution: Laboratoire d'Informatique de Grenoble
- Country: France
- Academic email address: email@example.com
- Proof of permanent affiliation (e.g., academic homepage): https://www.liglab.fr/fr/util/annuaire?prenom=Fabien&nom=RINGEVAL
Please note that we do not have the bandwidth to process requests immediately; the most recently processed request is dated 26/05/2022.
To make the process easier, make sure that your request does not fall into one of the following cases, which we cannot accept:
- EULA signed by a non-permanent researcher
- Form is missing, incomplete, or does not match the EULA
- Request is sent from a non-institutional email address, e.g., @gmail.com
- Proof of affiliation is incorrect, e.g., the university's homepage is given instead of the applicant's
A licence is now available for using the RECOLA Database for commercial purposes. Please contact us for pricing and conditions of use.
Recordings of the database and annotations:
F. Ringeval, A. Sonderegger, J. Sauer and D. Lalanne, "Introducing the RECOLA Multimodal Corpus of Remote Collaborative and Affective Interactions", 2nd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE), in Proc. of IEEE Face & Gestures 2013, Shanghai, China, April 22-26, 2013.
Original feature sets from the main repository:
F. Ringeval, F. Eyben, E. Kroupi, A. Yuce, J.-P. Thiran, T. Ebrahimi, D. Lalanne and B. Schuller: "Prediction of Asynchronous Dimensional Emotion Ratings from Audio-visual and Physiological Data", Pattern Recognition Letters, ELSEVIER, vol. 66, pp. 22-30, November 2015.
Feature sets and baseline system from AV+EC'15:
F. Ringeval, B. Schuller, M. Valstar, S. Jaiswal, E. Marchi, D. Lalanne, R. Cowie and M. Pantic: "AV+EC 2015 – The First Affect Recognition Challenge Bridging Across Audio, Video, and Physiological Data", in Proc. of the 5th Audio/Visual Emotion Challenge and Workshop (AV+EC’15), pp. 3-8, ACM MM, Brisbane, Australia, October 2015.
Feature sets and baseline system from AVEC'16:
M. Valstar, J. Gratch, B. Schuller, F. Ringeval, D. Lalanne, M. Torres Torres, S. Scherer, G. Stratou, R. Cowie, and M. Pantic: "AVEC 2016 – Depression, Mood, and Emotion Recognition Workshop and Challenge", in Proc. of the 6th Audio/Visual Emotion Challenge and Workshop (AVEC’16), pp. 3-10, ACM MM, Amsterdam, The Netherlands, October 2016.
Feature sets and baseline system from AVEC'18:
F. Ringeval, B. Schuller, M. Valstar, R. Cowie, H. Kaya, M. Schmitt, S. Amiriparian, N. Cummins, D. Lalanne, A. Michaud, E. Çiftçi, H. Güleç, A. Ali Salah, and M. Pantic, "AVEC 2018 Workshop and Challenge: Bipolar Disorder and Cross-Cultural Affect Recognition", in Proc. of the 8th Audio/Visual Emotion Challenge and Workshop (AVEC’18), October 22, 2018, Seoul, Republic of Korea. ACM, New York, NY, USA.
Annotations and experiments on laughter events:
R. B. Kantharaju, F. Ringeval and L. Besacier, "Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals", in Proc. of the ACM 20th International Conference on Multimodal Interaction (ICMI'18), Boulder, CO, USA.