Multimodal emotion recognition: A comprehensive review, trends, and challenges

Manju Priya Arthanarisamy Ramaswamy, Suja Palaniswamy

Wiley Interdisciplinary Reviews Data Mining and Knowledge Discovery, Journal Year: 2024, Volume and Issue: 14(6)

Published: Oct. 8, 2024

Abstract Automatic emotion recognition is a burgeoning field of research with its roots in psychology and cognitive science. This article comprehensively reviews multimodal emotion recognition, covering various aspects such as emotion theories, discrete and dimensional models, emotional response systems, datasets, and current trends. We reviewed 179 literature papers from 2017 to 2023 to reflect on the trends in affective computing. The review covers the modalities used, organized by emotional response system under four categories: subjective experience, comprising text and self‐report; peripheral physiology, comprising electrodermal, cardiovascular, facial muscle, and respiration activity; central physiology, comprising EEG, neuroimaging, and EOG; and behavior, comprising facial, vocal, and whole‐body behavior, and observer ratings. The review summarizes the measures of each modality for emotional states and provides an extensive list of datasets with their unique characteristics. The recent advances are grouped into focus areas such as elicitation strategy, data collection and handling, the impact of culture, feature extraction, feature selection, alignment of signals across modalities, and fusion strategies. Fusion strategies are detailed in this article, as extracting shared representations across different modalities, removing redundant features, and learning critical features from each modality are crucial for multimodal emotion recognition. The strengths and weaknesses of each outcome are summarized, along with challenges and future work. This article aims to serve as a lucid introduction for novices. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction; Technologies > Cognitive Computing; Technologies > Artificial Intelligence

Language: English

A physiological signal database of children with different special needs for stress recognition
Buket Coşkun, Sevket Ay, Duygun Erol Barkana

et al.

Scientific Data, Journal Year: 2023, Volume and Issue: 10(1)

Published: June 14, 2023

Abstract This study presents a new dataset, AKTIVES, for evaluating methods of stress detection and game-reaction recognition using physiological signals. We collected data from 25 children with obstetric brachial plexus injury, dyslexia, intellectual disabilities, or typical development during game therapy. A wristband was used to record physiological signals (blood volume pulse (BVP), electrodermal activity (EDA), and skin temperature (ST)). Furthermore, the facial expressions of the children were recorded. Three experts watched the children's videos and labeled the data "Stress/No Stress" and "Reaction/No Reaction" according to the videos. The technical validation supported the high quality of the signals and showed consistency between the experts.

Language: English

Citations

10

A Review of 25 Spontaneous and Dynamic Facial Expression Databases of Basic Emotions
Hyunwoo Kim, Yifan Bian, Eva G. Krumhuber

et al.

Affective Science, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 15, 2025

Abstract Most prior research on basic emotions has relied upon posed, static displays that do not accurately reflect the facial behavior seen in everyday life. To address this gap, the present paper aims to highlight existing facial expression databases (FEDBs) that feature spontaneous and dynamic displays of the six basic emotions. To assist readers in their decisions about stimulus selection, we comprehensively review 25 FEDBs in terms of three key dimensions: (a) conceptual features, which concern thematic approaches to database construction and validation, i.e., emotional content and elicitation procedures, encoder demographics, and measurement techniques; (b) technical features, which concern technological aspects of database development, i.e., stimulus numbers and duration, frame rate, and resolution; and (c) practical features, which entail information about database access and potential ethical restrictions. Finally, we outline some remaining challenges in database generation and make recommendations for future research.

Language: English

Citations

0

Integrated CNN-LSTM Model for Emotion Detection Using Physiological Signals from Wearables

Ankita, Inderveer Singh, Durgesh Nandini

et al.

Lecture notes in networks and systems, Journal Year: 2025, Volume and Issue: unknown, P. 363 - 374

Published: Jan. 1, 2025

Language: English

Citations

0

Use of HRV and EDA in Emotion Recognition. An Experimental Study Applied to Political and Electoral Behaviour
David Córdoba, Ángel Cazorla Martín, Sandra Soriano Moreno

et al.

Smart innovation, systems and technologies, Journal Year: 2025, Volume and Issue: unknown, P. 517 - 527

Published: Jan. 1, 2025

Language: English

Citations

0

CNN in Neural Networks for Image-based Face Emotion Identification on Recognition Datasets

Monalisa Hati

Research Square, Journal Year: 2025, Volume and Issue: unknown

Published: April 15, 2025

Abstract Because facial expressions can vary greatly, it can be challenging to identify emotions from face photographs. Prior studies on the use of deep learning models for image-based emotion classification have been conducted on a variety of datasets with a restricted range of expressions. The Recognition dataset, which contains ten target emotions (amusement, awe, enthusiasm, liking, surprise, anger, contempt, fear, sorrow, and neutral), is used in this work to extend the application of facial emotion recognition (FER). To transform the video data into photos and enhance the data, a number of preparation steps were taken. This paper suggests two methods for creating Convolutional Neural Network (CNN) models: transfer learning (fine-tuning) of the pre-trained Inception V3 and MobileNet V2 models, and building a model from scratch using the Taguchi technique to determine a reliable combination of hyperparameter settings. With an average accuracy and F1-score of 96% and 0.95, respectively, on the test set, the suggested model showed good performance across the experimental procedures.

Language: English

Citations

0

Travel experience in public transport: Experience sampling and cardiac activity data for spatial analysis
Esther Bosch, Anna Luther, Klas Ihme

et al.

Scientific Data, Journal Year: 2025, Volume and Issue: 12(1)

Published: April 15, 2025

Language: English

Citations

0

Personalization of Affective State Recognition from Physiological Signals: A Review
Bartosz Perz, Przemysław Kazienko

Communications in computer and information science, Journal Year: 2025, Volume and Issue: unknown, P. 143 - 158

Published: Jan. 1, 2025

Language: English

Citations

0

An ensemble deep learning framework for emotion recognition through wearable devices multi-modal physiological signals

Durgesh Nandini, Jyoti Yadav, Vijander Singh

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: May 18, 2025

Abstract The widespread availability of miniaturized wearable fitness trackers has enabled the monitoring of various essential health parameters. Utilizing this technology for precise emotion recognition during human-computer interactions can facilitate authentic, emotionally aware, and contextual communication. In this paper, an emotion recognition system is proposed, for the first time, to conduct experimental analysis on both discrete and dimensional emotion models. An ensemble deep learning architecture is considered that consists of Long Short-Term Memory and Gated Recurrent Unit models to effectively capture dynamic temporal dependencies within emotional data sequences. The publicly available wearable-device EMOGNITION database is utilized for result reproducibility and comparison. It includes physiological signals recorded using the Samsung Galaxy Watch, the Empatica E4 wristband, and the MUSE 2 Electroencephalogram (EEG) headband for a comprehensive understanding of emotions. A detailed comparison of all three dedicated devices has been carried out to identify nine emotions, exploring different bio-signal combinations, achieving average classification accuracies of 99.14% and 99.41%, respectively. The performance of each device is also examined on the 2D Valence-Arousal dimensional model. Results reveal accuracies of 97.81% and 72.94% for the Valence and Arousal dimensions, respectively. The acquired results demonstrate promising outcomes in emotion recognition when compared with state-of-the-art methods.
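The ensemble above fuses recurrent models; one common fusion scheme is soft voting over the models' class-probability outputs. A minimal sketch of that idea, with fixed illustrative arrays standing in for trained LSTM/GRU predictions (the numbers and the averaging scheme are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

# Hypothetical class-probability outputs from two sequence models
# (e.g., an LSTM head and a GRU head) for 4 samples over 3 emotion classes.
p_lstm = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.3, 0.3, 0.4],
                   [0.2, 0.5, 0.3]])
p_gru = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.5, 0.2, 0.3],
                  [0.1, 0.6, 0.3]])

# Soft voting: average the two probability distributions per sample,
# then predict the class with the highest ensemble probability.
p_ens = (p_lstm + p_gru) / 2.0
labels = p_ens.argmax(axis=1)
print(labels.tolist())  # predicted class index per sample
```

A weighted average (weighting the stronger model more heavily) is a common variant of the same scheme.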

Language: English

Citations

0

A multi-modal driver emotion dataset and study: Including facial expressions and synchronized physiological signals
Guoliang Xiang, Song Yao, Hanwen Deng

et al.

Engineering Applications of Artificial Intelligence, Journal Year: 2023, Volume and Issue: 130, P. 107772 - 107772

Published: Dec. 27, 2023

Language: English

Citations

9

A Multimodal Dataset for Mixed Emotion Recognition
Pei Yang, Niqi Liu, Xinge Liu

et al.

Scientific Data, Journal Year: 2024, Volume and Issue: 11(1)

Published: Aug. 5, 2024

Mixed emotions have attracted increasing interest recently, but existing datasets rarely focus on mixed emotion recognition from multimodal signals, hindering the affective computing of mixed emotions. On this basis, we present a multimodal dataset with four kinds of signals recorded while participants watched mixed and non-mixed emotion videos. To ensure effective induction, we first implemented a rule-based video filtering step to select videos that could elicit stronger positive, negative, and mixed emotions. Then, an experiment with 80 participants was conducted, in which EEG, GSR, PPG, and frontal face video data were recorded while they watched the selected clips. We also collected subjective emotional ratings on the PANAS, VAD, and amusement-disgust dimensions. In total, the dataset consists of signal and self-assessment data from 73 participants. We provide technical validations for emotion induction and mixed emotion classification from physiological signals. The average accuracy of the 3-class classification (i.e., positive, negative, and mixed) can reach 80.96% when using an SVM with features from all modalities, which indicates the possibility of identifying mixed emotional states.
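The 3-class result above (positive / negative / mixed) comes from an SVM over fused multimodal features. A minimal sketch of that kind of pipeline on synthetic feature vectors; the feature dimensionality, class separation, and hyperparameters here are illustrative assumptions, not the dataset's actual features or the authors' settings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "fused" feature vectors: imagine EEG band powers plus GSR/PPG
# statistics concatenated per trial; 3 classes: positive, negative, mixed.
n_per_class, n_features = 60, 12
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize the features, then fit an RBF-kernel SVM (a common default).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"3-class accuracy: {acc:.2f}")
```

Standardizing before the SVM matters here because multimodal features (e.g., EEG power vs. skin-conductance amplitude) live on very different scales.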

Language: English

Citations

3