Facial Biosignals Time–Series Dataset (FBioT): A Visual–Temporal Facial Expression Recognition (VT-FER) Approach
D. M. Souza, Charlan Dellon da Silva Alves, Jés de Jesus Fiais Cerqueira

et al.

Electronics, Journal Year: 2024, Volume and Issue: 13(24), P. 4867 - 4867

Published: Dec. 10, 2024

Visual biosignals can be used to analyze human behavioral activities and serve as a primary resource for Facial Expression Recognition (FER). FER computational systems face significant challenges arising from both spatial and temporal effects. Spatial challenges include deformations or occlusions of facial geometry, while temporal challenges involve discontinuities in motion observation due to the high variability of poses and dynamic conditions such as rotation and translation. To enhance the analytical precision and validation reliability of FER systems, several datasets have been proposed. However, most of these focus primarily on spatial characteristics, rely on static images, or consist of short videos captured in highly controlled environments. These constraints significantly reduce their applicability to real-world scenarios. This paper proposes the Facial Biosignals Time–Series Dataset (FBioT), a novel dataset providing temporal descriptors and features extracted from common videos recorded in uncontrolled environments. To automate its construction, we propose Visual–Temporal Facial Expression Recognition (VT-FER), a method that stabilizes temporal effects using normalized measurements based on the principles of the Facial Action Coding System (FACS) and generates signature patterns of expression movements for correlation with emotional events. To demonstrate feasibility, we applied VT-FER to create a pilot version of the FBioT dataset. This resulted in approximately 10,000 s of public videos captured under real-world conditions, from which 22 direct and virtual metrics representing facial muscle deformations were extracted. During this process, we preliminarily labeled and qualified 3046 temporal events in two emotion classes. As a proof of concept, these classes were used as input for training neural networks, with the results summarized and available in an open-source online repository.
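The core idea of the abstract above, turning per-frame facial landmarks into scale-stabilized distance metrics and stacking them over time as a biosignal, can be sketched as follows. This is a minimal illustration, not the paper's actual VT-FER pipeline: the landmark pairs, the reference pair, and the toy data are all hypothetical stand-ins for the 22 metrics described in the paper.

```python
import numpy as np

def normalized_metric(landmarks, i, j, ref_i, ref_j):
    """Distance between two facial landmarks, normalized by a
    reference distance (e.g. inter-ocular) so that head translation
    and scale changes do not swamp the muscle-deformation signal."""
    d = np.linalg.norm(landmarks[i] - landmarks[j])
    ref = np.linalg.norm(landmarks[ref_i] - landmarks[ref_j])
    return d / ref

def biosignal_series(frames, pairs, ref_pair):
    """Turn a sequence of per-frame landmark arrays into one time
    series per metric pair -- the 'visual biosignal' representation."""
    return np.array([
        [normalized_metric(lm, i, j, *ref_pair) for i, j in pairs]
        for lm in frames
    ])

# Toy example: 3 frames, 4 landmarks in 2-D (random stand-in data).
rng = np.random.default_rng(0)
frames = rng.random((3, 4, 2))
series = biosignal_series(frames, pairs=[(0, 1), (2, 3)], ref_pair=(0, 3))
print(series.shape)  # one row per frame, one column per metric
```

Temporal signature patterns for expression events would then be segmented from such series rather than from raw pixel data.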

Language: English

Lightweight emotion analysis solution using tiny machine learning for portable devices

Maocheng Bai,

Xiaosheng Yu

Computers & Electrical Engineering, Journal Year: 2025, Volume and Issue: 123, P. 110038 - 110038

Published: Jan. 10, 2025

Language: English

Citations: 0

Multi-Head Attention Affinity and Diversity Sharing Network for Facial Expression Recognition
Caixia Zheng, Jiayu Liu, Wei Zhao

et al.

Electronics, Journal Year: 2024, Volume and Issue: 13(22), P. 4410 - 4410

Published: Nov. 11, 2024

Facial expressions exhibit inherent similarities, variability, and complexity. In real-world scenarios, challenges such as partial occlusions, illumination changes, and individual differences further complicate the task of facial expression recognition (FER). To improve the accuracy of FER, a Multi-head Attention Affinity and Diversity Sharing Network (MAADS) is proposed in this paper. MAADS comprises a Feature Discrimination Network (FDN), an Attention Distraction Network (ADN), and a Shared Fusion Network (SFN). To be specific, FDN first integrates attention weights into the objective function to capture the most discriminative features by using a sparse affinity loss. Then, ADN employs multiple parallel attention networks to maximize the diversity within spatial units and channel units, which guides the network to focus on distinct, non-overlapping regions. Finally, SFN deconstructs features into generic parts and unique parts, which allows the network to learn distinctions between them without having to relearn the complete features from scratch. To validate the effectiveness of the method, extensive experiments were conducted on several widely used in-the-wild datasets including RAF-DB, AffectNet-7, AffectNet-8, FERPlus, and SFEW. MAADS achieves 92.93%, 67.14%, 64.55%, 91.58%, and 62.41% accuracy on these datasets, respectively. The experimental results indicate that MAADS not only outperforms current state-of-the-art methods but also has relatively low computational cost.
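The diversity objective described above, steering parallel attention heads toward distinct, non-overlapping regions, is commonly realized as a penalty on the pairwise overlap of the heads' attention maps. The sketch below shows one such penalty (mean pairwise cosine similarity) in plain numpy; it is an assumed formulation for illustration, not the exact loss used by MAADS.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def diversity_penalty(att_maps):
    """Mean pairwise cosine similarity between flattened attention
    maps. Minimizing this term pushes the heads apart, so each one
    attends to a distinct spatial region."""
    k = att_maps.shape[0]
    flat = att_maps.reshape(k, -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = flat @ flat.T                      # k x k cosine-similarity matrix
    off_diag = sim[~np.eye(k, dtype=bool)]   # drop self-similarity
    return off_diag.mean()

# Toy example: 3 attention heads over an 8x8 spatial grid.
rng = np.random.default_rng(1)
logits = rng.normal(size=(3, 8, 8))
maps = softmax(logits.reshape(3, -1)).reshape(3, 8, 8)
penalty = diversity_penalty(maps)
print(penalty)
```

In training, this penalty would be added to the classification loss with a weighting coefficient, trading discriminative accuracy against head diversity.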

Language: English

Citations: 0
