Wiley Interdisciplinary Reviews Data Mining and Knowledge Discovery, Journal Year: 2024, Volume and Issue: 14(6), Published: Oct. 8, 2024
Abstract
Automatic emotion recognition is a burgeoning field of research and has its roots in psychology and cognitive science. This article comprehensively reviews multimodal emotion recognition, covering various aspects such as emotion theories, discrete and dimensional models, emotional response systems, datasets, and current trends. It reviews 179 papers from the 2017 to 2023 literature to reflect on the trends in affective computing. The review covers the modalities used by emotion recognition systems under four categories: subjective experience, comprising text and self-report; peripheral physiology, comprising electrodermal, cardiovascular, facial muscle, and respiration activity; central physiology, comprising EEG, neuroimaging, and EOG; and behavior, comprising facial, vocal, and whole-body behavior, and observer ratings. The review summarizes the measures available for each modality and the emotional states they capture, and provides an extensive list of datasets together with their unique characteristics. Recent advances are grouped by focus area: elicitation strategy, data collection and handling, the impact of culture, feature extraction, feature selection, alignment of signals across modalities, and fusion strategies. As the fusion strategies detailed in this article show, extracting shared representations of the different modalities, removing redundant features, and learning critical features are crucial for recognition. The strengths and weaknesses of each approach are discussed, along with open challenges and future work, and the article aims to serve as a lucid introduction accessible to all, including novices.
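To make the fusion discussion concrete, the following is a minimal sketch (in PyTorch) of intermediate fusion via a shared representation: each modality is projected into a common space before a joint classifier. All layer sizes, modality dimensions, and the averaging rule are illustrative assumptions, not details taken from the review.

    import torch
    import torch.nn as nn

    class SharedRepresentationFusion(nn.Module):
        def __init__(self, modality_dims, shared_dim=64, num_classes=6):
            super().__init__()
            # One encoder per modality maps its raw features into the shared space.
            self.encoders = nn.ModuleList(
                nn.Sequential(nn.Linear(d, shared_dim), nn.ReLU()) for d in modality_dims
            )
            self.classifier = nn.Linear(shared_dim, num_classes)

        def forward(self, inputs):
            # Average the per-modality embeddings: a simple way to keep shared
            # information while suppressing modality-specific redundancy.
            shared = torch.stack([enc(x) for enc, x in zip(self.encoders, inputs)]).mean(0)
            return self.classifier(shared)

    # Hypothetical example: EEG (32-d), EDA (8-d), and facial (128-d) features, 6 classes.
    model = SharedRepresentationFusion([32, 8, 128])
    logits = model([torch.randn(4, 32), torch.randn(4, 8), torch.randn(4, 128)])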
This article is categorized under:
Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction
Technologies > Cognitive Computing
Technologies > Artificial Intelligence
Scientific Data, Journal Year: 2023, Volume and Issue: 10(1), Published: June 14, 2023
Abstract
This study presents a new dataset, AKTIVES, for evaluating methods of stress detection and game-reaction recognition using physiological signals. We collected data from 25 children with obstetric brachial plexus injury, dyslexia, or intellectual disabilities, as well as typically developed children, during therapy. A wristband was used to record physiological signals (blood volume pulse (BVP), electrodermal activity (EDA), and skin temperature (ST)). Furthermore, the facial expressions of the children were recorded. Three experts watched the children's videos and labeled them "Stress/No Stress" and "Reaction/No Reaction" according to the videos. The technical validation supported the high quality of the signals and showed consistency between the experts.
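As an illustration of the kind of analysis AKTIVES enables, below is a minimal sketch that windows a wristband signal into simple features and cross-validates a Stress/No Stress classifier. The sampling rate, window length, and classifier choice are assumptions, and the arrays are placeholders for real recordings and expert labels.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def window_features(signal, fs, win_s=10):
        """Mean/std/slope per non-overlapping window of a 1-D physiological signal."""
        n = fs * win_s
        wins = signal[: len(signal) // n * n].reshape(-1, n)
        slope = wins[:, -1] - wins[:, 0]
        return np.column_stack([wins.mean(1), wins.std(1), slope])

    fs = 4                                   # assumed 4 Hz EDA, not the AKTIVES spec
    eda = np.random.randn(fs * 600)          # placeholder for a real EDA recording
    X = window_features(eda, fs)
    y = np.random.randint(0, 2, len(X))      # placeholder Stress/No Stress labels
    print(cross_val_score(RandomForestClassifier(), X, y, cv=5).mean())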
Affective Science, Journal Year: 2025, Volume and Issue: unknown, Published: Jan. 15, 2025
Abstract
Most prior research on basic emotions has relied upon posed, static displays that do not accurately reflect the facial behavior seen in everyday life. To address this gap, the present paper aims to highlight existing facial expression databases (FEDBs) that feature spontaneous and dynamic displays of the six basic emotions. To assist readers in their decisions about stimulus selection, we comprehensively review 25 FEDBs in terms of three key dimensions: (a) conceptual features, which concern thematic approaches to database construction and validation, i.e., emotional content and elicitation procedures, encoder demographics, and measurement techniques; (b) technical features, which concern the technological aspects of database development, i.e., stimulus numbers and duration, frame rate, and resolution; and (c) practical features, which entail information about database access and potential ethical restrictions. Finally, we outline some remaining challenges in FEDB generation and make recommendations for future research.
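For readers screening candidate FEDBs, the technical features discussed above (duration, frame rate, resolution) can be audited programmatically. The sketch below uses OpenCV; the filename is a placeholder, not a stimulus from any reviewed database.

    import cv2

    def stimulus_properties(path):
        # Read container-level properties of a candidate video stimulus.
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        cap.release()
        return {"fps": fps,
                "duration_s": frames / fps if fps else None,
                "resolution": (width, height)}

    print(stimulus_properties("stimulus_clip.mp4"))  # placeholder filename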
Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown, Published: April 15, 2025
Abstract
Because facial expressions can vary greatly, it can be challenging to identify emotions from face photographs. Prior studies on the use of deep learning models for image emotion classification have been conducted on a variety of datasets with a restricted range of expressions. This work uses an emotion recognition dataset that contains ten target emotions (amusement, awe, enthusiasm, liking, surprise, anger, contempt, fear, sorrow, and neutral) to extend the application of facial emotion recognition (FER). To transform the video data into photos and enhance the data, a number of preparation steps were taken.
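A minimal sketch of such a preparation step, assuming OpenCV: frames are sampled from a clip, resized, and flipped horizontally as a simple augmentation. The sampling interval, output size, and paths are illustrative assumptions, not the paper's actual pipeline.

    import cv2

    def extract_frames(video_path, out_dir, every_n=10, size=(224, 224)):
        # Sample every n-th frame, resize, and save an augmented (flipped) copy.
        cap = cv2.VideoCapture(video_path)
        i = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                frame = cv2.resize(frame, size)
                cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
                # Horizontal flip as a simple augmentation for FER training data.
                cv2.imwrite(f"{out_dir}/frame_{saved:05d}_flip.jpg", cv2.flip(frame, 1))
                saved += 1
            i += 1
        cap.release()
        return saved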
This paper suggests two methods for creating Convolutional Neural Network (CNN) models: transfer learning (fine-tuning) of the pre-trained Inception V3 and MobileNet V2 networks, and building a model from scratch, using the Taguchi technique to determine a reliable combination of hyperparameter settings. With an accuracy and an average F1-score of 96% and 0.95, respectively, on the test set, the suggested model showed good performance across the experimental procedures.
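The transfer-learning route can be sketched with Keras as follows: an ImageNet pre-trained MobileNet V2 backbone is frozen and a new head is trained for the ten emotion classes. The input size, optimizer, dropout rate, and frozen-backbone choice are assumptions, not the paper's Taguchi-tuned settings.

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the backbone; unfreeze later to fine-tune

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(10, activation="softmax"),  # ten target emotions
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])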
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1), Published: May 18, 2025
Abstract
The widespread availability of miniaturized wearable fitness trackers has enabled the monitoring of various essential health parameters. Utilizing this technology for precise emotion recognition during human-computer interactions can facilitate authentic, emotionally aware, contextual communication. In this paper, an emotion recognition system is proposed, for the first time, to conduct experimental analysis over both discrete and dimensional emotion models. An ensemble deep learning architecture is considered that consists of Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models to capture the dynamic temporal dependencies within emotional data sequences effectively.
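A minimal sketch of such an LSTM + GRU ensemble (in PyTorch): each branch encodes the bio-signal sequence and their final states are averaged before classification. The hidden sizes, channel count, and averaging rule are assumptions; only the nine-emotion output is taken from the text.

    import torch
    import torch.nn as nn

    class LstmGruEnsemble(nn.Module):
        def __init__(self, in_dim, hidden=64, num_classes=9):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
            self.gru = nn.GRU(in_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, x):                    # x: (batch, time, features)
            h_lstm, _ = self.lstm(x)
            h_gru, _ = self.gru(x)
            # Average the two branches' final time steps (a simple ensemble).
            fused = (h_lstm[:, -1] + h_gru[:, -1]) / 2
            return self.head(fused)

    model = LstmGruEnsemble(in_dim=6)            # e.g., 6 bio-signal channels
    logits = model(torch.randn(8, 128, 6))       # 8 windows of 128 samples each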
The publicly available multi-device EMOGNITION database is utilized for result reproducibility and comparison. It includes physiological signals recorded using the Samsung Galaxy Watch, the Empatica E4 wristband, and the MUSE 2 Electroencephalogram (EEG) headband for a comprehensive understanding of emotions. A detailed comparison of all three dedicated devices has been carried out to identify nine emotions, exploring different bio-signal combinations; the system achieves average classification accuracies of 99.14% and 99.41%. The performance of each device is also examined on the 2D Valence-Arousal dimensional model. Results reveal accuracies of 97.81% and 72.94% for the Valence and Arousal dimensions, respectively. The acquired results demonstrate promising outcomes when compared with state-of-the-art methods.
Scientific Data, Journal Year: 2024, Volume and Issue: 11(1), Published: Aug. 5, 2024
Abstract
Mixed emotions have attracted increasing interest recently, but existing datasets rarely focus on mixed emotion recognition from multimodal signals, hindering the affective computing of mixed emotions. On this basis, we present a multimodal dataset with four kinds of signals recorded while participants watched mixed and non-mixed emotional videos. To ensure effective induction, we first implemented a rule-based video filtering step to select videos that could elicit stronger positive, negative, and mixed emotions. Then, an experiment with 80 participants was conducted, in which EEG, GSR, PPG, and frontal face video data were recorded while they watched the selected video clips. We also collected subjective emotional ratings on the PANAS, VAD, and amusement-disgust dimensions. In total, the dataset consists of signal and self-assessment data from 73 participants. We provide technical validations for emotion induction and for the classification of emotions from the physiological signals. The average accuracy of 3-class (i.e., positive, negative, and mixed) classification can reach 80.96% when using an SVM with features from all modalities, which indicates the possibility of identifying mixed emotional states.
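A minimal sketch of that validation setup: an SVM over features concatenated from all four modalities for the 3-class task. The feature dimensions and arrays below are placeholders, not the dataset's actual extracted features.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    n = 200                                       # trials (placeholder)
    eeg, gsr, ppg, face = (np.random.randn(n, d) for d in (160, 4, 8, 32))
    X = np.hstack([eeg, gsr, ppg, face])          # fuse modalities by concatenation
    y = np.random.randint(0, 3, n)                # positive / negative / mixed labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())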