A discriminative multi-modal adaptation neural network model for video action recognition
Lei Gao,
Kai Liu,
Ling Guan
et al.
Neural Networks, 2025, Volume 185, p. 107114. Published: Jan. 5, 2025
Language: English
Emotion Recognition in Human-Machine Interaction and a Review in Interpersonal Communication Perspective
Advances in computational intelligence and robotics book series, 2024, pp. 329–343. Published: June 30, 2024
Emotions are fundamental to daily decision-making and overall wellbeing. They are psychophysiological processes that are frequently linked to human-machine interaction, and it is expected that we will see the creation of systems that can recognize and interpret human emotions in a range of ways as computers and computer-based applications become more advanced and pervasive in people's lives. Emotion recognition systems are able to modify their responses and the user experience based on an analysis of interpersonal communication signals. Real-world applications include the ability of virtual assistants to respond emotionally and effectively, support for mental health by identifying users' emotional states, and the enhancement of interaction applications. The aim of this chapter is to review the elements and models now in use.
Language: English
Comprehensive Survey on Recognition of Emotions from Body Gestures
Ramakrishna Gandi
Journal of Informatics Education and Research, 2025, Volume 5(1). Published: Jan. 17, 2025
Automatic emotion identification has emerged as a prominent area of research during the past decade, with applications in healthcare, human-computer interaction, and behavioral analysis. Although facial expressions and verbal communication have been thoroughly examined, recognizing emotions via body gestures is still inadequately investigated. Body gestures, an essential aspect of "body language," offer significant contextual indicators shaped by gender and cultural variations. Recent breakthroughs in deep learning have facilitated the development of robust models capable of accurately capturing complex human movements, hence enhancing recognition precision and adaptability. This study presents a thorough framework for automatic emotion recognition from body gestures, encompassing elements such as individual detection, position estimation, and representation learning. High computational costs and the need for advanced algorithms to fuse multimodal data add further hurdles. Advancements in deep learning have shown great potential to overcome these issues and improve accuracy. This work highlights applications, challenges, and future directions for emotion recognition from body gestures, emphasizing scalable, robust, real-world-ready systems that can enable emotionally intelligent technologies.
Language: English
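The pipeline this survey describes (individual detection, then position estimation, then representation learning) can be made concrete with a small sketch. The code below is a minimal, hypothetical illustration of the final representation-learning stage only, assuming body keypoints were already extracted by an upstream detector and pose estimator; the joint count, GRU encoder, and layer sizes are illustrative assumptions, not a method from the survey.

```python
# Minimal sketch (illustrative assumptions, not the survey's method): emotion
# classification from a sequence of 2D body keypoints produced upstream by a
# person detector and pose estimator.
import torch
import torch.nn as nn

class GestureEmotionNet(nn.Module):
    def __init__(self, num_joints: int = 17, num_emotions: int = 7, hidden: int = 128):
        super().__init__()
        # Flatten each frame's (x, y) joint coordinates, then encode over time.
        self.frame_proj = nn.Linear(num_joints * 2, hidden)
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, poses: torch.Tensor) -> torch.Tensor:
        # poses: (batch, frames, joints, 2) keypoint coordinates
        b, t, j, c = poses.shape
        x = self.frame_proj(poses.reshape(b, t, j * c))
        _, h = self.temporal(x)         # h: (num_layers, batch, hidden)
        return self.classifier(h[-1])   # one emotion logit vector per clip

# Usage: 8 clips of 30 frames with 17 COCO-style joints each.
logits = GestureEmotionNet()(torch.randn(8, 30, 17, 2))
```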
MSF-Net: Multi-stage fusion network for emotion recognition from multimodal signals in scalable healthcare
Information Fusion, 2025, p. 103028. Published: Feb. 1, 2025
Language: English
Multimodal Emotion Recognition based on Face and Speech using Deep Convolution Neural Network and Long Short Term Memory
Circuits Systems and Signal Processing, 2025. Published: April 25, 2025
Language: English
Generalized multisensor wearable signal fusion for emotion recognition from noisy and incomplete data
Smart Health, 2025, p. 100571. Published: March 1, 2025
Language: English
Leveraging Emotional AI for Improved Human-Computer Interactions
Advances in computational intelligence and robotics book series, 2024, pp. 66–81. Published: June 6, 2024
Emotions are psychophysiological processes that are sparked by both conscious and unconscious perceptions of things and events. Mood, motivation, temperament, and personality are frequently linked to emotions. Human-machine interaction will see the creation of systems that can recognize and interpret human emotions in a range of ways as computers and computer-based applications become more advanced and pervasive in people's daily lives. More sympathetic and customized relationships between humans and machines result from efficient emotion recognition in human-machine interactions. Emotion recognition systems are able to modify their responses and the user experience based on an analysis of interpersonal communication signals. The ability of virtual assistants to respond emotionally and effectively, support for mental health by identifying users' emotional states, improvement of customer interactions with responsive chatbots, and enhancement of human-robot collaboration are just a few examples of real-world applications. Reviewing the elements and models now in use is the aim of this chapter.
Language: English
A Model of Sentiment Analysis for College Music Teaching Based on Musical Expression
Applied Mathematics and Nonlinear Sciences, 2024, Volume 9(1). Published: Jan. 1, 2024
Abstract
In this paper, we first present the structure of the Hierarchical Sentiment Analysis Model for Multimodal Fusion (HMAMF). The model uses the Bi-LSTM method to extract unimodal music features and a CME encoder for feature fusion. After sentiment analysis, a loss function for the auxiliary training dataset is obtained and co-trained. Finally, the application of HMAMF in university teaching is explored. The results show that the agreement between the dominant predictions exceeds 80%, so the model is well tested. The network underwent 35 training sessions, at which point its correct recognition rate was 97.19%. The model's mean accuracy over three time lengths, from 50 seconds to 300 seconds, ranged from 87.92% to 98.20%, with a slight decrease as the length increased. The mood and beat judged in this way were highly consistent with the students' delineation results. Students' and teachers' satisfaction with the performance analysis in terms of "music tempo, rhythm, mood, content, and time" was 81.15%, 85.83%, 83.25%, and 92.39%, respectively. Teachers and students are satisfied with the model proposed in this paper at 89.43% and 90.97%, respectively, and it has proven to be suitable for use in the teaching process.
Language: English
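To make the fusion described above concrete, here is a hedged sketch of the two components the abstract names: per-modality Bi-LSTM feature extractors and a cross-modal encoder for fusion, with a main head and an auxiliary head for co-training. The modality dimensions, all layer sizes, and the use of nn.TransformerEncoder as a stand-in for the CME encoder are assumptions for illustration, not the published HMAMF implementation.

```python
# Hedged sketch of the HMAMF idea (assumed shapes; nn.TransformerEncoder
# stands in for the paper's CME encoder).
import torch
import torch.nn as nn

class HMAMFSketch(nn.Module):
    def __init__(self, audio_dim=64, lyric_dim=300, hidden=128, num_classes=4):
        super().__init__()
        # Unimodal Bi-LSTM feature extractors, one per music modality.
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True, bidirectional=True)
        self.lyric_lstm = nn.LSTM(lyric_dim, hidden, batch_first=True, bidirectional=True)
        # Cross-modal encoder over the concatenated token sequences.
        layer = nn.TransformerEncoderLayer(d_model=2 * hidden, nhead=4, batch_first=True)
        self.cme = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * hidden, num_classes)      # main sentiment head
        self.aux_head = nn.Linear(2 * hidden, num_classes)  # auxiliary co-trained head

    def forward(self, audio, lyrics):
        a, _ = self.audio_lstm(audio)    # (B, Ta, 2*hidden)
        l, _ = self.lyric_lstm(lyrics)   # (B, Tl, 2*hidden)
        fused = self.cme(torch.cat([a, l], dim=1)).mean(dim=1)
        return self.head(fused), self.aux_head(fused)

# Co-training would combine losses, e.g. total = main_loss + lambda * aux_loss.
main_logits, aux_logits = HMAMFSketch()(torch.randn(2, 50, 64), torch.randn(2, 20, 300))
```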
Facial Emotion Recognition for Enhanced Human-Computer Interaction using Deep Learning and Temporal Modeling with BiLSTM
Published: Sept. 18, 2024
Language: English
STAFNet: an adaptive multi-feature learning network via spatiotemporal fusion for EEG-based emotion recognition
Fo Hu,
Kailun He,
Mengyuan Qian
et al.
Frontiers in Neuroscience, 2024, Volume 18. Published: Dec. 10, 2024
Introduction
Emotion recognition using electroencephalography (EEG) is a key aspect of brain-computer interface research. Achieving high precision requires effectively extracting and integrating both spatial and temporal features. However, many studies focus on a single dimension, neglecting the interplay and complementarity of multi-feature information and the importance of fully capturing spatiotemporal dynamics to enhance performance.
Methods
We propose the Spatiotemporal Adaptive Fusion Network (STAFNet), a novel framework combining adaptive graph convolution and transformers to enhance the accuracy and robustness of EEG-based emotion recognition. The model includes an adaptive graph convolutional module to capture brain connectivity patterns through dynamic evolution and a multi-structured transformer fusion module to integrate latent correlations between features for emotion classification.
Results
Extensive experiments were conducted on the SEED and SEED-IV datasets to evaluate the performance of STAFNet. It achieved accuracies of 97.89% and 93.64%, respectively, outperforming state-of-the-art methods. Interpretability analyses, including confusion matrices and t-SNE visualizations, were employed to examine the influence of different emotions on the model's performance. Furthermore, an investigation of varying GCN layer depths demonstrated that STAFNet mitigates the over-smoothing issue in deeper architectures.
Discussion
In summary, the findings validate the effectiveness of STAFNet. The results emphasize the critical role of spatiotemporal feature extraction and introduce an innovative fusion strategy, advancing the state of the art in EEG-based emotion recognition.
Language: English
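The two mechanisms named in the Methods section lend themselves to a compact sketch. The code below is an illustrative approximation, not the authors' released model: a graph convolution over EEG channels with a learnable adjacency (standing in for the adaptive, dynamically evolving connectivity), followed by a transformer encoder (standing in for the multi-structured fusion module). Channel and feature counts follow typical SEED-style preprocessing and are assumptions.

```python
# Illustrative sketch of STAFNet's two ideas (assumed sizes, not the authors'
# code): adaptive graph convolution over EEG channels plus transformer fusion.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """Graph convolution with a learnable channel-adjacency matrix."""
    def __init__(self, num_channels=62, in_dim=5, out_dim=32):
        super().__init__()
        # Learned adjacency approximates dynamically evolving connectivity.
        self.adj = nn.Parameter(torch.eye(num_channels) + 0.01 * torch.randn(num_channels, num_channels))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # x: (batch, channels, features), e.g. band powers per EEG electrode
        a = torch.softmax(self.adj, dim=-1)  # row-normalized soft adjacency
        return torch.relu(self.proj(a @ x))  # propagate over channels, then project

class STAFNetSketch(nn.Module):
    def __init__(self, num_channels=62, in_dim=5, num_emotions=3):
        super().__init__()
        self.gcn = AdaptiveGraphConv(num_channels, in_dim, 32)
        layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)  # latent-correlation fusion
        self.head = nn.Linear(32, num_emotions)

    def forward(self, x):
        h = self.fusion(self.gcn(x))     # EEG channels act as the token axis
        return self.head(h.mean(dim=1))  # pooled emotion logits

# Usage with SEED-like input: 62 channels x 5 band-power features per sample.
logits = STAFNetSketch()(torch.randn(4, 62, 5))
```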