Frontiers in Neurorobotics, Journal Year: 2025, Volume and Issue: 19, Published: Jan. 22, 2025
Recently, motor imagery (MI) based electroencephalogram (EEG) has gained significant traction in brain-computer interface (BCI) technology, particularly for the rehabilitation of paralyzed patients. However, the low signal-to-noise ratio of MI-EEG makes it difficult to decode effectively and hinders the development of BCI. In this paper, an attention-based multiscale EEGNet (AMEEGNet) was proposed to improve the decoding performance of MI-EEG. First, three parallel EEGNets with fusion transmission were employed to extract high-quality temporal-spatial features at multiple scales. Then, an efficient channel attention (ECA) module enhances the acquisition of more discriminative spatial features through a lightweight approach that weights critical channels. The experimental results demonstrated that the model achieves accuracies of 81.17%, 89.83%, and 95.49% on the BCI-2a, BCI-2b, and HGD datasets. The results show that AMEEGNet effectively decodes MI-EEG features, providing a novel perspective on MI-EEG decoding and advancing future BCI applications.
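To make the described architecture concrete, below is a minimal PyTorch sketch of a multiscale EEGNet-style front end combined with an efficient channel attention (ECA) block. It is not the authors' AMEEGNet implementation: the kernel scales, filter counts, class count, and the fusion strategy (plain concatenation of the three branches) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): multiscale EEGNet-style branches + ECA.
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: re-weight feature channels via a 1-D conv
    over globally pooled channel descriptors."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                                # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                           # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)         # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]

class EEGNetBranch(nn.Module):
    """One EEGNet-style branch: temporal conv, then depthwise spatial conv."""
    def __init__(self, n_ch: int, temporal_k: int, f1: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, f1, (1, temporal_k), padding=(0, temporal_k // 2), bias=False),
            nn.BatchNorm2d(f1),
            nn.Conv2d(f1, f1 * 2, (n_ch, 1), groups=f1, bias=False),  # spatial depthwise
            nn.BatchNorm2d(f1 * 2),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
        )

    def forward(self, x):                                # x: (B, 1, n_ch, n_samples)
        return self.net(x)

class MultiscaleEEGNet(nn.Module):
    """Three parallel branches at different temporal scales, concatenated and
    re-weighted by ECA before classification (assumed fusion strategy)."""
    def __init__(self, n_ch: int = 22, n_classes: int = 4):
        super().__init__()
        self.branches = nn.ModuleList(
            [EEGNetBranch(n_ch, k) for k in (32, 64, 128)]   # assumed kernel scales
        )
        self.eca = ECA()
        self.classify = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x):                                # x: (B, 1, n_ch, n_samples)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classify(self.eca(feats))
```

For example, `MultiscaleEEGNet()(torch.randn(8, 1, 22, 1000))` returns class logits of shape (8, 4) for a batch of eight 22-channel trials.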
IEEE Transactions on Industrial Informatics, Journal Year: 2022, Volume and Issue: 19(2), P. 2249 - 2258, Published: Aug. 9, 2022
The brain-computer interface (BCI) is a cutting-edge technology that has the potential to change the world. Electroencephalogram (EEG) motor imagery (MI) signals have been used extensively in many BCI applications to assist disabled people, control devices or environments, and even augment human capabilities. However, the limited performance of brain signal decoding is restricting the broad growth of the BCI industry. In this article, we propose an attention-based temporal convolutional network (ATCNet) for EEG-based motor imagery classification. The ATCNet model utilizes multiple techniques to boost the performance of MI classification with a relatively small number of parameters. It employs scientific machine learning to design a domain-specific deep network with interpretable and explainable features, multihead self-attention to highlight the most valuable features in the MI-EEG data, a temporal convolutional block to extract high-level temporal features, and a convolutional-based sliding window to process the data efficiently. The proposed model outperforms the current state-of-the-art on the BCI Competition IV-2a dataset with an accuracy of 85.38% and 70.97% in the subject-dependent and subject-independent modes, respectively.
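As a rough illustration of the pipeline this abstract describes (convolutional block, sliding windows over the feature sequence, multi-head self-attention, and a temporal-convolution block), here is a minimal PyTorch sketch. It is not the published ATCNet code; the window count, window length, model width, pooling factors, and the averaging of per-window logits are assumed values.

```python
# Minimal sketch (assumptions, not the published ATCNet implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalBlock(nn.Module):
    """One causal dilated 1-D conv block, TCN-style, with a residual connection."""
    def __init__(self, channels: int, dilation: int, k: int = 4):
        super().__init__()
        self.pad = (k - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, k, dilation=dilation)
        self.act = nn.ELU()

    def forward(self, x):                                  # x: (B, C, T)
        y = self.conv(F.pad(x, (self.pad, 0)))             # left-pad => causal
        return self.act(y) + x

class ATCNetSketch(nn.Module):
    def __init__(self, n_ch: int = 22, d_model: int = 32, n_classes: int = 4,
                 n_windows: int = 5, win_len: int = 50):
        super().__init__()
        self.conv = nn.Sequential(                         # EEGNet-like conv block
            nn.Conv2d(1, d_model, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(d_model),
            nn.Conv2d(d_model, d_model, (n_ch, 1), groups=d_model, bias=False),
            nn.BatchNorm2d(d_model), nn.ELU(), nn.AvgPool2d((1, 8)),
        )
        self.attn = nn.MultiheadAttention(d_model, num_heads=2, batch_first=True)
        self.tcn = nn.Sequential(TemporalBlock(d_model, 1), TemporalBlock(d_model, 2))
        self.head = nn.Linear(d_model, n_classes)
        self.n_windows, self.win_len = n_windows, win_len

    def forward(self, x):                                  # x: (B, 1, n_ch, n_samples)
        f = self.conv(x).squeeze(2)                        # (B, d_model, T)
        t_len = f.size(-1)
        step = max(1, (t_len - self.win_len) // (self.n_windows - 1))
        logits = []
        for i in range(self.n_windows):                    # sliding windows over time
            start = min(i * step, t_len - self.win_len)
            w = f[:, :, start:start + self.win_len].transpose(1, 2)   # (B, L, E)
            a, _ = self.attn(w, w, w)                      # self-attention per window
            t = self.tcn(a.transpose(1, 2))                # (B, d_model, win_len)
            logits.append(self.head(t[:, :, -1]))          # last time step per window
        return torch.stack(logits).mean(0)                 # average window predictions
```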
Sensors, Journal Year: 2023, Volume and Issue: 23(13), P. 6001 - 6001, Published: June 28, 2023
This paper provides a comprehensive overview of the state-of-the-art in brain–computer interfaces (BCI). It begins by providing an introduction to BCIs, describing their main operation principles and most widely used platforms. The paper then examines the various components of a BCI system, such as hardware, software, and signal processing algorithms. Finally, it looks at current trends in research related to the use of BCIs for medical, educational, and other purposes, as well as potential future applications of this technology. The paper concludes by highlighting some key challenges that still need to be addressed before widespread adoption can occur. By presenting an up-to-date assessment of the technology, this review will provide valuable insight into where the field is heading in terms of progress and innovation.
Sensors, Journal Year: 2024, Volume and Issue: 24(3), P. 877 - 877, Published: Jan. 29, 2024
The main purpose of this paper is to provide information on how to create a convolutional neural network (CNN) for extracting features from EEG signals. Our task was to understand the primary aspects of creating and fine-tuning CNNs for various application scenarios. We considered the characteristics of EEG signals, coupled with an exploration of signal processing and data preparation techniques. These techniques include noise reduction, filtering, encoding, decoding, and dimension reduction, among others. In addition, we conduct an in-depth analysis of well-known CNN architectures, categorizing them into four distinct groups: standard implementation, recurrent convolutional, decoder architecture, and combined architecture. The paper further offers a comprehensive evaluation of these architectures, covering accuracy metrics and hyperparameters, and includes an appendix that contains a table outlining the parameters of commonly used architectures for feature extraction.
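As one concrete example of the data-preparation steps surveyed (noise reduction, filtering, dimension reduction), the sketch below band-pass filters and z-scores raw EEG before it is fed to a CNN. The filter order, cut-off frequencies, and sampling rate are illustrative assumptions, not values prescribed by the paper.

```python
# Minimal sketch of one common EEG preparation step: band-pass filter + per-channel z-score.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(eeg: np.ndarray, fs: float = 250.0,
                   low: float = 4.0, high: float = 38.0) -> np.ndarray:
    """eeg: (n_channels, n_samples) raw signal; returns a filtered, standardized copy."""
    b, a = butter(N=4, Wn=[low, high], btype="bandpass", fs=fs)  # 4th-order Butterworth
    filtered = filtfilt(b, a, eeg, axis=-1)                      # zero-phase filtering
    mean = filtered.mean(axis=-1, keepdims=True)
    std = filtered.std(axis=-1, keepdims=True) + 1e-8
    return (filtered - mean) / std                               # per-channel z-score
```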
IEEE Internet of Things Journal, Journal Year: 2023, Volume and Issue: 10(21), P. 18579 - 18588, Published: June 1, 2023
Brain–computer interface (BCI) is an innovative technology that utilizes artificial intelligence (AI) and wearable electroencephalography (EEG) sensors to decode brain signals and enhance the quality of life. The EEG-based motor imagery (MI) signal is used in many BCI applications, including smart healthcare, smart homes, and robotics control. However, the restricted decoding ability is a major factor preventing BCI from expanding significantly. In this study, we introduce a dynamic attention temporal convolutional network (D-ATCNet) for decoding MI signals. The D-ATCNet model uses dynamic convolution (Dy-conv) and multilevel attention to boost the performance of MI classification with a relatively small number of parameters. D-ATCNet has two main blocks: 1) multilevel attention and 2) dynamic convolution. Dy-conv is used to encode low-level MI-EEG information, and shifted window self-attention extracts high-level features from the encoded signal. The proposed model performs better than existing methods, with an accuracy of 71.3% in the subject-independent mode and 87.08% in the subject-dependent mode using the BCI competition IV-2a data set.
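The dynamic-convolution idea referenced above (attention over several parallel kernels, aggregated per input sample) can be sketched as follows in PyTorch. This is an assumption-laden illustration of Dy-conv in general, not the authors' D-ATCNet block; the kernel size, the number of parallel kernels, and the gating layout are arbitrary choices.

```python
# Minimal sketch of dynamic convolution: per-sample attention over a bank of kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv1d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 7, n_kernels: int = 4):
        super().__init__()
        # Bank of K candidate kernels, each (out_ch, in_ch, k).
        self.weight = nn.Parameter(torch.randn(n_kernels, out_ch, in_ch, k) * 0.02)
        self.gate = nn.Sequential(                     # per-sample kernel attention
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(in_ch, n_kernels), nn.Softmax(dim=-1),
        )
        self.k = k

    def forward(self, x):                              # x: (B, in_ch, T)
        alpha = self.gate(x)                            # (B, n_kernels) mixing weights
        # Aggregate the kernel bank per sample, then apply it with a grouped conv
        # so every sample in the batch uses its own aggregated kernel.
        w = torch.einsum("bk,koct->boct", alpha, self.weight)   # (B, out_ch, in_ch, k)
        b, _, t = x.shape
        out = F.conv1d(x.reshape(1, -1, t),
                       w.reshape(-1, w.size(2), self.k),
                       padding=self.k // 2, groups=b)
        return out.reshape(b, -1, out.size(-1))          # (B, out_ch, T)
```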
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Journal Year: 2024, Volume and Issue: 32, P. 1177 - 1186, Published: Jan. 1, 2024
The development of advanced prosthetic devices that can be seamlessly used during an individual's daily life remains a significant challenge in the field of rehabilitation engineering. This study compares the performance of deep learning architectures to shallow networks in decoding motor intent for prosthetic control using electromyography (EMG) signals. Four neural network architectures, including a feedforward network with one hidden layer, a feedforward network with multiple hidden layers, a temporal convolutional network, and a network with squeeze-and-excitation operations, were evaluated in real-time, human-in-the-loop experiments with able-bodied participants and an individual with amputation. Our results demonstrate that the deeper architectures outperform the shallow networks in decoding motor intent, with representation learning effectively extracting the underlying information from the EMG signals. Furthermore, the observed improvements were consistent across both able-bodied and amputee participants. By employing deep networks instead of shallow ones, more reliable and precise prosthesis control can be achieved, which has the potential to significantly enhance prosthetic functionality and improve the quality of life of individuals with amputations.
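For reference, a squeeze-and-excitation operation of the kind evaluated in this study can be written as a small block that re-weights EMG feature channels. The sketch below assumes 1-D (channels x time) feature maps and an arbitrary reduction ratio; it is an illustration of the general technique, not the study's exact architecture.

```python
# Minimal squeeze-and-excitation sketch for 1-D EMG feature maps (illustrative only).
import torch
import torch.nn as nn

class SqueezeExcite1d(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, channels, T) EMG feature map
        s = x.mean(dim=-1)                # squeeze: global average over time
        e = self.fc(s)                    # excite: learn per-channel gains in (0, 1)
        return x * e.unsqueeze(-1)        # re-weight the channels
```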
Diagnostics, Journal Year: 2022, Volume and Issue: 12(12), P. 3060 - 3060, Published: Dec. 6, 2022
Human falls, especially for elderly people, can cause serious injuries that might lead to permanent disability. Approximately 20-30% of the aged people in the United States who experienced fall accidents suffer from head trauma, injuries, or bruises. Fall detection is therefore becoming an important public healthcare problem. Timely and accurate fall detection could enable the instant delivery of medical services to the injured. New advances in vision-based technologies, including deep learning, have shown significant results in action recognition, where some of them focus on fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach applied to every 16 frames of the input video to highlight the movement differences within that range. Four consecutive preprocessed images are then fed to a newly proposed, efficient, lightweight CNN model with a four-branch architecture (4S-3DCNN) that classifies whether a fall has occurred. The evaluation included the use of more than 6392 sequences generated from the Le2i dataset, which is a publicly available fall dataset. Using the proposed method with three-fold cross-validation, to validate generalization and susceptibility to overfitting, the model achieved 99.03%, 99.00%, 99.68%, and 99.00% for accuracy, sensitivity, specificity, and precision, respectively. The experimental results prove that the proposed model outperforms state-of-the-art models, including GoogleNet, SqueezeNet, ResNet18, and DarkNet19, in fall detection.
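To illustrate the kind of preprocessing described above, the sketch below fuses each 16-frame window into four motion-highlighting images before classification. The fusion rule used here (maximum of absolute frame differences within 4-frame groups) is an illustrative assumption, not the paper's exact multi-level image-fusion operator.

```python
# Minimal sketch: fuse a 16-frame clip into 4 motion-highlighting images (illustrative).
import numpy as np

def fuse_clip(frames: np.ndarray) -> np.ndarray:
    """frames: (16, H, W) grayscale clip -> (4, H, W) images emphasizing movement."""
    assert frames.shape[0] == 16, "expects a 16-frame window"
    fused = []
    for group in frames.reshape(4, 4, *frames.shape[1:]):         # 4 groups of 4 frames
        diffs = np.abs(np.diff(group.astype(np.float32), axis=0))  # frame differences
        fused.append(diffs.max(axis=0))                            # keep the strongest motion
    return np.stack(fused)                                         # these 4 images feed the CNN
```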