IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 74930 - 74943, Published: Jan. 1, 2024
Brain-Computer Interface (BCI) is a revolutionary technique that employs wearable electroencephalography (EEG) sensors and artificial intelligence (AI) to monitor and decode brain activity. EEG-based motor imagery (MI) signals are widely utilized in various BCI fields, including intelligent healthcare, robot control, and smart homes. Yet, the limited capability of decoding these signals remains a significant obstacle to the technique's expansion. In this study, we describe an architecture known as the dual-branch attention temporal convolutional network (DB-ATCNet) for MI classification. DB-ATCNet improves classification performance with relatively fewer parameters by utilizing channel attention. The model consists of two primary modules: a convolution module (ADBC) and a fusion module (ATFC). The ADBC module extracts low-level MI-EEG features and incorporates channel attention to improve spatial feature extraction. The ATFC module uses sliding windows and self-attention to obtain high-level features, with strategies to minimize information loss. DB-ATCNet achieved subject-independent accuracies of 87.33% and 69.58% on two-class and four-class tasks, respectively, on the PhysioNet dataset. On the BCI Competition IV-2a dataset, it achieved accuracies of 71.34% and 87.54% in subject-independent and subject-dependent evaluations, respectively, surpassing existing methods. The code is available at https://github.com/zk-xju/DB-ATCNet.
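The abstract gives only a coarse view of DB-ATCNet; the exact configuration is in the linked repository. As a rough illustration of two ingredients it names (channel attention over EEG electrodes feeding a dual-branch convolution), the following PyTorch sketch uses assumed layer sizes and module names (ChannelAttention, DualBranchNet) and a squeeze-and-excitation-style attention; it is not the authors' implementation.

# Minimal sketch of a dual-branch EEG classifier with channel attention.
# Layer sizes and module names are illustrative assumptions, not the
# published DB-ATCNet configuration (see the linked repository for that).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style attention over EEG electrodes."""

    def __init__(self, n_channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))  # average over time -> (batch, channels)
        return x * w.unsqueeze(-1)   # reweight each electrode


class DualBranchNet(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4):
        super().__init__()
        self.attn = ChannelAttention(n_channels)
        # Branch 1: short temporal kernels for fast dynamics.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=25, padding=12),
            nn.BatchNorm1d(32), nn.ELU(), nn.AvgPool1d(8),
        )
        # Branch 2: wider kernels for slower spectral content.
        self.spectral = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=75, padding=37),
            nn.BatchNorm1d(32), nn.ELU(), nn.AvgPool1d(8),
        )
        self.classify = nn.Linear(64 * (n_samples // 8), n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        x = self.attn(x)
        feats = torch.cat([self.temporal(x), self.spectral(x)], dim=1)
        return self.classify(feats.flatten(1))


if __name__ == "__main__":
    model = DualBranchNet()
    print(model(torch.randn(2, 22, 1000)).shape)  # -> torch.Size([2, 4])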
Cerebral Cortex, Journal Year: 2024, Volume and Issue: 34(2), Published: Jan. 5, 2024
Abstract
Motor imagery (MI) is a cognitive process wherein an individual mentally rehearses a specific movement without physically executing it. Recently, MI-based brain–computer interfaces (BCIs) have attracted widespread attention. However, accurate decoding of MI and understanding of its neural mechanisms still face huge challenges, which seriously hinder the clinical application and development of BCI systems based on MI. Thus, it is very necessary to develop new methods to decode MI tasks. In this work, we propose a multi-branch convolutional network (MBCNN) combined with a temporal convolutional network (TCN), an end-to-end deep learning framework for multi-class MI decoding. We first used the MBCNN to capture electroencephalography signal information in the temporal and spectral domains through convolutions with different kernels. Then, we introduce the TCN to extract more discriminative features. A within-subject cross-session strategy is used to validate the classification performance on the dataset of BCI Competition IV-2a. The results showed that the model achieved an average accuracy of 75.08% for the 4-class classification task, outperforming several state-of-the-art approaches. The proposed MBCNN-TCN-Net successfully captures discriminative features and decodes MI tasks effectively, improving the performance of MI-BCIs. Our findings could provide significant potential for the development of MI-BCI systems.
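The TCN component mentioned above is, in its generic form, a stack of dilated causal 1-D convolutions with residual connections, which is what lets it cover long temporal context with few parameters. The sketch below shows only that generic form; the kernel size, dilation schedule, and channel counts are assumptions, not the MBCNN-TCN-Net settings.

# Generic temporal convolutional network (TCN) block: dilated causal
# 1-D convolutions with a residual connection.  Kernel size, dilations,
# and channel counts are illustrative, not the MBCNN-TCN-Net values.
import torch
import torch.nn as nn


class TCNBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        # Left-pad so the convolution is causal (no future samples used).
        pad = (kernel_size - 1) * dilation
        self.net = nn.Sequential(
            nn.ConstantPad1d((pad, 0), 0.0),
            nn.Conv1d(channels, channels, kernel_size, dilation=dilation),
            nn.BatchNorm1d(channels), nn.ELU(), nn.Dropout(0.3),
            nn.ConstantPad1d((pad, 0), 0.0),
            nn.Conv1d(channels, channels, kernel_size, dilation=dilation),
            nn.BatchNorm1d(channels), nn.ELU(), nn.Dropout(0.3),
        )

    def forward(self, x):                    # x: (batch, channels, time)
        return torch.relu(self.net(x) + x)   # residual connection


class TCN(nn.Module):
    """Stack of TCN blocks with exponentially growing dilation."""

    def __init__(self, channels=32, kernel_size=4, levels=3):
        super().__init__()
        self.blocks = nn.Sequential(
            *[TCNBlock(channels, kernel_size, 2 ** i) for i in range(levels)]
        )

    def forward(self, x):
        return self.blocks(x)


if __name__ == "__main__":
    feats = torch.randn(2, 32, 250)      # e.g. CNN feature maps over time
    print(TCN()(feats).shape)            # -> torch.Size([2, 32, 250])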
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Journal Year: 2024, Volume and Issue: 32, P. 1767 - 1778, Published: Jan. 1, 2024
Robot-assisted motor training is applied for neurorehabilitation in stroke patients, using motor imagery (MI) as a representative paradigm of brain-computer interfaces to offer real-life assistance to individuals facing movement challenges. However, the effectiveness of training with MI may vary depending on the location of the lesion, which should be considered. This paper introduces multi-task electroencephalogram-based heterogeneous ensemble learning (MEEG-HEL), specifically designed for cross-subject training. In the proposed framework, common spatial patterns were used for feature extraction, and features according to lesions are shared and selected through sequential forward floating selection. The ensembles served as classifiers. Nine patients with chronic ischemic stroke participated, engaging in MI and motor execution (ME) paradigms involving finger tapping. The classification criteria were established in two ways, taking into account the characteristics of the patients. In the cross-subject session, the first criterion involved a direction recognition task for two-handed classification, achieving performance of 0.7419 (±0.0811) in MI and 0.7061 (±0.1270) in ME. The second focused on the assessment of lesion location, resulting in performance of 0.7457 (±0.1317) in MI and 0.6791 (±0.1253) in ME. Compared with the subject-specific session, performance on both tasks was significantly higher, except for the ME task. Furthermore, performance was similar to or statistically higher than that of the baseline models across sessions. MEEG-HEL holds promise for improving practicality in clinical settings and facilitating the detection of lesions.
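The pipeline described here (common spatial patterns for features, sequential forward floating selection, then a heterogeneous ensemble of classifiers) can be approximated with off-the-shelf tools. The sketch below uses mne, mlxtend, and scikit-learn on synthetic data; the classifier families, CSP component count, and data shapes are illustrative assumptions rather than the MEEG-HEL configuration.

# Sketch of a CSP -> SFFS -> heterogeneous-ensemble pipeline using
# standard libraries (mne, mlxtend, scikit-learn).  Classifier choices,
# CSP component count, and data shapes are illustrative assumptions.
import numpy as np
from mne.decoding import CSP
from mlxtend.feature_selection import SequentialFeatureSelector as SFFS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 32, 500))   # trials x EEG channels x samples
y = rng.integers(0, 2, 60)               # e.g. left- vs right-hand task

# 1) Common spatial patterns: spatial filters that maximize the variance
#    ratio between the two classes, yielding log-variance features.
csp = CSP(n_components=8, log=True)
feats = csp.fit_transform(X, y)

# 2) Sequential forward floating selection over the CSP features.
selector = SFFS(LinearDiscriminantAnalysis(), k_features=4,
                forward=True, floating=True, cv=3)
feats_sel = selector.fit_transform(feats, y)

# 3) Heterogeneous ensemble: different classifier families, soft voting.
ensemble = VotingClassifier(
    estimators=[("lda", LinearDiscriminantAnalysis()),
                ("svm", SVC(probability=True)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
ensemble.fit(feats_sel, y)
print("training accuracy:", ensemble.score(feats_sel, y))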
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Journal Year: 2024, Volume and Issue: 32, P. 1535 - 1545, Published: Jan. 1, 2024
The motor imagery brain-computer interface (MI-BCI) based on electroencephalography (EEG) is a widely used human-machine interaction paradigm. However, due to the non-stationarity of EEG signals and individual differences among subjects, decoding accuracy is limited, affecting the application of MI-BCI. In this paper, we propose the EISATC-Fusion model for MI decoding, consisting of an inception block, multi-head self-attention (MSA), a temporal convolutional network (TCN), and layer fusion. Specifically, we design a DS Inception block to extract multi-scale frequency band information, and a new cnnCosMSA module based on CNN and cos attention to solve attention collapse and improve the interpretability of the model. The TCN is improved by depthwise separable convolution to reduce the model parameters. The layer fusion consists of feature fusion and decision fusion, fully utilizing the features output by the model and enhancing its robustness. We adopt a two-stage training strategy for model training: early stopping is used to prevent overfitting, and the loss and accuracy on the validation set are used as indicators for early stopping. The proposed model achieves within-subject classification accuracies of 84.57% and 87.58% on BCI Competition IV Datasets 2a and 2b, respectively, and cross-subject classification accuracies of 67.42% and 71.23% (by transfer learning) when trained with two sessions and one session of Dataset 2a, respectively. The interpretability of the model is demonstrated through a weight visualization method. Index Terms: Brain-computer interface (BCI)
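The depthwise separable convolution credited with shrinking the TCN factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 pointwise mix, which is where the parameter saving comes from. A minimal sketch with arbitrary channel and kernel sizes (not the EISATC-Fusion values):

# Depthwise separable 1-D convolution: a depthwise conv (one filter per
# channel) followed by a 1x1 pointwise conv.  Channel and kernel sizes
# below are arbitrary examples, not the EISATC-Fusion settings.
import torch.nn as nn


def separable_conv1d(in_ch: int, out_ch: int, kernel_size: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv1d(in_ch, in_ch, kernel_size,
                  padding=kernel_size // 2, groups=in_ch, bias=False),
        nn.Conv1d(in_ch, out_ch, kernel_size=1, bias=False),
    )


def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())


standard = nn.Conv1d(32, 32, kernel_size=4, bias=False)
separable = separable_conv1d(32, 32, kernel_size=4)
print(n_params(standard), "vs", n_params(separable))  # 4096 vs 1152

For these example sizes the factorization cuts the weight count by roughly 3.5x, which illustrates why swapping it into a TCN reduces the overall parameter budget.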