Brain Sciences,
Journal year: 2024,
Issue 14(4), pp. 375-375
Published: April 12, 2024
Motor imagery electroencephalography (EEG) signals have garnered attention in brain–computer interface (BCI) research due to their potential in promoting motor rehabilitation and control. However, the limited availability of labeled data poses challenges for training robust classifiers. In this study, we propose a novel augmentation method utilizing an improved Deep Convolutional Generative Adversarial Network with Gradient Penalty (DCGAN-GP) to address this issue. We transformed raw EEG signals into two-dimensional time–frequency maps and employed the DCGAN-GP network to generate synthetic time–frequency representations resembling the real data. Validation experiments were conducted on the BCI IV 2b dataset, comparing the performance of classifiers trained on augmented and unaugmented data. The results demonstrated that classifiers trained on augmented data exhibit enhanced robustness across multiple subjects and achieve higher classification accuracy. Our findings highlight the effectiveness of DCGAN-GP-generated data in improving classifier performance in distinguishing different motor imagery tasks. Thus, the proposed DCGAN-GP-based augmentation method offers a promising avenue for enhancing BCI system performance, overcoming data scarcity challenges, and bolstering classifier robustness, thereby providing substantial support for the broader adoption of BCI technology in real-world applications.
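The abstract gives no implementation details, so as a rough illustration of the technique it names, here is a minimal sketch (assuming PyTorch) of the gradient-penalty term that the "GP" in DCGAN-GP denotes, applied to two-dimensional time–frequency maps. The critic architecture, the 64x64 map size, and all names here are illustrative assumptions, not the authors' code.

```python
# Hedged sketch (assumed PyTorch, not the authors' code) of the WGAN-GP
# style gradient penalty applied to 2-D time-frequency maps.
import torch
import torch.nn as nn

critic = nn.Sequential(                     # toy critic over 1x64x64 maps
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
)

def gradient_penalty(critic, real, fake, lam=10.0):
    """Pushes the critic's gradient norm toward 1 on random
    interpolations between real and generated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    scores = critic(mixed)
    grads, = torch.autograd.grad(scores, mixed,
                                 grad_outputs=torch.ones_like(scores),
                                 create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()

real = torch.randn(8, 1, 64, 64)   # stand-in for real time-frequency maps
fake = torch.randn(8, 1, 64, 64)   # stand-in for generator output
critic_loss = (critic(fake).mean() - critic(real).mean()
               + gradient_penalty(critic, real, fake))
```

The penalty stabilizes adversarial training on small EEG datasets, which is precisely the data-scarcity setting the abstract targets.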
BMC Biomedical Engineering,
Journal year: 2024,
Issue 6(1)
Published: May 2, 2024
Abstract
Since their inception more than 50 years ago, Brain-Computer Interfaces (BCIs) have held promise to compensate for functions lost by people with disabilities through allowing direct communication between the brain and external devices. While research throughout the past decades has demonstrated the feasibility of BCIs to act as a successful assistive technology, their widespread use outside the lab is still beyond reach. This can be attributed to a number of challenges that need to be addressed for BCIs to become practical, including the limited data availability, the limited temporal and spatial resolutions of brain signals recorded non-invasively, and inter-subject variability. In addition, for a very long time, BCI development has been mainly confined to specific simple brain patterns, while developing other BCI applications relying on complex brain patterns has proven infeasible. Generative Artificial Intelligence (GAI) has recently emerged as an artificial intelligence domain in which trained models can be used to generate new data with properties resembling those of the available data. Given the enhancements observed in other domains that possess similar challenges, GAI has been employed in a multitude of BCI-related applications to generate synthetic brain activity; thereby, augmenting the recorded brain activity. Here, a brief review of the recent adoption of GAI techniques to overcome the aforementioned BCI challenges is provided, demonstrating the enhancements achieved in augmenting EEG data, enhancing the spatiotemporal resolution of recorded signals, enhancing the cross-subject performance of BCI systems, and implementing end-to-end BCI applications. GAI could thus represent the means by which BCIs would be transformed into a prevalent assistive technology, thereby improving the quality of life of people with disabilities, and helping in adopting BCIs as an emerging human-computer interaction technology for general use.
Proceedings of the AAAI Conference on Artificial Intelligence,
Journal year: 2022,
Issue 36(5), pp. 5350-5358
Published: June 28, 2022
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks. However, current approaches are limited to small closed vocabularies which are far from enough for natural communication. In addition, most of the high-performing approaches require data from invasive devices (e.g., ECoG). In this paper, we extend the problem to open vocabulary Electroencephalography(EEG)-To-Text Sequence-To-Sequence decoding and zero-shot sentence sentiment classification on natural reading tasks. We hypothesize that the human brain functions as a special text encoder and propose a novel framework leveraging pre-trained language models (e.g., BART). Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines. Furthermore, we show that our proposed model can handle data from various subjects and sources, showing great potential for a high-performance open vocabulary brain-to-text system once sufficient data is available. The code is made publicly available for research purposes at https://github.com/MikeWangWZHL/EEG-To-Text.
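The authors' code is at the URL above; as a separate, hedged sketch of the "brain as a special text encoder" idea, the snippet below (assuming PyTorch and Hugging Face transformers, not the authors' implementation) projects word-level EEG feature vectors into BART's embedding space and trains the pre-trained sequence-to-sequence model to decode text from them. The feature dimension, sequence length, and variable names are assumptions.

```python
# Hedged sketch (not the authors' released code): EEG features are
# projected into BART's embedding space and decoded as text.
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tok = BartTokenizer.from_pretrained("facebook/bart-base")

eeg_feat_dim = 840                        # assumed per-word EEG feature size
proj = nn.Linear(eeg_feat_dim, model.config.d_model)

eeg = torch.randn(2, 20, eeg_feat_dim)    # toy batch: 2 sentences, 20 words
labels = tok(["a sample target sentence", "another target"],
             return_tensors="pt", padding=True).input_ids
labels[labels == tok.pad_token_id] = -100  # ignore padding in the loss

out = model(inputs_embeds=proj(eeg), labels=labels)
out.loss.backward()                   # trains projection and BART jointly
```

The design choice worth noting is that the decoder's open vocabulary comes for free from the pre-trained language model; only the projection has to learn the EEG-to-embedding mapping.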
IEEE Sensors Letters,
Journal year: 2022,
Issue 6(2), pp. 1-4
Published: Jan. 13, 2022
In this letter, a novel automated approach for recognizing imagined commands using multichannel electroencephalogram (MEEG) signals is presented. The multivariate fast and adaptive empirical mode decomposition method decomposes the MEEG signals into various modes. The slope domain entropy and $L_1$-norm features are obtained from the modes of the MEEG signals. Machine learning models, such as the k-nearest neighbor, the sparse representation classifier, and dictionary learning (DL) techniques, are used for the imagined command classification tasks. The efficacy of the proposed approach is evaluated using a public database. The proposed approach has achieved average accuracy values of 60.72%, 59.73%, and 58.78% using the DL model with the selected features for the left versus right, up versus down, and forward versus backward imagined-command categorization tasks.
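The slope domain entropy is defined in the letter itself and is not reproduced here; as a hedged sketch of the rest of the pipeline, the code below (assuming NumPy and scikit-learn, not the authors' implementation) computes per-mode $L_1$-norm features from already-decomposed modes and classifies them with a k-nearest-neighbor model. The multivariate fast and adaptive EMD step is assumed to have run upstream, and all shapes are illustrative.

```python
# Hedged sketch: L1-norm features from pre-computed MEEG modes,
# classified with k-NN. The mode decomposition itself is assumed
# to have been done upstream; shapes here are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def l1_features(modes):
    """modes: (n_trials, n_modes, n_channels, n_samples) array.
    Returns one L1 norm per (mode, channel) as the feature vector."""
    return np.abs(modes).sum(axis=-1).reshape(modes.shape[0], -1)

rng = np.random.default_rng(0)
modes = rng.standard_normal((100, 5, 8, 256))   # 100 toy trials
y = rng.integers(0, 2, size=100)                # e.g., left vs. right

X = l1_features(modes)
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:80], y[:80])
print("toy accuracy:", clf.score(X[80:], y[80:]))
```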
Abstract
The recognition of inner speech, which could give a ‘voice’ to patients that have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and therefore are promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the 8-word stimuli were assessed in 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
IEEE Journal of Biomedical and Health Informatics,
Journal year: 2024,
Issue 28(4), pp. 2025-2036
Published: Jan. 30, 2024
Currently, emotional features in speech emotion recognition are typically extracted from the speeches themselves. However, the recognition accuracy can be influenced by factors such as semantics, language, and cross-speech datasets. Achieving consistent judgment with human listeners is a key challenge for AI to address. Electroencephalography (EEG) signals prove to be an effective means of capturing authentic and meaningful information from humans. This positions EEG as a promising tool for detecting emotional cues conveyed in speech. In this study, we proposed a novel approach named CS-GAN that generates listener EEGs in response to a speaker's speech, specifically aimed at enhancing cross-subject speech emotion recognition. We utilized generative adversarial networks (GANs) to establish a mapping relationship between speech and EEG to generate stimulus-induced EEGs. Furthermore, we integrated compressive sensing theory (CS) into the GAN-based generation method, thereby improving the fidelity and diversity of the generated EEGs. The generated EEGs were then processed using a CNN-LSTM model to identify emotion categories. By averaging these EEGs, we obtained event-related potentials (ERPs) to improve the generalization capability of the method. The experimental results demonstrate that our method can outperform real EEGs by 9.31% in cross-subject emotion recognition tasks. The ERPs show an improvement of 43.59%, providing evidence of the effectiveness of the proposed approach.
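The paper's CS-GAN and classifier are not reproduced here; as a rough sketch of two ingredients the abstract names, the code below (assuming PyTorch) averages generated trials into an ERP-like signal and passes EEG-shaped input through a small CNN-LSTM classifier. Channel count, window length, number of classes, and the architecture are all illustrative assumptions.

```python
# Hedged sketch: trial averaging into an ERP, then a small CNN-LSTM
# emotion classifier. Shapes and layers are assumptions, not the
# CS-GAN paper's actual models.
import torch
import torch.nn as nn

def average_erp(trials):
    """trials: (n_trials, n_channels, n_samples). Averaging across
    trials suppresses trial-specific noise, yielding an ERP-like signal."""
    return trials.mean(dim=0)

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=32, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(           # temporal convolution stage
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, channels, samples)
        h = self.conv(x).transpose(1, 2)     # -> (batch, time, features)
        _, (hn, _) = self.lstm(h)
        return self.head(hn[-1])             # last hidden state -> logits

erp = average_erp(torch.randn(40, 32, 512))  # 40 generated trials -> 1 ERP
logits = CNNLSTM()(erp.unsqueeze(0))         # classify the averaged signal
```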