Brain Communications, 2025, 7(2). Published: Jan. 1, 2025
Abstract
After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. Behavioural tests are the current standard to detect aphasia, but they are time-consuming, have limited ecological validity and require active patient cooperation.
To address these limitations, we tested the potential of EEG-based neural envelope tracking of natural speech. The technique investigates the neural response to the temporal envelope of speech, which is critical for speech understanding because it encompasses cues for detecting and segmenting linguistic units (e.g. phrases, words and phonemes).
We recorded EEG from 26 individuals with aphasia in the chronic phase after stroke (>6 months post-stroke) and 22 healthy controls while they listened to a 25-min story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta and gamma bands using mutual information analyses.
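As an illustration of this kind of mutual-information analysis, the sketch below estimates band-specific envelope tracking with a Gaussian-copula MI estimator, a common choice in this literature; the exact estimator, preprocessing and array names here are assumptions, not the authors' code.

```python
# Sketch: band-specific neural envelope tracking via Gaussian-copula
# mutual information. Assumes `eeg` (n_samples, n_channels) and
# `envelope` (n_samples,) are preprocessed and aligned at rate `fs`.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.special import ndtri  # inverse normal CDF
from scipy.stats import rankdata

def copnorm(x):
    """Rank-based inverse normal transform (Gaussian copula)."""
    return ndtri(rankdata(x) / (len(x) + 1))

def gc_mi(x, y):
    """Gaussian-copula MI (in nats) between two univariate signals."""
    r = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log(1 - r ** 2)

def band_tracking(eeg, envelope, fs, band):
    """MI between the speech envelope and each EEG channel in one band."""
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    env_f = filtfilt(b, a, envelope)
    eeg_f = filtfilt(b, a, eeg, axis=0)
    return np.array([gc_mi(eeg_f[:, c], env_f) for c in range(eeg.shape[1])])

# Example: theta-band (4-8 Hz) tracking, one of the bands analysed.
# mi_theta = band_tracking(eeg, envelope, fs=128, band=(4, 8))
```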
Besides group differences in neural tracking measures, we also tested the suitability of neural tracking for detecting aphasia at the individual level using a support vector machine (SVM) classifier.
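A minimal sketch of such individual-level classification follows; the per-subject features, leave-one-subject-out scheme and placeholder data are assumptions for illustration, not the study's exact pipeline.

```python
# Sketch: detecting aphasia from per-subject neural tracking features
# with an SVM, evaluated with leave-one-subject-out cross-validation.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per participant (e.g. MI per frequency band); y: 1 = aphasia.
X = np.random.rand(48, 5)          # placeholder features, 26 + 22 subjects
y = np.array([1] * 26 + [0] * 22)  # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print("accuracy:", accuracy_score(y, proba > 0.5))
print("AUC:", roc_auc_score(y, proba))
```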
We further investigated the reliability of neural tracking and the recording length required for accurate detection.
Our results showed that individuals with aphasia had decreased envelope encoding compared with controls in the broadband range and the theta band, which aligns with the assumed role of these frequencies in auditory processing. Neural tracking effectively captured aphasia at the individual level, with a classification accuracy of 83.33% and an area under the curve of 89.16%.
Moreover, we demonstrated that high-accuracy detection can be achieved in a time-efficient (5–7 min) and highly reliable manner (split-half correlations between R = 0.61 and R = 0.96 across frequency bands).
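For reference, a split-half reliability of this kind can be computed as sketched below; whether a Spearman-Brown correction is applied is an assumption, as is the shape of the epoch data.

```python
# Sketch: split-half reliability of a neural tracking measure. The
# measure is computed on interleaved halves of each recording and the
# per-subject values are correlated.
import numpy as np

def split_half_reliability(measure, epochs):
    """epochs: (n_subjects, n_epochs, ...); measure: per-subject score."""
    a = measure(epochs[:, 0::2])   # odd-numbered epochs
    b = measure(epochs[:, 1::2])   # even-numbered epochs
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)         # Spearman-Brown correction (assumed)
```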
In this study, we identified specific characteristics of impaired neural tracking in aphasia, which hold promise as a biomarker for the condition. Furthermore, we demonstrate that neural envelope tracking can discriminate individuals with aphasia from healthy controls with high accuracy, in a time-efficient and reliable manner. These findings represent a significant advance towards more automated, objective and ecologically valid assessments of language impairments in aphasia.
PLoS ONE, 2024, 19(2), e0297826. Published: Feb. 8, 2024
Perception
of
sounds
and
speech
involves
structures
in
the
auditory
brainstem
that
rapidly
process
ongoing
stimuli.
The role of these structures in speech processing can be investigated by measuring their electrical activity using scalp-mounted electrodes. However, typical analysis methods involve averaging neural responses to many short repetitive stimuli that bear little relevance to daily listening environments.
Recently, subcortical responses to more ecologically relevant continuous speech were detected using linear encoding models. These models estimate the temporal response function (TRF), a regression model that minimises the error between the measured neural signal and a predictor derived from the stimulus.
Using predictors that account for the highly non-linear behaviour of the peripheral auditory system may improve TRF estimation accuracy and peak detection.
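To make the TRF idea concrete, here is a minimal regularised least-squares sketch; the lag range, ridge parameter and variable names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: estimating a TRF as a ridge-regularised fit between a lagged
# stimulus predictor and one EEG channel.
import numpy as np

def lag_matrix(x, n_lags):
    """Build a (n_samples, n_lags) matrix of delayed copies of x."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

def fit_trf(predictor, eeg, n_lags, ridge=1e2):
    """TRF minimising ||eeg - X w||^2 + ridge * ||w||^2."""
    X = lag_matrix(predictor, n_lags)
    XtX = X.T @ X + ridge * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ eeg)   # (n_lags,) TRF weights

# E.g. 30 ms of lags at fs = 10 kHz for subcortical responses:
# trf = fit_trf(auditory_model_output, eeg_channel, n_lags=300)
```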
Here, we compare both simple and complex models of the auditory periphery for estimating TRFs on electroencephalography (EEG) data from 24 participants listening to continuous speech. We also investigate the recording length required for estimating TRFs, and find that around 12 minutes of data is sufficient for clear wave V peaks (>3 dB SNR) to be seen in nearly all participants.
Interestingly, simple filterbank-based models yield TRFs with SNRs that are not significantly different from those estimated using a complex model of the auditory nerve, provided that the nonlinear effects of adaptation are appropriately modelled. Crucially, computing predictors with the simpler models is more than 50 times faster than with the auditory nerve model.
This work paves the way for efficient modelling and detection of subcortical responses to continuous speech, which may lead to improved diagnosis metrics for hearing impairment and assistive hearing technology.
Journal of Neural Engineering, 2023, 20(4), 041003. Published: July 13, 2023
Abstract
Objective.
When
a
person
listens
to
continuous
speech,
a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal.
The ability of linear models to find a mapping between these two signals is used as a measure of neural tracking of speech. Such models are limited, as they assume linearity in the EEG-speech relationship, which omits the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech.
Approach.
This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speaker paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark of model analysis.
Main
results.
We gathered 29 studies. The main issues we found are biased cross-validations, data leakage leading to over-fitted models, and disproportionate data size compared to the model's complexity. In addition, we address requirements for a standard benchmark model analysis, such as public datasets, common evaluation metrics and good practices for the match-mismatch task.
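For readers unfamiliar with the match-mismatch task, the sketch below scores it in the usual way; `model.score` is a hypothetical placeholder for any trained similarity function (linear or deep), not an API from the reviewed studies.

```python
# Sketch: match-mismatch evaluation. For each EEG segment, a model
# scores the time-aligned speech segment against an imposter segment;
# accuracy is the rate at which the matched segment scores higher.
def match_mismatch_accuracy(model, eeg_segments, matched, mismatched):
    correct = 0
    for eeg, m, mm in zip(eeg_segments, matched, mismatched):
        correct += model.score(eeg, m) > model.score(eeg, mm)
    return correct / len(eeg_segments)
```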
Significance.
We present a review summarizing these studies while addressing important methodological considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning to neural speech decoding.
Neural
activity
in
the
auditory
system
synchronizes
to
sound
rhythms,
and
brain-environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound's amplitude envelope.
We hypothesized that, especially for music, the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity.
This study investigated (1) neural synchronization to different musical features, (2) the tempo-dependence of neural synchronization, and (3) its dependence on familiarity, enjoyment and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1–4 Hz).
Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), the spectral flux of the music, as opposed to its amplitude envelope, evoked the strongest neural synchronization.
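Spectral flux is simple to compute: it is the positive frame-to-frame change in the magnitude spectrogram, summed over frequency. A minimal sketch (window and hop sizes are illustrative):

```python
# Sketch: spectral flux of an audio signal.
import numpy as np
from scipy.signal import stft

def spectral_flux(audio, fs, nperseg=1024, hop=512):
    _, _, S = stft(audio, fs, nperseg=nperseg, noverlap=nperseg - hop)
    mag = np.abs(S)
    diff = np.diff(mag, axis=1)                   # change between frames
    return np.sum(np.maximum(diff, 0.0), axis=0)  # one value per frame
```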
Moreover, music with slower beat rates, high familiarity and easy-to-perceive beats elicited the strongest neural response. Our results demonstrate the importance of spectro-temporal fluctuations for driving neural synchronization to music, and highlight its sensitivity to tempo, familiarity and beat salience.
Even
though
human
experience
unfolds
continuously
in
time,
it
is
not
strictly
linear;
instead, it entails cascading processes that build hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures.
Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception.
Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, with continuous speech as a sample paradigm, on a freely available EEG dataset of audiobook listening.
A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics.
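As a flavour of what such an analysis looks like, here is a minimal Eelbrain sketch using its boosting-based TRF estimator; the placeholder data, lag window and partition count are assumptions, and the companion repository contains the authors' actual pipeline.

```python
# Sketch: a one-predictor TRF analysis with Eelbrain.
import numpy as np
from eelbrain import NDVar, UTS, boosting

fs = 100
n = 60 * fs                         # one minute of placeholder data
time = UTS(0, 1 / fs, n)            # shared time axis
envelope = NDVar(np.random.rand(n), (time,), name="envelope")  # stand-in
eeg = NDVar(np.random.rand(n), (time,), name="eeg")            # stand-in

# Estimate a TRF from -100 ms to 500 ms with cross-validated boosting.
res = boosting(eeg, envelope, tstart=-0.1, tstop=0.5, partitions=4)
print(res.r)    # predictive correlation: strength of neural tracking
trf = res.h     # the estimated temporal response function (an NDVar)
```

Passing a list of predictors instead of one yields the multivariate TRF (mTRF) discussed below.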
More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the brain responses, and uses those as predictor variables for the neural signal. This is analogous to a multiple regression problem, but with the addition of a time dimension.
TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this variable? And if so, (2) what are the characteristics of the response associated with it?
Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories to brain responses through computational models with appropriate hypotheses.
Scientific Reports, 2024, 14(1). Published: Jan. 8, 2024
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher-level cognition.
Studies of the cortex have revealed distinct brain responses to music and speech, but the differences may emerge in the cortex or may be inherited from different subcortical encoding.
In the first part of this study, we derived the auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes.
The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech.
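The general scheme behind both methods can be sketched as a deconvolution of the EEG against a stimulus regressor; switching from an acoustically based regressor (e.g. rectified audio) to the simulated output of a periphery model leaves the deconvolution itself unchanged. The regularisation and lag window below are illustrative assumptions.

```python
# Sketch: deriving an ABR-like response by regularised frequency-domain
# deconvolution of EEG against a stimulus regressor.
import numpy as np

def derive_abr(regressor, eeg, fs, t_max=0.015, reg=1e-2):
    """Return the impulse response up to t_max seconds."""
    R = np.fft.rfft(regressor)
    E = np.fft.rfft(eeg)
    H = (np.conj(R) * E) / (np.abs(R) ** 2 + reg * np.mean(np.abs(R) ** 2))
    h = np.fft.irfft(H, n=len(eeg))
    return h[: int(t_max * fs)]

# Acoustic regressor: rectified audio. Model-based: periphery output.
# abr = derive_abr(np.maximum(audio, 0), eeg, fs=10_000)
```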
We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way the stimulus acoustics are encoded subcortically.
In the study's second part, we considered the cortex. Our new analysis method resulted in the cortical responses to music and speech becoming more similar, though with remaining differences.
These results taken together suggest that there is evidence for stimulus-class-dependent processing of music and speech at the cortical but not the subcortical level.
Data, 2024, 9(8), 94. Published: July 26, 2024
Researchers
investigating
the
neural
mechanisms
underlying
speech
perception
often
employ
electroencephalography
(EEG)
to
record
brain
activity
while
participants
listen to spoken language. The high temporal resolution of EEG enables the study of neural responses to fast and dynamic speech signals.
Previous studies have successfully extracted speech characteristics from EEG data and, conversely, predicted EEG activity from speech features.
Machine learning techniques are generally employed to construct encoding and decoding models, which necessitate a substantial quantity of data.
We present SparrKULee, a Speech-evoked Auditory Repository of EEG data, measured at KU Leuven, comprising 64-channel EEG recordings from 85 young individuals with normal hearing, each of whom listened to 90–150 min of natural speech.
This dataset is more extensive than any currently available in terms of both the number of participants and the amount of data per participant. It is suitable for training larger machine learning models.
We evaluate the dataset using linear and state-of-the-art non-linear models in speech encoding/decoding and match/mismatch paradigms, providing benchmark scores for future research.
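A typical linear baseline on such a dataset is a backward model that reconstructs the speech envelope from lagged EEG and is scored by correlation; the sketch below is an illustrative stand-in under assumed lag and ridge settings, not the authors' benchmark code.

```python
# Sketch: linear backward model (envelope reconstruction from EEG).
import numpy as np

def lag_eeg(eeg, n_lags):
    """Stack delayed copies of all channels: (n, ch) -> (n, ch * n_lags)."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * n_lags))
    for k in range(n_lags):
        X[k:, k * ch:(k + 1) * ch] = eeg[:n - k]
    return X

def decoder_score(eeg_tr, env_tr, eeg_te, env_te, n_lags=25, ridge=1e3):
    Xtr, Xte = lag_eeg(eeg_tr, n_lags), lag_eeg(eeg_te, n_lags)
    w = np.linalg.solve(Xtr.T @ Xtr + ridge * np.eye(Xtr.shape[1]),
                        Xtr.T @ env_tr)
    return np.corrcoef(Xte @ w, env_te)[0, 1]  # reconstruction accuracy
```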
Nature Communications, 2023, 14(1). Published: Dec. 1, 2023
Even
prior
to
producing
their
first
words,
infants
are
developing
a
sophisticated
speech
processing
system,
with
robust
word
recognition
present
by
4-6
months
of
age.
These emergent linguistic skills, observed with behavioural investigations, likely rely on increasingly sophisticated neural underpinnings.
The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding.
Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the encoding of phonetic features in a longitudinal cohort of infants when aged 4, 7 and 11 months, as well as in adults.
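In such analyses the phonetic-feature predictor is typically a set of binary time series, one per feature, active while a phoneme carrying that feature is being spoken. The sketch below builds one from forced-alignment output; the feature inventory and annotation format are illustrative assumptions.

```python
# Sketch: building a phonetic-feature predictor matrix for TRF analysis.
import numpy as np

FEATURES = {"b": {"plosive", "voiced"}, "s": {"fricative"}}  # toy inventory

def phonetic_predictor(phones, feature_names, fs, duration):
    """phones: list of (label, onset_s, offset_s) from a forced aligner."""
    X = np.zeros((int(duration * fs), len(feature_names)))
    for label, on, off in phones:
        for j, feat in enumerate(feature_names):
            if feat in FEATURES.get(label, ()):
                X[int(on * fs):int(off * fs), j] = 1.0
    return X
```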
The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories.
By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.