NeuroImage, Journal Year: 2022, Volume and Issue: 264, P. 119698 - 119698, Published: Oct. 18, 2022
Working memory load can modulate speech perception. However, since speech perception and working memory are both complex functions, it remains elusive how each component of the working memory system interacts with each speech processing stage. To investigate this issue, we concurrently measure how working memory load modulates neural activity tracking three levels of linguistic units, i.e., syllables, phrases, and sentences, using a multiscale frequency-tagging approach. Participants engage in a sentence comprehension task while the working memory load is manipulated by asking them to memorize either auditory verbal sequences or visual patterns. It is found that verbal and visual memory load modulate speech processing in similar manners: Higher load attenuates the neural tracking of phrases and sentences but enhances the tracking of syllables. Since verbal and visual WM load similarly influence the neural responses to speech, such influences may derive from a domain-general attention system. More importantly, memory load asymmetrically modulates lower-level speech encoding and higher-level linguistic processing, possibly reflecting a reallocation of attention induced by the mnemonic load.
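As a rough illustration of the frequency-tagging logic mentioned in this abstract, the sketch below reads out spectral power at assumed syllable, phrase, and sentence rates from a single recorded channel. This is not the study's actual pipeline; the 4/2/1 Hz rates, the sampling rate, and the synthetic data are assumptions made only for the example.

```python
import numpy as np

# Illustrative tagging rates (Hz): syllable, phrase, sentence.
# These are assumptions for the example, not the study's actual design.
RATES = {"syllable": 4.0, "phrase": 2.0, "sentence": 1.0}

def tagged_power(signal, fs):
    """Return spectral power at each tagged rate for one channel."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / n
    return {name: power[np.argmin(np.abs(freqs - f))] for name, f in RATES.items()}

# Synthetic example: 40 s of "neural" data at 250 Hz containing weak
# sentence-rate (1 Hz) and phrase-rate (2 Hz) components plus noise.
fs = 250
t = np.arange(0, 40, 1 / fs)
signal = (0.5 * np.sin(2 * np.pi * 1.0 * t)
          + 0.3 * np.sin(2 * np.pi * 2.0 * t)
          + np.random.randn(t.size))
print(tagged_power(signal, fs))
```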
Paying attention to one speaker in a noisy place can be extremely difficult, because to-be-attended and task-irrelevant speech compete for processing resources. We tested whether this competition is restricted to acoustic-phonetic interference or if it extends to linguistic processing as well. Neural activity was recorded using Magnetoencephalography as human participants were instructed to attend to natural speech presented to one ear, with task-irrelevant stimuli presented to the other. Task-irrelevant stimuli consisted either of random sequences of syllables, or syllables structured to form coherent sentences, using hierarchical frequency-tagging. We find that the phrasal structure of task-irrelevant stimuli is represented in the neural response in left inferior frontal and posterior parietal regions, indicating that selective attention does not fully eliminate the linguistic processing of task-irrelevant speech. Additionally, neural tracking in these regions was enhanced when competing with structured task-irrelevant stimuli, suggesting inherent competition between them for linguistic processing.
Linguistic phrases are tracked in sentences even though there is no one-to-one acoustic phrase marker in the physical signal. This phenomenon suggests an automatic tracking of abstract linguistic structure that is endogenously generated by the brain. However, all studies investigating linguistic tracking compare conditions where either relevant information at linguistic timescales is available, or where this information is absent altogether (e.g., sentences versus word lists during passive listening). It is therefore unclear whether tracking at phrasal timescales is related to the content of language, or rather results as a consequence of attending to timescales that happen to match behaviourally relevant information. To investigate this question, we presented participants with sentences and word lists while recording their brain activity with magnetoencephalography (MEG). Participants performed passive, syllable, word, and word-combination tasks corresponding to four different attended rates: one they would naturally attend to, and the syllable-rates, word-rates, and phrasal-rates, respectively. We replicated the overall finding of stronger phrasal-rate tracking, measured with mutual information, for sentences compared to word lists across the classical language network. In the inferior frontal gyrus (IFG) we found a task effect suggesting tracking that is independent of the presence of linguistic structure, as well as delta-band connectivity that was modulated by task. These results suggest that extracting linguistic information at phrasal rates occurs automatically, without an additional task, but also that the IFG might be important for temporal integration across various perceptual domains.
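The phrasal-rate tracking described above is quantified with mutual information between stimulus and brain signals. The snippet below is a minimal sketch of one common estimator, a simplified Gaussian-copula mutual information without bias correction, applied to band-limited signals. The filter band, sampling rate, and synthetic data are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.stats import norm, rankdata

def gaussian_copula_mi(x, y):
    """Simplified Gaussian-copula mutual information (bits), no bias correction."""
    cx = norm.ppf(rankdata(x) / (len(x) + 1))   # rank-transform to standard normal
    cy = norm.ppf(rankdata(y) / (len(y) + 1))
    r = np.corrcoef(cx, cy)[0, 1]
    return -0.5 * np.log2(1 - r ** 2)

def bandpass(signal, fs, lo, hi, order=3):
    """Zero-phase band-pass filter (used here for an assumed phrasal band)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Synthetic example: a speech envelope and a "brain" signal sharing a 1 Hz component.
fs = 100
t = np.arange(0, 60, 1 / fs)
envelope = 0.5 + 0.5 * np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)
brain = 0.4 * np.sin(2 * np.pi * 1.0 * t + 0.5) + np.random.randn(t.size)
mi = gaussian_copula_mi(bandpass(envelope, fs, 0.5, 1.5),
                        bandpass(brain, fs, 0.5, 1.5))
print(f"phrasal-band mutual information: {mi:.3f} bits")
```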
NeuroImage, Journal Year: 2023, Volume and Issue: 270, P. 119984 - 119984, Published: Feb. 26, 2023
Speech comprehension is severely compromised when several people talk at once, due to limited perceptual and cognitive resources. In such circumstances, top-down attention mechanisms can actively prioritize processing of task-relevant speech. However, behavioral and neural evidence suggest that this selection is not exclusive, and the system may have sufficient capacity to process additional speech input as well. Here we used a data-driven approach to contrast two opposing hypotheses regarding the system's capacity to co-represent competing speech: Can the brain represent two speakers equally, or is it fundamentally limited, resulting in tradeoffs between them? Neural activity was measured using magnetoencephalography (MEG) as human participants heard two concurrent narratives and engaged in two tasks: Selective Attention, where only one speaker was task-relevant, and Distributed Attention, where both speakers were relevant. Analysis of the speech-tracking response revealed that both tasks engaged a similar network of brain regions involved in auditory processing, attentional control and speech processing. Interestingly, even during Distributed Attention the neural representation showed a bias towards one speaker. This is in line with proposed 'bottlenecks' for co-representation of concurrent speech and suggests that good performance on distributed attention tasks may be achieved by toggling attention between speakers over time.
NeuroImage, Journal Year: 2023, Volume and Issue: 272, P. 120040 - 120040, Published: March 17, 2023
During listening, brain activity tracks the rhythmic structures of speech signals. Here, we directly dissociated the contribution of neural envelope tracking related to the processing of acoustic cues from that related to linguistic processing. We examined changes in neural tracking associated with the comprehension of Noise-Vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a 3-phase training paradigm: (1) pre-training, where the stimuli were barely comprehended, (2) training, with exposure to the original clear version of each stimulus, and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested if the neural responses to the NV signal were modulated by its intelligibility without any change in acoustic structure. To test the influence of spectral degradation on envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech), but were only trained to understand 4-band speech. Significant changes in neural tracking were observed in the delta range in relation to training. However, we failed to find a direct effect of intelligibility in both the delta and theta ranges, in auditory regions-of-interest as well as in whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural response to the speech envelope, and that caution needs to be taken when choosing control signals for speech-brain tracking analyses, considering that slight changes in acoustic parameters can have strong effects on the neural tracking response.
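For readers unfamiliar with noise-vocoding, the following sketch shows one simplified way to build n-band noise-vocoded speech: band-limited envelopes of the original signal modulating noise carriers in the same bands. The band edges, filter order, and synthetic input are assumptions made for illustration; the study's own vocoding parameters are not given in this abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands, f_lo=100.0, f_hi=4000.0):
    """Simplified n-band noise vocoder with logarithmically spaced bands."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))                        # band envelope
        carrier = sosfiltfilt(sos, np.random.randn(len(speech)))
        out += env * carrier                               # envelope-modulated noise
    return out / np.max(np.abs(out))

# Example: vocode the same synthetic "speech-like" signal with 4 and 2 bands.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
nv_4band = noise_vocode(speech, fs, n_bands=4)
nv_2band = noise_vocode(speech, fs, n_bands=2)
```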
NeuroImage, Journal Year: 2022, Volume and Issue: 254, P. 119150 - 119150, Published: March 26, 2022
Electroencephalography (EEG) is a non-invasive and painless recording of cerebral activity, particularly well-suited for studying young infants, allowing the inspection of neural responses in a constellation of different ways. Of particular interest to developmental cognitive neuroscientists is the use of rhythmic stimulation and the analysis of steady-state evoked potentials (SS-EPs), an approach also known as frequency tagging. In this paper we rely on the existing SS-EP early developmental literature to illustrate the important advantages of SS-EPs for studying the developing brain. We argue that (1) the technique is both objective and predictive: the response is expected at the stimulation frequency (and/or its higher harmonics), (2) its high spectral specificity makes the computed responses robust to artifacts, and (3) it allows for short and efficient recordings, compatible with infants' limited attentional spans. We additionally provide an overview of some recent and inspiring adult research, in order to illustrate how (4) SS-EP designs can be implemented creatively to target a wide range of neural processes. For all these reasons, we expect SS-EPs to play an increasing role in the understanding of the developing brain. Finally, we provide practical guidelines for implementing and analyzing SS-EP studies.
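A minimal sketch of the SS-EP logic summarized above: because the response is expected exactly at the stimulation frequency, it can be quantified as the spectral amplitude at that frequency relative to neighbouring frequency bins. The sampling rate, stimulation frequency, and neighbourhood size below are illustrative assumptions, not recommendations from the paper.

```python
import numpy as np

def ssep_snr(eeg, fs, f_target, n_neighbors=10, skip=1):
    """Amplitude SNR at a tagged frequency: target bin vs. mean of nearby bins."""
    n = len(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(eeg)) / n
    i = int(np.argmin(np.abs(freqs - f_target)))
    # Neighbouring bins on both sides, skipping the bins adjacent to the target.
    neighbors = np.r_[i - skip - n_neighbors:i - skip,
                      i + skip + 1:i + skip + 1 + n_neighbors]
    return amp[i] / amp[neighbors].mean()

# Example: 60 s of synthetic EEG with a weak 6 Hz steady-state response.
fs, f_stim = 500, 6.0
t = np.arange(0, 60, 1 / fs)
eeg = 0.2 * np.sin(2 * np.pi * f_stim * t) + np.random.randn(t.size)
print(f"SNR at {f_stim} Hz: {ssep_snr(eeg, fs, f_stim):.2f}")
```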
iScience, Journal Year: 2023, Volume and Issue: 26(6), P. 106849 - 106849, Published: May 12, 2023
Selective attention modulates the neural tracking of speech in auditory cortical regions. It is unclear whether this attentional modulation is dominated by enhanced tracking of the target, or by suppression of distraction. To settle this long-standing debate, we employed an augmented electroencephalography (EEG) speech-tracking paradigm with target, distractor, and neutral streams. Concurrent target and distractor streams (i.e., sometimes task-relevant) were juxtaposed with a third, never task-relevant stream serving as a neutral baseline. Listeners had to detect short repeats in the target stream and committed more false alarms originating from the distractor than from the neutral stream. Speech tracking revealed target enhancement but no distractor suppression below the neutral baseline. Tracking of the target (but not the distractor speech) explained single-trial accuracy in repeat detection. In sum, the neural representation of concurrent speech reflects specific processes of gain for behaviorally relevant target speech rather than suppression of distraction.
Ear and Hearing, Journal Year: 2025, Volume and Issue: unknown, Published: Feb. 26, 2025
If task-irrelevant sounds are present when someone is actively listening to speech, the irrelevant sounds can cause distraction, reducing word recognition performance and increasing listening effort. In some previous investigations into auditory distraction, the task-irrelevant stimuli were non-speech sounds (e.g., laughter, animal sounds, music), which are known to elicit a variety of emotional responses. Variations in the emotional response to a sound could influence its distraction effect. The goal of this study was to examine the relationship between arousal (exciting versus calming) or valence (positive versus negative) and distraction. Using sounds that have been used previously in a word recognition task, we sought to determine whether these stimulus characteristics affected word recognition and verbal response times (which serve as a measure of behavioral listening effort). We anticipated that perceived arousal and valence would be related to distraction from the target stimuli.
In an online study, 19 young adult listeners rated sounds that had served as task-irrelevant stimuli in previous studies. Word recognition and response-time data from these studies were reanalyzed to evaluate the effect of sound category on performance in quiet and in noise. In addition, correlation analyses were conducted between ratings of valence and arousal, word recognition performance, and response times. The presence of task-irrelevant sounds reduced performance. This was observed generally, for exciting sounds (in noise) and calming sounds (in quiet). Task-irrelevant sounds also slowed reaction times: background noise increased response times by approximately 35 msec, whereas all task-irrelevant stimuli, regardless of category, increased response times by more than 200 msec relative to the condition with no task-irrelevant sounds. Valenced sounds caused the largest increases in response times; beyond that, there was no difference based on category. Correlation analyses with the dependent variables (word recognition and response time) revealed that, in quiet, there were weak but statistically significant correlations with (absolute deviation from neutral) valence scores; the more valenced a stimulus, the more distracting it was in terms of response times. Significant correlations were not evident when participants completed the speech task in noise. There is evidence that the emotional characteristics of task-irrelevant sounds (arousal and valence) can negatively affect word recognition and increase response times. Future work should consider the emotional content of task-irrelevant sounds when evaluating their potential distraction effects.
Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models accredit this context invariance to an extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that neural networks modulated by feedback can dynamically generate invariant representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation reorients the manifold of population activity and thereby maintains an invariant subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
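A toy caricature of the population-level point made above, not the paper's model: in this linear sketch, a single spatially diffuse gain shared by all neurons (the "feedback") compensates a contextual change in input drive. Individual neuron responses still differ across contexts, yet a fixed population readout stays invariant. All quantities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50

W = rng.standard_normal((n_neurons, 2))        # feedforward weights for a 2-D stimulus
shift_dir = rng.standard_normal(n_neurons)     # direction of a context-driven baseline shift
shift_dir /= np.linalg.norm(shift_dir)
readout = rng.standard_normal(n_neurons)       # fixed downstream readout ...
readout -= (readout @ shift_dir) * shift_dir   # ... chosen within the invariant subspace
readout /= np.linalg.norm(readout)

stimuli = rng.standard_normal((200, 2))        # a sequence of stimuli

def responses(context_scale, context_shift, gain):
    """Population response: the context scales the drive and shifts the baseline;
    a single diffuse gain (the 'feedback') rescales every neuron identically."""
    return gain * context_scale * (stimuli @ W.T) + context_shift * shift_dir

r_baseline = responses(context_scale=1.0, context_shift=0.0, gain=1.0)
r_context = responses(context_scale=2.0, context_shift=3.0, gain=0.5)  # gain compensates

neuron_change = np.abs(r_context - r_baseline).mean()               # large: neurons change
readout_change = np.abs((r_context - r_baseline) @ readout).mean()   # ~0: readout invariant
print(f"mean single-neuron change: {neuron_change:.3f}")
print(f"mean population-readout change: {readout_change:.3f}")
```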
Current Research in Neurobiology, Journal Year: 2022, Volume and Issue: 3, P. 100043 - 100043, Published: Jan. 1, 2022
Listening to speech is difficult in noisy environments, and it is even harder when the interfering noise consists of intelligible speech as compared to unintelligible sounds. This suggests that competing linguistic information interferes with the neural processing of target speech. Interference could either arise from a degradation of the neural representation of target speech, or from an increased representation of the distracting speech that enters into competition with that of the target. We tested these alternative hypotheses using magnetoencephalography (MEG) while participants listened to a clear target speaker in the presence of noise-vocoded distractor speech. Crucially, the distractors were initially unintelligible but became more intelligible after a short training session. Results showed that target comprehension was poorer after than before training. The neural tracking of target speech in the delta range (1-4 Hz) was reduced in strength in the presence of the intelligible distractor. In contrast, the tracking of the distractor signals was not significantly modulated by their intelligibility. These results suggest that the intelligibility of distracting speech degrades the representation of target speech carried by delta oscillations.