Neurobiology of Language, Journal Year: 2022, Volume and Issue: 4(1), P. 29 - 52. Published: Oct. 4, 2022.
Partial speech input is often understood to trigger rapid and automatic activation of successively higher-level representations of words, from sound to meaning. Here we show evidence from magnetoencephalography that this type of incremental processing is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified word recognition process than is often assumed. We present evidence from isolated words that neural effects of phoneme probability, quantified by phoneme surprisal, are significantly stronger than the (statistically null) effects of phoneme-by-phoneme lexical uncertainty, quantified by cohort entropy. In contrast, we find robust effects of both cohort entropy and phoneme surprisal during perception of connected speech, with a significant interaction between the two contexts. This dissociation rules out models in which the two measures are common indicators of a uniform process, even though these closely related information-theoretic measures both arise from the probability distribution of wordforms consistent with the input. We propose that phoneme surprisal effects reflect access to a lower-level representation of the auditory input (e.g., wordforms), while the occurrence of cohort entropy effects is task sensitive, driven by a competition process that is engaged late (or not at all) during the processing of single words.
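Both measures are computed from the same phoneme-by-phoneme cohort distribution; a standard formulation (notation assumed here, not taken from the abstract) for the $k$-th phoneme $p_k$ of a word is

$\mathrm{surprisal}(p_k) = -\log_2 P(p_k \mid p_1 \dots p_{k-1})$,

$\mathrm{entropy}(k) = -\sum_{w \in C_k} P(w \mid p_1 \dots p_k)\,\log_2 P(w \mid p_1 \dots p_k)$,

where $C_k$ is the cohort of wordforms consistent with the phonemes heard so far.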
Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(32). Published: Aug. 3, 2022.
Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and the representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
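As a rough illustration of what quantifying contextual predictions with GPT-2 can look like in practice, the sketch below computes per-token surprisal with the Hugging Face transformers library. It is a minimal, generic example, not the authors' analysis pipeline, and the example sentence is hypothetical.

```python
# Minimal sketch: per-token surprisal (-log2 probability) under GPT-2,
# using the Hugging Face `transformers` library. Not the authors' code.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal in bits) for each token after the first."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(ids).logits                      # (1, n_tokens, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    target = ids[:, 1:]                                 # each token given its context
    token_logp = log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    surprisal = -token_logp / torch.log(torch.tensor(2.0))
    tokens = tokenizer.convert_ids_to_tokens(ids[0].tolist())[1:]
    return list(zip(tokens, surprisal[0].tolist()))

print(token_surprisals("The brain predicts upcoming words."))
```

Word-level surprisal can then be obtained by summing surprisal over the sub-word tokens belonging to each word.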
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish between these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, early responses also reflect a unified model that incorporates sentence-level constraints. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for the more local models. These results suggest that speech processing recruits local and unified predictive models in parallel, reconciling disparate earlier findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.
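One way to make the contrast between local and unified context models concrete (notation assumed here, not drawn from the abstract) is as different conditioning sets for the distribution over the next phoneme $p_k$ of the current word:

$P_{\text{sublexical}}(p_k \mid p_{k-n} \dots p_{k-1}), \quad P_{\text{word}}(p_k \mid p_1 \dots p_{k-1}), \quad P_{\text{unified}}(p_k \mid \text{preceding words},\, p_1 \dots p_{k-1}),$

where the sublexical model conditions only on a short phoneme history, the word model conditions on the phonemes of the current word heard so far, and the unified model additionally conditions on the sentence-level context.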
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the brain responses, and uses those as predictor variables for the brain signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the characteristics of it? Thus, different predictor variables can be systematically combined and evaluated jointly to model neural processing at multiple levels. We discuss applications of this approach, including its potential for linking algorithmic/representational theories to brain responses through computational models with appropriate linking hypotheses.
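To illustrate the idea of "multiple regression with a time dimension" described above, the sketch below estimates an mTRF by ridge regression on time-lagged copies of two predictors. It is a generic, self-contained numpy illustration with simulated data, not Eelbrain's own API; the companion GitHub repository contains the actual analysis code.

```python
# Generic mTRF sketch: time-lagged multiple regression of a brain signal
# onto stimulus predictor variables (ridge regression). Not Eelbrain's API.
import numpy as np

def lag_matrix(x, n_lags):
    """Stack time-lagged copies of predictor x (n_times,) into (n_times, n_lags)."""
    X = np.zeros((len(x), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = x[: len(x) - lag]
    return X

def fit_mtrf(predictors, y, n_lags, alpha=1.0):
    """Estimate one TRF (n_lags weights) per predictor via ridge regression."""
    X = np.hstack([lag_matrix(p, n_lags) for p in predictors])
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return w.reshape(len(predictors), n_lags)

# Hypothetical example: envelope + word-onset predictors for one EEG channel
rng = np.random.default_rng(0)
fs = 100                                      # sampling rate (Hz)
envelope = rng.random(60 * fs)                # 60 s of a stimulus envelope
word_onsets = (rng.random(60 * fs) > 0.99).astype(float)
eeg = np.convolve(envelope, rng.standard_normal(40), mode="full")[: 60 * fs]
trfs = fit_mtrf([envelope, word_onsets], eeg, n_lags=40)   # 0-400 ms lags
print(trfs.shape)                             # (2 predictors, 40 lags)
```

The two questions from the abstract then map onto (1) whether adding a predictor's lag columns improves held-out prediction of the brain signal and (2) the shape of the estimated TRF over lags.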
Journal of Neuroscience, Journal Year: 2022, Volume and Issue: 42(39), P. 7442 - 7453. Published: Aug. 30, 2022.
When listening to continuous speech, the human brain can track features of the presented speech signal. It has been shown that neural tracking of acoustic features is a prerequisite for speech understanding and can predict speech understanding in controlled circumstances. However, the brain also tracks linguistic features of speech, which may be more directly related to speech understanding. We investigated acoustic and linguistic speech processing as a function of varying speech understanding by manipulating the speech rate. In this paradigm, acoustic and linguistic speech processing are affected simultaneously but in opposite directions: as the speech rate increases, more acoustic information per second is present. In contrast, linguistic processing becomes more challenging when speech is less intelligible at higher rates. We measured the EEG of 18 participants (4 male) who listened to speech at various rates. As expected and confirmed by the behavioral results, speech understanding decreased with increasing speech rate. Accordingly, with increasing speech rate, linguistic neural tracking decreased while acoustic neural tracking increased. This indicates that the neural responses to linguistic representations capture the gradual effect of decreasing speech understanding. In addition, increased acoustic neural tracking does not necessarily imply better speech understanding. This suggests that, although more challenging to measure because of the low signal-to-noise ratio, linguistic neural tracking may be a more direct predictor of speech understanding.

SIGNIFICANCE STATEMENT An increasingly popular method to investigate neural speech processing is to measure neural tracking. Although much research has been done on how the brain tracks acoustic speech features, linguistic speech features have received less attention. In this study, we disentangled acoustic and linguistic neural tracking characteristics via a speech rate manipulation. A proper way of objectively measuring auditory and language processing paves the way toward clinical applications: an objective measure of speech understanding would allow behavioral-free evaluation of speech understanding, which in turn allows evaluation of hearing loss and adjustment of hearing aids based on brain responses. This would benefit populations from whom obtaining behavioral measures is complex, such as young children or people with cognitive impairments.
Scientific Reports, Journal Year: 2023, Volume and Issue: 13(1). Published: Jan. 16, 2023.
Abstract
To investigate the processing of speech in the brain, commonly simple linear models are used to establish a relationship between brain signals and speech features. However, these models are ill-equipped to model a highly dynamic, complex and non-linear system like the brain, and they often require a substantial amount of subject-specific training data. This work introduces a novel decoder architecture: the Very Large Augmented Auditory Inference (VLAAI) network. The VLAAI network outperformed state-of-the-art subject-independent models (median Pearson correlation of 0.19, p < 0.001), yielding an increase over the well-established linear model of 52%. Using ablation techniques, we identified the relative importance of each part of the network and found that its non-linear components and output context module influenced model performance the most (10% increase). Subsequently, the VLAAI network was evaluated on a holdout dataset of 26 subjects and on a publicly available unseen dataset, to test generalization to unseen subjects and stimuli. No significant difference was found between the default test set of subjects and the holdout subjects, nor between the default test set and the public dataset. The VLAAI network also significantly outperformed all baseline models on the public dataset. We evaluated the effect of training set size, using data from 1 up to 80 subjects, revealing performance that improved following a hyperbolic tangent function of the number of subjects. Finally, the VLAAI network was finetuned to obtain subject-specific models. With 5 minutes of data or more, a significant improvement was found, of up to 34% (from 0.18 to 0.25 median Pearson correlation) with regard to the subject-independent VLAAI network.
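For context, the reported scores are Pearson correlations between a decoder's reconstructed speech envelope and the actual envelope, summarized by the median across subjects. The sketch below is a minimal, generic version of that evaluation with simulated data, not the VLAAI implementation.

```python
# Minimal sketch of the evaluation metric reported above: per-subject Pearson
# correlation between reconstructed and actual speech envelopes, summarized
# by the median across subjects. Generic code, not the VLAAI implementation.
import numpy as np
from scipy.stats import pearsonr

def score_subject(reconstructed: np.ndarray, envelope: np.ndarray) -> float:
    """Pearson r between one subject's reconstruction and the true envelope."""
    return pearsonr(reconstructed, envelope)[0]

# Hypothetical data: 10 subjects, 60 s envelopes at 64 Hz, noisy reconstructions
rng = np.random.default_rng(0)
envelope = rng.random(60 * 64)
scores = [score_subject(envelope + rng.standard_normal(envelope.size) * 4, envelope)
          for _ in range(10)]
print(f"median Pearson correlation: {np.median(scores):.2f}")
```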
Human Brain Mapping, Journal Year: 2024, Volume and Issue: 45(8). Published: May 26, 2024.
Abstract
Aphasia is a communication disorder that affects processing of language at different levels (e.g., acoustic, phonological, semantic). Recording brain activity via electroencephalography while people listen to a continuous story allows us to analyze brain responses to acoustic and linguistic properties of speech. When the neural activity aligns with these speech properties, it is referred to as neural tracking. Even though measuring neural tracking may present an interesting approach to studying aphasia in an ecologically valid way, it has not yet been investigated in individuals with stroke‐induced aphasia. Here, we explored acoustic and linguistic speech representations in individuals with aphasia in the chronic phase after stroke and in age‐matched healthy controls. We found decreased tracking of acoustic speech representations (envelope and envelope onsets) in individuals with aphasia. In addition, word surprisal displayed decreased amplitudes in individuals with aphasia around 195 ms over frontal electrodes, although this effect was not corrected for multiple comparisons. These results show that there is potential to capture language processing impairments by measuring neural tracking of continuous speech. However, more research is needed to validate these results. Nonetheless, this exploratory study shows that naturalistic, continuous speech presents a powerful approach to studying aphasia.
Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1). Published: Jan. 8, 2024.
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher-level cognition. Studies of the cortex have revealed distinct brain responses to music and speech, but the differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way the acoustics are encoded subcortically. In the study's second part, we considered the cortex. Our new analysis method resulted in cortical responses to music and speech becoming more similar, but with remaining differences. These results taken together suggest that there is evidence for stimulus-class dependent processing of music and speech at the cortical level but not at the subcortical level.
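Both analysis methods derive a response to continuous sound by regressing the EEG against a stimulus-based regressor (acoustic in one case, the output of an auditory-periphery model in the other). The sketch below shows the general idea with a generic regularized frequency-domain deconvolution on simulated data; it is an assumed illustration, not the authors' pipeline.

```python
# Generic sketch: derive a response to continuous sound by regularized
# frequency-domain deconvolution of the EEG against a stimulus regressor
# (e.g., rectified audio or an auditory-nerve model output). Not the authors' code.
import numpy as np

def derive_response(regressor: np.ndarray, eeg: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Estimate impulse response h such that eeg ~ regressor * h (circular approximation)."""
    R = np.fft.rfft(regressor)
    E = np.fft.rfft(eeg)
    H = (np.conj(R) * E) / (np.abs(R) ** 2 + lam * np.mean(np.abs(R) ** 2))
    return np.fft.irfft(H, n=len(regressor))

# Hypothetical example: 10 s of a regressor at 1 kHz with a known 20-sample kernel
rng = np.random.default_rng(1)
fs = 1000
regressor = rng.standard_normal(10 * fs)
kernel = np.hanning(20)
eeg = np.convolve(regressor, kernel, mode="full")[: 10 * fs] + rng.standard_normal(10 * fs)
response = derive_response(regressor, eeg)
print(np.argmax(response[:50]))   # peak latency (in samples) of the recovered response
```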
PLoS Computational Biology, Journal Year: 2021, Volume and Issue: 17(9), P. e1009358 - e1009358. Published: Sept. 17, 2021.
The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one’s cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general tracking mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies examined. Additionally, models trained on all stimulus types performed as well as or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
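A rough illustration of the frequency-constrained comparison described above: band-limit both the actual and the reconstructed envelopes before correlating them, so that tracking of speech and music can be compared within matched modulation-frequency bands. This is a generic sketch under assumed parameters and simulated data, not the authors' implementation.

```python
# Generic sketch: evaluate envelope reconstruction within narrow modulation bands
# by band-pass filtering both signals before computing the Pearson correlation.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

def band_limited_r(true_env, recon_env, fs, lo, hi):
    """Pearson r between true and reconstructed envelopes within [lo, hi] Hz."""
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return pearsonr(filtfilt(b, a, true_env), filtfilt(b, a, recon_env))[0]

# Hypothetical example at 64 Hz: compare tracking below 1 Hz vs in the 4-8 Hz band
rng = np.random.default_rng(2)
fs = 64
true_env = rng.random(120 * fs)
recon_env = true_env + rng.standard_normal(120 * fs)
print(band_limited_r(true_env, recon_env, fs, 0.3, 1.0))
print(band_limited_r(true_env, recon_env, fs, 4.0, 8.0))
```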
Frontiers in Neurology, Journal Year: 2022, Volume and Issue: 13. Published: May 20, 2022.
Peripheral nerve injury (PNI) is very common in clinical practice; it often reduces the quality of life of patients and imposes a serious medical burden on society. However, to date, there have been no bibliometric analyses of the PNI field from 2017 to 2021. This study aimed to provide a comprehensive overview of the current state of research and frontier trends in the field from a bibliometric perspective. Articles and reviews published from 2017 to 2021 were extracted from the Web of Science database. An online bibliometric platform, CiteSpace, and VOSviewer software were used to generate viewable views and to perform co-occurrence analysis, co-citation analysis, and burst analysis. Quantitative indicators such as the number of publications, citation frequency, h-index, and the impact factor of journals were analyzed using the "Create Citation Report" and "Journal Citation Reports" functions of the Web of Science database and Excel software. A total of 4,993 papers was identified. The number of annual publications remained high, with an average of more than 998 per year. The number of citations increased year by year, reaching a high of 22,272. The United States and China had a significant influence in the field. Johns Hopkins University, USA occupied the leading position in this field. JESSEN KR and the JOURNAL OF NEUROSCIENCE were the most influential author and journal in the field, respectively. Meanwhile, we found that hot topics focused on dorsal root ganglion (DRG) and satellite glial cells (SGCs) for neuropathic pain relief, and on combining tissue engineering techniques with control of the repair Schwann cell phenotype to promote regeneration, which are not only the focus of research now but are also forecast to continue in the future. This is the first study to conduct a bibliometric analysis of research related to PNI from 2017 to 2021, and its results can serve as a reliable source for researchers to quickly understand key information in the field and identify potential research frontiers and hot directions.
NeuroImage, Journal Year: 2022, Volume and Issue: 267, P. 119841 - 119841. Published: Dec. 28, 2022.
Background: Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. Goals: Our goals are to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing. Moreover, we also study acoustic processing across age. In particular, we focus on changes in the spatial and temporal activation patterns in response to speech across the lifespan. Methods: 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while their EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and with measures of cognition, we investigated whether the observed age effects are mediated by these factors. Furthermore, we investigated whether there is an effect of age on hemisphere lateralization and on the spatiotemporal patterns of the neural responses. Results: Our results showed that linguistic speech processing declines with advancing age. Moreover, as age increased, the latency of certain aspects of linguistic processing increased. Also acoustic neural tracking (NT) decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization, in neither younger nor older adults, during natural speech processing. Most effects of age were not explained by an age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in word-level neural tracking is partially due to cognition rather than being a robust effect of age. Conclusion: Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan. These changes may be traces of structural and/or functional change that occurs with advancing age.