Frontiers in Neuroscience, Journal year: 2022, Issue: 16. Published: June 14, 2022.
Neural entrainment to speech appears to rely on syllabic features, especially those pertaining to the acoustic envelope of the stimuli. It has been proposed that neural tracking also depends on phoneme features. In the present electroencephalography experiment, we examined data from 25 participants to investigate responses to near-isochronous stimuli comprising syllables beginning with different phonemes. We measured inter-trial phase coherence responses to these stimuli and assessed the relationship between this measure and acoustic properties designed to quantify their "edginess." We found that coherence differed across classes of syllable-initial phonemes and depended on the amount of "edge" in the sound envelope. In particular, the best edge marker and predictor of response latency was the maximum derivative of each syllable's envelope.
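The envelope-derivative landmark named in this abstract can be illustrated with a short sketch. The example below is an assumed, minimal re-creation rather than the authors' code: it presumes a mono audio array `audio` sampled at rate `fs` and externally supplied, hypothetical syllable spans, computes a smoothed Hilbert envelope, and reports the latency of the maximum of the envelope's first derivative within each syllable.

```python
# Sketch: locating the "edge" of each syllable as the time of the maximum of the
# first derivative of the amplitude envelope. Assumptions: a mono signal `audio`
# at sampling rate `fs`, and syllable (start, end) times in seconds supplied
# externally (hypothetical `syllable_spans`).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(audio, fs, cutoff_hz=10.0):
    """Broadband envelope: magnitude of the analytic signal, low-pass smoothed."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def edge_latencies(audio, fs, syllable_spans):
    """Per syllable, return the latency (s, relative to syllable onset) of the
    maximum of the envelope's first derivative."""
    env = amplitude_envelope(audio, fs)
    d_env = np.gradient(env) * fs            # derivative in amplitude units per second
    latencies = []
    for start_s, end_s in syllable_spans:
        i0, i1 = int(start_s * fs), int(end_s * fs)
        peak = np.argmax(d_env[i0:i1])       # steepest rise within the syllable
        latencies.append(peak / fs)
    return latencies

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    audio = np.sin(2 * np.pi * 150 * t) * (t > 0.3)   # toy "syllable" starting at 0.3 s
    print(edge_latencies(audio, fs, [(0.25, 0.6)]))   # expected latency near 0.05 s
```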
eNeuro, Journal year: 2025, Issue: unknown, P. ENEURO.0287-24.2024. Published: Jan. 3, 2025.
A comprehensive analysis of everyday sound perception can be achieved using Electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically the relevant types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including the onset, the envelope, and the mel-spectrogram. Using continuous data analysis, we contrast these feature sets in terms of their predictive power for unseen data and thus their distinct contributions to explaining the neural data. We also evaluate the results considering the impact of the context, here the density of sound events. For this, we analyse data from a complex audio-visual motor task with a naturalistic soundscape. The results demonstrated that the feature sets explaining the most variability were a combination of a highly detailed acoustic description with specific sound onsets. Furthermore, they showed that established methods can be applied to naturalistic soundscapes. Crucially, this outcome hinged on excluding periods devoid of sound onsets in the case of the discrete features. This highlights the importance of comprehensively describing the soundscape, including nonacoustic aspects, to fully understand the dynamics of sound perception in everyday situations. This approach can serve as a foundation for future studies aiming at natural settings.
Significance Statement
This study is an important step in our broader endeavor, which is to understand sound perception in everyday life. Although conducted in a stationary setting, it provides foundational insights into the environmental information that is necessary to obtain in order to explain neural responses. We delved into various features, from acoustic representations to sound-identity labeling, with the goal of refining models related to sound perception. Our findings particularly highlight the need for thorough considerations across contexts, from laboratory settings to mobile EEG technologies, and pave the way for investigations of more natural settings, advancing the field of auditory neuroscience.
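As a rough illustration of the feature-set comparison described in this entry, the sketch below predicts EEG from time-lagged stimulus features with ridge regression and scores each feature set by its correlation with held-out data. The regressors (onsets, envelope, a stand-in mel-spectrogram), the lag count, and the regularisation are assumptions for the example, not the authors' pipeline.

```python
# Sketch: contrasting feature sets (onsets, envelope, mel-spectrogram) by how well
# each predicts unseen EEG. Illustrative only; not the published analysis code.
import numpy as np
from numpy.linalg import solve

def lagged(X, n_lags):
    """Stack time-lagged copies of the feature matrix X (time x features)."""
    T, F = X.shape
    out = np.zeros((T, F * n_lags))
    for k in range(n_lags):
        out[k:, k * F:(k + 1) * F] = X[:T - k]
    return out

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression weights mapping features to EEG channels."""
    return solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def score_feature_set(X, Y, n_lags=32, lam=1.0, train_frac=0.8):
    """Fit on the first part of the recording, report the mean channel-wise
    correlation between predicted and recorded EEG on the held-out part."""
    Xl = lagged(X, n_lags)
    split = int(train_frac * len(Y))
    W = ridge_fit(Xl[:split], Y[:split], lam)
    pred = Xl[split:] @ W
    r = [np.corrcoef(pred[:, c], Y[split:, c])[0, 1] for c in range(Y.shape[1])]
    return float(np.mean(r))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, n_ch = 5000, 8
    envelope = rng.random((T, 1))
    onsets = (rng.random((T, 1)) > 0.98).astype(float)    # sparse sound-onset marker
    mel = rng.random((T, 16))                              # stand-in mel-spectrogram
    eeg = rng.standard_normal((T, n_ch)) + 0.5 * envelope  # toy EEG driven by the envelope
    for name, feats in [("onsets", onsets), ("envelope", envelope),
                        ("mel", mel), ("envelope+onsets", np.hstack([envelope, onsets]))]:
        print(name, round(score_feature_set(feats, eeg), 3))
```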
Neuroscience & Biobehavioral Reviews, Journal year: 2025, Issue: unknown, P. 106111. Published: March 1, 2025.
Hemispheric lateralization in speech and language processing exemplifies functional brain specialization. Seminal work on patients with left hemisphere damage highlighted the left-hemispheric dominance of language functions. However, speech processing is not confined to the left hemisphere. Hence, some researchers associate lateralization with auditory asymmetries: slow temporal and fine spectral acoustic information is preferentially processed in right auditory regions, while faster temporal information is primarily handled by left auditory regions. Other scholars posit that lateralization relates more to linguistic processing, particularly for speech-like stimuli. We argue that these seemingly distinct accounts are interdependent. Linguistic analysis of speech relies on top-down processes, such as predictive coding and dimension-selective attention, which enhance lateralized processing by engaging left-lateralized sensorimotor networks. Our review highlights that lateralization is weaker for simple sounds, stronger for speech-like stimuli, and strongest for meaningful speech. Evidence also shows that selective attention modulates this lateralization. We illustrate how these top-down processes rely on left-lateralized networks and provide insights into their role in speech processing.
PLoS ONE, Journal year: 2025, Issue: 20(5), P. e0320519. Published: May 8, 2025.
Music and speech encode hierarchically organized structural complexity at the service of human expressiveness and communication. Previous research has shown that populations of neurons in auditory regions track the envelope of acoustic signals within the range of slow and fast oscillatory activity. However, the extent to which cortical tracking is influenced by the interplay between stimulus type, frequency band, and brain anatomy remains an open question. In this study, we reanalyzed intracranial recordings from thirty subjects implanted with electrocorticography (ECoG) grids over the left cerebral hemisphere, drawn from an existing open-access ECoG database. Participants passively watched a movie in which visual scenes were accompanied by either music or speech stimuli. Cross-correlation between brain activity and the acoustic signals, along with density-based clustering analyses and linear mixed-effects modeling, revealed both anatomically overlapping and functionally distinct mapping of the tracking effect as a function of stimulus type and frequency band. We observed widespread left-hemisphere tracking in the Slow Frequency Band (SFB, band-pass filtered low-frequency signal at 1–8 Hz), at near-zero temporal lags. In contrast, tracking in the High Frequency Band (HFB, 70–120 Hz signal) was higher during speech perception, was more densely concentrated in classical language processing areas, and showed a frontal-to-temporal gradient of lag values that was not present during perception of the musical stimuli. Our results highlight the complex interaction between brain region and frequency band that shapes the tracking dynamics of naturalistic acoustic signals.
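The lag analysis described above can be sketched for a single channel as follows. The band limits (1–8 Hz for the SFB, 70–120 Hz amplitude for the HFB, extracted here via the Hilbert transform) follow the abstract, while the specific filter, normalisation, and synthetic data are illustrative assumptions rather than the published analysis.

```python
# Sketch: estimating the tracking lag between an ECoG channel and the stimulus
# envelope in the slow (1-8 Hz) and high-frequency (70-120 Hz amplitude) bands.
# Illustrative re-creation, not the authors' analysis code.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tracking_lag(neural, envelope, fs, max_lag_s=0.5):
    """Lag (s) at which the normalized cross-correlation between the neural
    signal and the stimulus envelope peaks. Positive = neural lags stimulus."""
    n = (neural - neural.mean()) / neural.std()
    e = (envelope - envelope.mean()) / envelope.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    xc = np.array([np.mean(np.roll(e, k) * n) for k in lags])
    return lags[np.argmax(xc)] / fs, float(xc.max())

if __name__ == "__main__":
    fs = 500
    t = np.arange(0, 20, 1 / fs)
    envelope = np.abs(np.sin(2 * np.pi * 3 * t))             # toy stimulus envelope
    ecog = np.roll(envelope, int(0.08 * fs)) + 0.3 * np.random.randn(len(t))

    sfb = bandpass(ecog, fs, 1, 8)                            # Slow Frequency Band
    hfb_amp = np.abs(hilbert(bandpass(ecog, fs, 70, 120)))    # HFB amplitude envelope
    print("SFB lag:", tracking_lag(sfb, envelope, fs))        # expected near +0.08 s
    print("HFB lag:", tracking_lag(hfb_amp, envelope, fs))    # HFB is only noise in this toy signal
```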
Scientific Reports, Journal year: 2022, Issue: 12(1). Published: Aug. 5, 2022.
Abstract
Recent research shows that adults' neural oscillations track the rhythm of the speech signal. However, the extent to which this tracking is driven by the acoustics of the signal, or by language-specific processing, remains unknown. Here, adult native listeners of three rhythmically different languages (English, French, Japanese) were compared on their cortical tracking of speech envelopes synthesized in the three languages, which allowed for coding at each language's dominant rhythmic unit, respectively the foot (2.5 Hz), syllable (5 Hz), and mora (10 Hz) level. The three language groups were also tested with a sequence in a non-native language, Polish, and a non-speech vocoded equivalent, to investigate possible differential speech/nonspeech processing. The results first showed that tracking was most prominent at 5 Hz (the syllable rate) in all groups, but was enhanced in the French group relative to the English and Japanese groups. Second, across groups, there were no differences between responses to speech versus non-speech at 5 Hz (the syllable rate), but speech was tracked better than non-speech at 10 Hz (though not at the 2.5 Hz rate). Together, these results provide evidence for both language-general and language-specific influences on cortical tracking.
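Rhythm-rate tracking of the kind quantified in this study can be sketched as coherence between an EEG channel and the speech envelope, read out at the foot, syllable, and mora rates named above. The estimator below (Welch-based coherence from SciPy) and the synthetic data are assumptions for illustration, not the authors' analysis.

```python
# Sketch: quantifying cortical tracking at the foot (2.5 Hz), syllable (5 Hz) and
# mora (10 Hz) rates as EEG-envelope coherence. Generic estimator, illustrative only.
import numpy as np
from scipy.signal import coherence

def tracking_at_rates(eeg_channel, envelope, fs, rates=(2.5, 5.0, 10.0), nperseg=1024):
    """Magnitude-squared coherence between one EEG channel and the speech envelope,
    read out at the requested rhythm-related frequencies."""
    f, cxy = coherence(eeg_channel, envelope, fs=fs, nperseg=nperseg)
    return {r: float(cxy[np.argmin(np.abs(f - r))]) for r in rates}

if __name__ == "__main__":
    fs = 250
    t = np.arange(0, 60, 1 / fs)
    envelope = 1 + np.sin(2 * np.pi * 5 * t)                  # toy syllable-rate envelope
    eeg = 0.4 * np.sin(2 * np.pi * 5 * t + 0.7) + np.random.randn(len(t))
    print(tracking_at_rates(eeg, envelope, fs))               # highest value expected at 5 Hz
```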
Scientific Reports, Journal year: 2022, Issue: 12(1). Published: Jan. 10, 2022.
Abstract
Acoustic structures associated with native-language phonological sequences are enhanced within auditory pathways for perception, although the underlying mechanisms are not well understood. To elucidate the processes that facilitate this enhancement, time–frequency (T–F) analyses of EEGs obtained from native speakers of English and Polish were conducted. Participants listened to same and different nonword pairs in counterbalanced attend and passive conditions. The nonwords contained the onsets /pt/, /pət/, /st/, and /sət/, which occur in both languages, with the exception of /pt/, which never occurs in the English language at word onset. Measures of spectral power and inter-trial phase locking (ITPL) in the low gamma (LG) and theta-frequency bands were analyzed in two bilateral, source-level channels created through source localization modeling. Results revealed significantly larger LG power for the English listeners to the unfamiliar /pt/ onset in the right hemisphere at early cortical stages, during the attend condition. Further, ITPL values revealed distinctive responses in the high- and low-theta bands to the acoustic characteristics of the onsets, modulated by language exposure. These findings, language-specific processing in LG and acoustic-level processing in theta, support the view that multi-scale temporal analysis facilitates speech perception.
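Inter-trial phase locking, the ITPL measure reported here, can be computed from epoched data as the length of the mean resultant phase vector across trials. The sketch below is a generic implementation with conventional, assumed band limits for theta and low gamma; the abstract does not specify the exact band edges or filtering choices.

```python
# Sketch: inter-trial phase locking (ITPL) for epoched EEG (trials x time) in a
# frequency band. Band limits below are conventional choices, not values given
# in the abstract.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpl(epochs, fs, band):
    """ITPL over trials: |mean over trials of exp(i*phase)|, per time point.
    `epochs` has shape (n_trials, n_times)."""
    lo, hi = band
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.mean(np.exp(1j * phase), axis=0))  # 1 = perfect phase alignment

if __name__ == "__main__":
    fs, n_trials, n_times = 500, 40, 500
    rng = np.random.default_rng(1)
    t = np.arange(n_times) / fs
    # Toy data: a 6 Hz component phase-locked to stimulus onset, plus noise.
    epochs = np.sin(2 * np.pi * 6 * t) + rng.standard_normal((n_trials, n_times))
    print("low-theta ITPL (4-7 Hz), mean over time:",
          float(itpl(epochs, fs, (4, 7)).mean()))
    print("low-gamma ITPL (30-50 Hz), mean over time:",
          float(itpl(epochs, fs, (30, 50)).mean()))
```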
Neurobiology of Language, Journal year: 2023, Issue: 4(3), P. 435-454. Published: Jan. 1, 2023.
Abstract
Spontaneous real-life speech is imperfect in many ways. It contains disfluencies and ill-formed utterances and has a highly variable rate. When listening to spontaneous speech, the brain needs to contend with these features in order to extract the speaker's meaning. Here, we studied how the neural response is affected by specific factors that are prevalent in colloquial speech, including (1) the presence of fillers, (2) the need to detect syntactic boundaries in disfluent utterances, and (3) the variability of the speech rate. Neural activity was recorded (using electroencephalography) from individuals as they listened to an unscripted, spontaneous narrative, which was analyzed in a time-resolved fashion to identify fillers and syntactic boundaries. Considering the speech-tracking analysis, in which estimates of the temporal response function (TRF) describe the relationship between the stimulus and the neural response it generates, we found that the TRF was affected by all of these factors. This was observed for lexical words but not for fillers, and responses had an earlier onset for words opening vs. closing a clause and for clauses produced at slower speech rates. These findings broaden ongoing efforts to understand speech processing under increasingly realistic conditions. They highlight the importance of the nature of spoken language, linking past research on linguistically well-formed and meticulously controlled speech to the type of speech the brain actually deals with on a daily basis.
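The TRF approach named in this abstract can be sketched as time-lagged regression from event regressors to the EEG. In the example below, the impulse regressors (lexical-word onsets, fillers, clause boundaries), the lag window, and the ridge regularisation are illustrative assumptions, not the study's exact model; the synthetic data only demonstrate that the estimator recovers a response locked to one event type.

```python
# Sketch: estimating temporal response functions (TRFs) for event-type regressors
# (lexical-word onsets, fillers, clause boundaries). Event names and the ridge
# solver are illustrative; this is not the study's exact model.
import numpy as np

def impulse_regressor(event_times_s, fs, n_times):
    """Zero/one regressor with a 1 at each event sample."""
    x = np.zeros(n_times)
    idx = np.clip((np.asarray(event_times_s) * fs).astype(int), 0, n_times - 1)
    x[idx] = 1.0
    return x

def estimate_trf(regressors, eeg, fs, tmin=0.0, tmax=0.6, lam=1.0):
    """Ridge estimate of the TRF for each regressor at lags tmin..tmax.
    `regressors`: (n_event_types, n_times); `eeg`: (n_times,) single channel.
    Returns an (n_event_types, n_lags) array of TRF weights."""
    lags = np.arange(int(tmin * fs), int(tmax * fs))
    n_types, n_times = regressors.shape
    X = np.zeros((n_times, n_types * len(lags)))
    for i, x in enumerate(regressors):
        for j, lag in enumerate(lags):
            X[lag:, i * len(lags) + j] = x[:n_times - lag]
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
    return w.reshape(n_types, len(lags))

if __name__ == "__main__":
    fs, dur = 100, 120
    n_times = fs * dur
    rng = np.random.default_rng(2)
    words = impulse_regressor(np.sort(rng.uniform(0, dur, 300)), fs, n_times)
    fillers = impulse_regressor(np.sort(rng.uniform(0, dur, 40)), fs, n_times)
    boundaries = impulse_regressor(np.sort(rng.uniform(0, dur, 60)), fs, n_times)
    # Toy EEG: a response peaking ~0.1 s after each word onset, plus noise.
    kernel = np.exp(-((np.arange(0, 0.4, 1 / fs) - 0.1) ** 2) / (2 * 0.03 ** 2))
    eeg = np.convolve(words, kernel)[:n_times] + rng.standard_normal(n_times)
    trf = estimate_trf(np.vstack([words, fillers, boundaries]), eeg, fs)
    print("peak word-TRF lag (s):", np.argmax(trf[0]) / fs)   # expected near 0.1 s
```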
European Journal of Neuroscience, Journal year: 2023, Issue: 59(3), P. 394-414. Published: Dec. 27, 2023.
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is an inherently dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise the neural dynamics involved in processing continuous speech acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight the properties and constraints that both have, suggesting that auditory systems are optimised to process speech. We then discuss the speech-specificity of these dynamics and their potential mechanistic origins, and summarise the open questions in the field.