Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1), Published: June 7, 2024
Speech-in-noise (SIN) perception is a primary complaint of individuals with audiometric hearing loss, yet SIN performance varies drastically even among listeners with normal hearing. The present genome-wide association study (GWAS) investigated the genetic basis of SIN deficits in individuals who self-report normal hearing in quiet situations. GWAS was performed on 279,911 participants from the UK Biobank cohort, 58,847 of whom reported SIN difficulties despite normal hearing in quiet. The analysis identified 996 single nucleotide polymorphisms (SNPs) achieving genome-wide significance (p < 5 × 10⁻⁸) across four genomic loci, and 720 SNPs across 21 loci achieving suggestive significance (p < 10⁻⁶). The association signals were enriched in brain tissues, such as the anterior cingulate cortex, dorsolateral prefrontal cortex, entorhinal cortex, frontal cortex, hippocampus, and inferior temporal cortex. Cochlear cell types revealed no significant association with the SIN deficits. The SIN-associated variants were linked to various health traits, including neuropsychiatric, sensory, cognitive, metabolic, cardiovascular, and inflammatory conditions. A replication analysis was conducted in 242 healthy young adults; self-reported speech perception, hearing thresholds (0.25–16 kHz), and distortion product otoacoustic emissions (1–16 kHz) were utilized for the analysis. 73 SNPs were replicated with the self-reported measure, 211 with at least one audiological measure, and 66 with two audiological measures. 12 SNPs near or within MAPT, GRM3, and HLA-DQA1 were replicated with all measures. These results highlight the polygenic architecture underlying SIN perception.
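The two p-value cutoffs above (genome-wide, 5 × 10⁻⁸; suggestive, 10⁻⁶) are standard GWAS conventions. As a rough illustration only, and not the study's actual pipeline, the Python sketch below classifies SNP association results against those thresholds; the DataFrame columns (`snp`, `pval`) are hypothetical names.

```python
# Illustrative sketch: tiering GWAS results by the two significance
# thresholds mentioned above. Column names are hypothetical, not from the study.
import pandas as pd

GENOME_WIDE = 5e-8
SUGGESTIVE = 1e-6

def classify_snps(results: pd.DataFrame) -> pd.DataFrame:
    """Label each SNP by the strongest significance tier it reaches."""
    out = results.copy()
    out["tier"] = "not significant"
    out.loc[out["pval"] < SUGGESTIVE, "tier"] = "suggestive"
    out.loc[out["pval"] < GENOME_WIDE, "tier"] = "genome-wide"
    return out

# Example with made-up values:
demo = pd.DataFrame({"snp": ["rs0001", "rs0002", "rs0003"],
                     "pval": [3e-9, 4e-7, 0.02]})
print(classify_snps(demo))
```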
Cerebral Cortex, Journal Year: 2024, Volume and Issue: 34(2), Published: Jan. 11, 2024
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms underlying the interplay between short- and long-term neuroplasticity for rapid perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~45 min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate the neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed learning, musicians made faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, learning was highly evident at the cortical level, where ERPs revealed unique hemispheric asymmetries suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150–200 ms) time course of these effects localized learning-induced changes to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful concurrent sound perception is driven by a critical interplay between long- and short-term plasticity, whose effects first emerge at the cortical level.
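Because this abstract contrasts subcortical FFRs with cortical ERPs derived from the same EEG recordings, a minimal sketch of one common way to separate the two is given below. It assumes an epoched single-channel array and illustrative filter bands, and it is not the authors' analysis code.

```python
# Minimal sketch (not the authors' pipeline): deriving FFR-like and ERP-like
# waveforms from the same epoched EEG by filtering into different frequency
# bands and averaging across trials. Epoch shape, sampling rate, and band
# edges are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt

def band_average(epochs: np.ndarray, fs: float, lo: float, hi: float) -> np.ndarray:
    """Band-pass each trial, then average across trials."""
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    return filtered.mean(axis=0)

fs = 5000.0                               # assumed sampling rate (Hz)
epochs = np.random.randn(200, 1000)       # placeholder: 200 trials x 200 ms
ffr_like = band_average(epochs, fs, 80.0, 1000.0)  # sustained, phase-locked range
erp_like = band_average(epochs, fs, 1.0, 30.0)     # slow cortical range
```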
European Journal of Neuroscience, Journal Year: 2025, Volume and Issue: 61(9), Published: April 28, 2025
ABSTRACT
Plasticity from auditory experience shapes the brain's encoding and perception of sound. Though stronger neural entrainment (i.e., brain-to-acoustic synchronization) aids speech perception, the underlying oscillatory activity may uniquely interact with long-term auditory experiences (e.g., music training) and short-term plasticity during concurrent speech perception. Here, we explored rapid perceptual learning of concurrent speech sounds in normal-hearing young adults who differed in their amount of self-reported music training (defined as “musicians” and “nonmusicians”). Participants learned to identify double-vowel mixtures during ~45 min sessions with high-density EEG recordings. We analyzed alpha-band power (7–12 Hz) following a rhythmic speech-stimulus train (~9 Hz) preceding behavioral identification to determine whether increased alpha (brain-to-speech entrainment) or decreased alpha (alpha-band suppression) corresponded with task success. Source and directed functional connectivity analyses of the EEG data probed whether behavior was driven by group differences in auditory-motor coupling. Both groups improved with training. Listeners' alpha power prior to target identification predicted their performance; surprisingly, stronger alpha oscillations were observed on incorrect compared with correct trial responses. We also found stark hemispheric biases in auditory-motor coupling, with greater right than left hemisphere coupling for musicians (R > L) but not nonmusicians (R = L). The stronger alpha activity preceding incorrect responses supports the notion that alpha (~10 Hz) suppression is an important modulator of trial-by-trial success in speech processing. Our findings suggest that the impact of long-term music experience extends to the oscillatory dynamics that support short-term perceptual learning of concurrent speech.
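To make the alpha-band measure concrete, here is a minimal sketch of estimating 7–12 Hz power in a pre-response EEG segment with a Welch spectrum. The sampling rate, window length, and single-channel layout are assumptions, not details from the study.

```python
# Rough sketch of one common way to estimate alpha-band (7-12 Hz) power in a
# pre-response window; not the study's actual analysis code.
import numpy as np
from scipy.signal import welch

def alpha_power(segment: np.ndarray, fs: float, band=(7.0, 12.0)) -> float:
    """Mean spectral power within the alpha band for one EEG segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 512))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

fs = 500.0
pre_response = np.random.randn(int(fs))   # placeholder: 1 s of single-channel EEG
print(alpha_power(pre_response, fs))
```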
Frontiers in Neuroscience, Journal Year: 2022, Volume and Issue: 16, Published: July 22, 2022
Spoken language comprehension requires rapid and continuous integration of information, from lower-level acoustic to higher-level linguistic features. Much of this processing occurs in the cerebral cortex. Its neural activity exhibits, for instance, correlates of predictive processing, emerging at delays of a few hundred milliseconds. However, the auditory pathways are also characterized by extensive feedback loops, from higher-level cortical areas to lower-level ones as well as to subcortical structures. Early neural activity can therefore be influenced by cognitive processes, but it remains unclear whether such feedback contributes to predictive processing. Here, we investigated the early speech-evoked neural response that emerges at the fundamental frequency of speech. We analyzed EEG recordings obtained while subjects listened to a story read by a single speaker. We identified a response tracking the speaker's fundamental frequency that occurred at a delay of 11 ms, while another response elicited by the high-frequency modulation of the envelope of higher harmonics exhibited a larger magnitude and longer latency of about 18 ms, with an additional significant component around 40 ms. Notably, while the earlier components likely originate from subcortical structures, the latter presumably involves contributions from cortical regions. Subsequently, we determined these responses for each individual word in the story. We then quantified context-independent word features and used a language model to compute the context-dependent surprisal and precision of each word. The surprisal represented how predictable a word is, given the previous context, and the precision reflected the confidence in predicting the next word from the past context. We found that the word-level responses were predominantly shaped by acoustic features: the average fundamental frequency of a word and its variability. Among the context-dependent features, we observed only a weak modulation. Our results show that the early neural response at the fundamental frequency is already modulated at the word level, suggesting top-down contributions to this early response.
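The context-dependent surprisal described here is, in general terms, the negative log-probability of a word given its preceding context under a language model. Below is a hedged sketch of that computation with an off-the-shelf causal model (GPT-2, chosen only for illustration; the study's own language model, word alignment, and precision estimate may differ).

```python
# Hedged sketch: per-token surprisal (in bits) from a causal language model.
# GPT-2 is an assumed stand-in, not necessarily the model used in the study.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str) -> list[tuple[str, float]]:
    """Return (token, surprisal in bits) for each token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    ln2 = torch.log(torch.tensor(2.0))
    out = []
    for t in range(1, ids.shape[1]):
        lp = log_probs[0, t - 1, ids[0, t]]        # log p(token_t | tokens_<t)
        out.append((tokenizer.decode(ids[0, t]), float(-lp / ln2)))
    return out

print(token_surprisals("The speaker read a short story aloud."))
```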
NeuroImage, Journal Year: 2023, Volume and Issue: 269, P. 119899, Published: Jan. 28, 2023
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on the surrounding stimulus context. Previous work suggests these context-dependent changes in acoustic-phonetic mapping and the perceptual warping of categories emerge no earlier than the auditory cortex. Here, we examined whether such auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the brainstem. We recorded speech-evoked frequency following responses (FFRs) during a task designed to induce more or less warping of listeners' categories via the presentation order of a stimulus continuum (random, forward, and backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found that serial presentation caused perceptual shifts (hysteresis) near the category boundary, confirming that identical tokens are perceived differentially depending on stimulus context. Critically, we further show that neural FFRs during active (but not passive) listening were enhanced for prototypical vs. category-ambiguous tokens and were biased in the direction of listeners' label for acoustically-identical stimuli. These findings were observed neither in the stimulus acoustics nor in a model FFR generated via a computational simulation of cochlear and auditory nerve transduction, confirming the central origin of the effects. Our data reveal that FFRs carry category-level information and suggest that top-down processing actively shapes speech encoding and categorization at subcortical levels. These effects arise surprisingly early along the neuroaxis, which might aid speech understanding by reducing ambiguity in the signal.
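The behavioral hysteresis described above is typically quantified as a shift of the category boundary of the identification function between presentation orders. The sketch below fits a logistic function to made-up identification proportions to estimate such a boundary shift; it is illustrative only and uses no data from the study.

```python
# Illustrative sketch: estimate the category boundary of an identification
# function with a logistic fit and compare it across presentation orders.
# All response proportions are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of one category label along the stimulus continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)                    # 7-step continuum (assumed)
p_forward = np.array([0.02, 0.05, 0.12, 0.40, 0.82, 0.95, 0.99])
p_backward = np.array([0.01, 0.03, 0.08, 0.22, 0.65, 0.92, 0.98])

(b_fwd, _), _ = curve_fit(logistic, steps, p_forward, p0=[4.0, 1.0])
(b_bwd, _), _ = curve_fit(logistic, steps, p_backward, p0=[4.0, 1.0])
print(f"boundary shift (hysteresis): {b_bwd - b_fwd:.2f} continuum steps")
```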
Frontiers in Neuroscience, Journal Year: 2024, Volume and Issue: 18, Published: July 8, 2024
The frequency-following response (FFR) is an evoked potential that provides a neural index of complex sound encoding in the brain. FFRs have been widely used to characterize speech and music processing and experience-dependent neuroplasticity (e.g., from learning and musicianship), and as biomarkers for hearing and language-based disorders that distort receptive communication abilities. It is assumed that FFRs stem from a mixture of phase-locked neurogenic activity from brainstem and cortical structures along the neuraxis. In this study, we challenge this prevailing view by demonstrating that upwards of ~50% of the FFR can originate from an unexpected, myogenic source: contamination from the postauricular muscle (PAM) vestigial startle reflex. We measured PAM activity, transient auditory brainstem responses (ABRs), and sustained frequency-following potentials (ABR/FFR) in young, normal-hearing listeners with varying degrees of musical training. We first establish that the PAM artifact is present in all ears, varies with electrode proximity to the muscle, and can be experimentally manipulated by directing listeners' eye gaze toward the ear of stimulation. We then show that this muscular noise easily confounds neural FFRs, spuriously amplifying them 3–4-fold in tandem with PAM contraction and even explaining putative FFR enhancements observed in highly skilled musicians. Our findings expose a new and previously unrecognized myogenic source of the FFR that drives its large inter-subject variability, and they cast doubt on whether FFR changes typically attributed to auditory neuroplasticity or pathology are solely of brain origin.
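To see why a phase-locked myogenic component can inflate FFR amplitude so strongly, the toy simulation below adds a PAM-like sinusoid to a weaker "neural" component at the same fundamental frequency and compares spectral amplitudes. All amplitudes and phases are invented for illustration and merely approximate the 3–4-fold inflation described above; this is not the study's data or analysis.

```python
# Toy simulation (assumptions only): a phase-locked myogenic component added
# to a neural FFR inflates the apparent response amplitude at the stimulus F0.
import numpy as np

fs, dur, f0 = 10000.0, 0.2, 100.0
t = np.arange(0, dur, 1.0 / fs)

neural = 0.05 * np.sin(2 * np.pi * f0 * t)        # "true" neural FFR (arbitrary units)
pam = 0.15 * np.sin(2 * np.pi * f0 * t + 0.3)     # phase-locked muscle artifact

def f0_amplitude(x: np.ndarray) -> float:
    """Spectral amplitude at the stimulus fundamental frequency."""
    spec = np.abs(np.fft.rfft(x)) * 2.0 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(spec[np.argmin(np.abs(freqs - f0))])

clean = f0_amplitude(neural)
contaminated = f0_amplitude(neural + pam)
print(f"apparent FFR inflated {contaminated / clean:.1f}x by the PAM-like artifact")
```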