
Current Biology, Journal Year: 2022, Volume and Issue: 32(7), P. R311 - R313
Published: April 1, 2022
Language: English
Hearing Research, Journal Year: 2022, Volume and Issue: 426, P. 108607 - 108607
Published: Sept. 14, 2022
Language: English
Citations: 44
eLife, Journal Year: 2022, Volume and Issue: 11
Published: Sept. 12, 2022
Neural activity in the auditory system synchronizes to sound rhythms, and brain-environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound's amplitude envelope. We hypothesized that, especially for music, the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity. This study investigated (1) neural synchronization to different musical features, (2) the tempo-dependence of neural synchronization, and (3) its dependence on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1-4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), spectral flux, as opposed to the amplitude envelope, evoked the strongest neural synchronization. Moreover, music with slower rates, high familiarity, and easy-to-perceive beats elicited the strongest response. Our results demonstrate the importance of spectral flux for driving neural synchronization and highlight its sensitivity to tempo and beat salience.
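As a concrete illustration of the feature contrast described above, the sketch below computes a broadband amplitude envelope and a frame-wise spectral flux from an audio signal using only NumPy/SciPy. The window length, hop size, and half-wave rectification are illustrative assumptions, not the exact settings used in the study.

```python
import numpy as np
from scipy.signal import stft, hilbert

def amplitude_envelope(audio):
    """Broadband amplitude envelope: magnitude of the analytic signal."""
    return np.abs(hilbert(audio))

def spectral_flux(audio, sr, win_s=0.05, hop_s=0.01):
    """Frame-wise spectral flux: summed positive change in magnitude
    between consecutive short-time spectra (half-wave rectified)."""
    nperseg = int(win_s * sr)
    noverlap = nperseg - int(hop_s * sr)
    _, _, Z = stft(audio, fs=sr, nperseg=nperseg, noverlap=noverlap)
    mag = np.abs(Z)                                  # (freq bins, frames)
    diff = np.diff(mag, axis=1)                      # change between frames
    return np.sum(np.maximum(diff, 0.0), axis=0)     # keep increases only

# Example on a synthetic 1 s tone burst at 44.1 kHz
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) * (t < 0.5)
print(amplitude_envelope(audio).shape, spectral_flux(audio, sr).shape)
```

The envelope varies with overall level only, whereas the flux also responds to spectral changes at constant level, which is the intuition behind the hypothesis tested in the study.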
Language: English
Citations: 41
Scientific Reports, Journal Year: 2023, Volume and Issue: 13(1)
Published: Jan. 12, 2023
Abstract
Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated decoders that are able to decode acoustic information from a variety of neural signal types, including electrocorticography (ECoG) and the electroencephalogram (EEG). In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music. We then used fMRI-informed EEG source localisation and a bi-directional long short-term memory (biLSTM) deep learning network to extract neural activity related to music listening and to reconstruct the individual pieces of music a participant was listening to. We further validated our model by evaluating its performance on a separate dataset of EEG-only recordings. We were able to reconstruct music, via our fMRI-informed analysis approach, with a mean rank accuracy of 71.8% (n = 18, …)
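For readers unfamiliar with the decoder family mentioned above, here is a minimal PyTorch sketch of a bi-directional LSTM that maps a sequence of neural features to a sequence of acoustic features. The layer sizes, feature dimensions, loss, and training step are placeholder assumptions and do not reproduce the authors' architecture.

```python
import torch
import torch.nn as nn

class BiLSTMDecoder(nn.Module):
    """Toy bi-directional LSTM mapping neural features to acoustic features."""
    def __init__(self, n_neural=64, n_acoustic=32, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_neural, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_acoustic)  # 2x: both directions

    def forward(self, x):                  # x: (batch, time, n_neural)
        out, _ = self.lstm(x)
        return self.head(out)              # (batch, time, n_acoustic)

# One training step on random placeholder data
model = BiLSTMDecoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
eeg = torch.randn(8, 200, 64)              # 8 trials, 200 time steps
target = torch.randn(8, 200, 32)           # matching acoustic targets
loss = nn.functional.mse_loss(model(eeg), target)
loss.backward()
optim.step()
print(float(loss))
```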
Language: English
Citations: 23
Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)
Published: Jan. 8, 2024
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher-level cognition. Studies of the auditory cortex have revealed distinct brain responses to music and speech, but the differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs. speech) on the way stimulus acoustics are encoded subcortically. In the study's second part, we considered the cortex. Our new analysis method resulted in the cortical responses becoming more similar, with remaining differences. Taken together, the results suggest that there is evidence for stimulus-class-dependent processing of music and speech at the cortical but not the subcortical level.
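The two derivation approaches contrasted above differ mainly in the regressor used: an acoustically based signal (for example, the half-wave rectified stimulus) versus the output of an auditory-periphery model. The sketch below shows one generic way to derive a response waveform from a stimulus regressor and EEG via regularized frequency-domain deconvolution; the regressor choice, regularizer, and lag window are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np

def derive_response(regressor, eeg, fs, t_min=-0.01, t_max=0.03):
    """Estimate a brainstem-like response by regularized deconvolution of a
    stimulus regressor from the EEG (circular, in the frequency domain)."""
    n = len(eeg)
    R = np.fft.rfft(regressor, n)
    E = np.fft.rfft(eeg, n)
    # Small regularizer avoids dividing by near-zero regressor power
    w = np.fft.irfft(np.conj(R) * E / (np.abs(R) ** 2 + 1e-6), n)
    lags = np.arange(int(t_min * fs), int(t_max * fs))
    return lags / fs, w[lags]              # negative lags wrap to the end

fs = 10000
rng = np.random.default_rng(0)
audio = rng.standard_normal(fs * 10)           # 10 s of noise as a stand-in
regressor = np.maximum(audio, 0.0)             # half-wave rectified "acoustic" regressor
eeg = np.convolve(regressor, rng.standard_normal(50), mode="same") \
      + rng.standard_normal(fs * 10)
t, abr = derive_response(regressor, eeg, fs)
print(t.shape, abr.shape)
```

Swapping the rectified stimulus for the output of a peripheral model changes only the regressor, which is the comparison the study makes.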
Language: English
Citations: 12
Neuropsychologia, Journal Year: 2023, Volume and Issue: 186, P. 108584 - 108584
Published: May 9, 2023
Language: English
Citations: 17
eLife, Journal Year: 2024, Volume and Issue: 13
Published: Feb. 8, 2024
To what extent does speech and music processing rely on domain-specific versus domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined it with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive functions.
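One simple way to picture the shared / preferred / domain-selective distinction is as a decision rule applied per recording site, based on whether responses to each domain differ from baseline and from each other. The toy sketch below (hypothetical thresholding with t-tests, not the authors' actual statistical model) makes that logic explicit.

```python
import numpy as np
from scipy import stats

def classify_response(speech_vals, music_vals, baseline_vals, alpha=0.05):
    """Toy classification of a site as shared / preferred / selective,
    depending on which domains evoke a significant response."""
    p_speech = stats.ttest_ind(speech_vals, baseline_vals).pvalue
    p_music = stats.ttest_ind(music_vals, baseline_vals).pvalue
    sig_speech, sig_music = p_speech < alpha, p_music < alpha
    if sig_speech and sig_music:
        p_diff = stats.ttest_ind(speech_vals, music_vals).pvalue
        return "preferred" if p_diff < alpha else "shared"
    if sig_speech or sig_music:
        return "selective"
    return "no response"

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 50)
speech = rng.normal(1.5, 1.0, 50)       # responds to speech
music = rng.normal(1.4, 1.0, 50)        # responds similarly to music
print(classify_response(speech, music, baseline))   # likely "shared"
```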
Language: English
Citations: 5
eLife, Journal Year: 2022, Volume and Issue: 11
Published: Dec. 23, 2022
Expectations shape our experience of music. However, the internal model upon which listeners form melodic expectations is still debated. Do expectations stem from Gestalt-like principles or statistical learning? If the latter, does long-term experience play an important role, or are short-term regularities sufficient? And finally, what length of context informs contextual expectations? To answer these questions, we presented human listeners with diverse naturalistic compositions from Western classical music, while recording neural activity using MEG. We quantified note-level surprise and uncertainty using various computational models, including a state-of-the-art transformer network. A time-resolved regression analysis revealed that neural activity over fronto-temporal sensors tracked note-level surprise, particularly around 200ms and 300–500ms after note onset. This response was dissociated from sensory-acoustic and adaptation effects. Neural surprise was best predicted by models that incorporated statistical learning, rather than by simple, Gestalt-like principles. Yet, intriguingly, the neural response reflected primarily short-range musical contexts of less than ten notes. We present a full replication of the novel MEG results in an openly available EEG dataset. Together, these results elucidate the internal model that shapes melodic predictions during music listening.
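The time-resolved regression described above can be pictured as ridge regression on a time-lagged copy of the surprise regressor, yielding one weight per lag. Below is a minimal NumPy sketch of that idea on synthetic data; the lag window, regularization, and impulse-like surprise regressor are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np

def lagged_design(regressor, lags):
    """Build a time-lagged design matrix (time x lags) from one regressor."""
    X = np.zeros((len(regressor), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = regressor[:len(regressor) - lag]
        else:
            X[:lag, j] = regressor[-lag:]
    return X

def fit_trf(regressor, response, fs, t_min=0.0, t_max=0.6, ridge=1.0):
    """Ridge-regularised temporal response function for a single channel."""
    lags = np.arange(int(t_min * fs), int(t_max * fs))
    X = lagged_design(regressor, lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ response)
    return lags / fs, w

fs = 100
rng = np.random.default_rng(2)
surprise = np.zeros(60 * fs)
onsets = rng.choice(len(surprise) - fs, 200, replace=False)
surprise[onsets] = rng.random(200)             # impulse-like note-level surprise
meg = np.convolve(surprise, np.hanning(30), mode="same") \
      + rng.standard_normal(len(surprise))
t, w = fit_trf(surprise, meg, fs)
print(t[np.argmax(w)])                         # latency of the strongest weight
```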
Language: English
Citations: 20
eLife, Journal Year: 2024, Volume and Issue: 13
Published: July 22, 2024
To what extent does speech and music processing rely on domain-specific versus domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined it with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive functions.
Language: English
Citations: 4
Published: March 10, 2025
Neural activity in auditory cortex tracks the amplitude-onset envelope of continuous speech, but recent work counter-intuitively suggests that neural tracking increases when speech is masked by background noise, despite reduced intelligibility. Noise-related amplification could indicate stochastic resonance (response facilitation through noise) that supports tracking, but a comprehensive account is lacking. In five human electroencephalography (EEG) experiments, the current study demonstrates a generalized enhancement of neural speech tracking due to minimal background noise. Results show that a) neural tracking is enhanced for speech masked by noise at very high SNRs (∼30 dB SNR), where speech is highly intelligible; b) this enhancement is independent of attention; c) it generalizes across different stationary maskers, but is strongest for 12-talker babble; and d) it is present for both headphone and free-field listening, suggesting that the neural-tracking enhancement generalizes to real-life listening. The study paints a clear picture that minimal background noise enhances the neural representation of the onset-envelope and thereby contributes to neural tracking. It further highlights that the non-linearities induced by noise make the use of neural tracking as a biological marker of speech processing challenging.
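The SNR manipulation underlying these experiments amounts to scaling a masker relative to the speech before mixing. A minimal sketch, assuming an RMS-based definition of SNR and placeholder signals:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise RMS ratio equals snr_db, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    noise = noise[:len(speech)]
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise

rng = np.random.default_rng(3)
speech = rng.standard_normal(16000)            # placeholder for a speech signal
noise = rng.standard_normal(16000)             # placeholder for a stationary masker
mixture = mix_at_snr(speech, noise, snr_db=30)  # very high SNR: barely audible noise
```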
Language: English
Citations: 0
Annals of the New York Academy of Sciences, Journal Year: 2025, Volume and Issue: unknown
Published: March 18, 2025
Abstract
The cortical tracking of stimulus features is a crucial neural requisite of how we process continuous music. We here tested whether the cortical tracking of the musical beat, typically related to rhythm processing, is modulated by pitch predictability and other top-down factors. Participants listened to tonal (high pitch predictability) and atonal (low pitch predictability) music while undergoing electroencephalography. We analyzed their cortical tracking of the acoustic envelope. Cortical envelope tracking was stronger while listening to atonal music, potentially reflecting listeners' violated expectations and increased attention allocation. Envelope tracking was also stronger with more musical expertise and enjoyment. Furthermore, we showed cortical tracking of pitch surprisal (using IDyOM), which suggests that listeners' expectations match those computed by the IDyOM model, with higher surprisal for atonal music. Behaviorally, we measured participants' ability to finger-tap to the beat of tonal and atonal sequences in two experiments. Finger-tapping performance was better in the tonal condition, indicating a positive effect of pitch predictability on behavioral rhythm processing. Cortical envelope tracking predicted tapping performance, as did pitch surprisal, suggesting that high and low pitch predictability might impose different processing regimes. Taken together, our results show the various ways in which top-down factors impact musical rhythm processing.
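Cortical envelope tracking of the kind analyzed here is often summarized as a lagged correlation between the acoustic envelope and the neural signal. The sketch below shows that generic idea with placeholder signals; the envelope extraction, resampling, and lag range are assumptions and not the study's specific method.

```python
import numpy as np
from scipy.signal import hilbert, resample

def envelope_tracking(audio, sr_audio, eeg, sr_eeg, max_lag_s=0.3):
    """Toy tracking measure: lagged correlation between the acoustic
    envelope (resampled to the EEG rate) and one EEG channel."""
    env = np.abs(hilbert(audio))
    env = resample(env, int(len(audio) * sr_eeg / sr_audio))
    n = min(len(env), len(eeg))
    env, eeg = env[:n] - env[:n].mean(), eeg[:n] - eeg[:n].mean()
    max_lag = int(max_lag_s * sr_eeg)
    corrs = [np.corrcoef(env[:n - lag], eeg[lag:])[0, 1] for lag in range(max_lag)]
    return np.arange(max_lag) / sr_eeg, np.array(corrs)

sr_audio, sr_eeg = 44100, 128
rng = np.random.default_rng(4)
audio = rng.standard_normal(sr_audio * 10)      # placeholder audio
eeg = rng.standard_normal(sr_eeg * 10)          # placeholder EEG channel
lags, corrs = envelope_tracking(audio, sr_audio, eeg, sr_eeg)
print(lags[np.argmax(corrs)])                   # lag of strongest tracking
```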
Language: English
Citations: 0