Human song: Separate neural pathways for melody and speech
Liberty S. Hamilton

Current Biology, Journal Year: 2022, Volume and Issue: 32(7), P. R311 - R313

Published: April 1, 2022

Language: English

Neural tracking as a diagnostic tool to assess the auditory pathway
Marlies Gillis, Jana Van Canneyt, Tom Francart, et al.

Hearing Research, Journal Year: 2022, Volume and Issue: 426, P. 108607 - 108607

Published: Sept. 14, 2022

Language: English

Citations: 44

Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience
Kristin Weineck, Olivia Xin Wen, Molly J. Henry, et al.

eLife, Journal Year: 2022, Volume and Issue: 11

Published: Sept. 12, 2022

Neural activity in the auditory system synchronizes to sound rhythms, and brain-environment synchronization is thought to be fundamental to successful perception. Sound rhythms are often operationalized in terms of the sound's amplitude envelope. We hypothesized that, especially for music, the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity. This study investigated (1) neural synchronization to different musical features, (2) the tempo-dependence of neural synchronization, and (3) its dependence on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1-4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), the spectral flux of music, as opposed to the amplitude envelope, evoked the strongest neural synchronization. Moreover, with slower rates, high familiarity, and easy-to-perceive beats elicited the strongest response. Our results demonstrate the importance of spectral flux for driving neural synchronization and highlight its sensitivity to tempo, familiarity, and beat salience.
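As an illustration of the analysis family this abstract names, here is a minimal Python sketch (not the authors' code) of a TRF-style fit: spectral flux is computed from a toy spectrogram and mapped to a simulated EEG channel by ridge-regularized lagged regression. All names and parameters (spectral_flux, fit_trf, max_lag, alpha) are assumptions for the demo.

```python
# Minimal sketch, assuming a ridge-regression TRF; not the study's pipeline.
import numpy as np

def spectral_flux(spectrogram):
    """Half-wave-rectified frame-to-frame spectral change, summed over bins.
    `spectrogram` is (n_freq_bins, n_frames)."""
    diff = np.diff(spectrogram, axis=1)
    return np.concatenate([[0.0], np.maximum(diff, 0.0).sum(axis=0)])

def fit_trf(feature, eeg, max_lag=50, alpha=1e3):
    """Ridge regression from lagged copies of `feature` (n_samples,)
    to one EEG channel (n_samples,); returns TRF weights (max_lag,)."""
    X = np.stack([np.roll(feature, lag) for lag in range(max_lag)], axis=1)
    X[:max_lag] = 0.0                      # zero out wrapped-around samples
    XtX = X.T @ X + alpha * np.eye(max_lag)
    return np.linalg.solve(XtX, X.T @ eeg)

rng = np.random.default_rng(0)
spec = rng.random((64, 2000))              # toy spectrogram
flux = spectral_flux(spec)
# Fake EEG: smoothed flux plus noise, so the TRF has something to recover.
eeg = np.convolve(flux, np.hanning(20), mode="same") + rng.normal(size=2000)
print(fit_trf(flux, eeg)[:5])
```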

Language: English

Citations: 41

Neural decoding of music from the EEG
Ian Daly

Scientific Reports, Journal Year: 2023, Volume and Issue: 13(1)

Published: Jan. 12, 2023

Abstract: Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated neural decoders that are able to decode acoustic information from a variety of neural signal types, including electrocorticography (ECoG) and the electroencephalogram (EEG). In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music. We then used fMRI-informed EEG source localisation and a bi-directional long short-term memory deep learning network to extract neural information related to music listening and reconstruct the individual pieces of music a participant was listening to. We further validated our model by evaluating its performance on a separate dataset of EEG-only recordings. We were able to identify the music a participant was listening to, via a joint EEG-fMRI analysis approach, with a mean rank accuracy of 71.8% (n = 18, p < 0.05). Using only EEG data, without participant-specific fMRI-informed source analysis, we were able to identify the music with a mean rank accuracy of 59.2% (n = 19, p < 0.05). This demonstrates that fMRI may be used to aid EEG-based reconstruction of acoustic information and makes a step towards building EEG-based neural decoders for other complex information domains.
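For context on the "mean rank accuracy" figure, here is a hedged sketch of one common rank-accuracy scheme (the paper's exact scoring may differ): each reconstruction is correlated with every candidate piece, and we score how highly the true piece ranks on average. The function name and toy data are assumptions.

```python
# Illustrative rank-accuracy evaluation; not necessarily the paper's metric.
import numpy as np

def mean_rank_accuracy(reconstructions, candidates):
    """reconstructions, candidates: (n_pieces, n_samples); row i of
    `candidates` is the true target for row i of `reconstructions`.
    Returns the mean normalized rank in [0, 1]; chance is ~0.5."""
    n = len(candidates)
    scores = []
    for i, rec in enumerate(reconstructions):
        corr = [np.corrcoef(rec, cand)[0, 1] for cand in candidates]
        rank = np.argsort(np.argsort(corr))[i]   # 0 = worst, n-1 = best
        scores.append(rank / (n - 1))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
true = rng.normal(size=(10, 500))
noisy = true + rng.normal(scale=2.0, size=true.shape)  # imperfect decodes
print(mean_rank_accuracy(noisy, true))
```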

Language: English

Citations: 23

Subcortical responses to music and speech are alike while cortical responses diverge
Shan Tong, Madeline S. Cappelloni, Ross K. Maddox, et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Jan. 8, 2024

Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher-level cognition. Studies of the cortex have revealed distinct brain responses to music and speech, but the differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the auditory brainstem response (ABR), a measure of subcortical encoding, from recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs. speech) on the way acoustics are encoded subcortically. In the study's second part, we considered the cortex. Our new analysis method resulted in the cortical responses becoming more similar, though with remaining differences. Taken together, the results suggest that there is evidence for stimulus-class-dependent processing of music and speech at the cortical but not the subcortical level.
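A hedged sketch of the general idea behind deriving brainstem-like responses to continuous sound by regressor-EEG cross-correlation. The regressor here is simple half-wave rectification, standing in for both the acoustic method and the periphery-model method the paper compares; the sampling rate, delay, and function names are invented for the demo.

```python
# Illustrative continuous-sound ABR derivation; not the paper's pipeline.
import numpy as np

fs = 10_000                                 # toy sampling rate (Hz)
rng = np.random.default_rng(2)
stim = rng.normal(size=fs * 5)              # 5 s of noise-like "audio"

def regressor_acoustic(x):
    return np.maximum(x, 0.0)               # half-wave rectification

def derive_abr(regressor, eeg, max_lag_ms=15):
    """Normalized cross-correlation at lags 0..max_lag_ms, regressor -> EEG."""
    lags = int(fs * max_lag_ms / 1000)
    r = (regressor - regressor.mean()) / regressor.std()
    e = (eeg - eeg.mean()) / eeg.std()
    return np.array([np.dot(r[:-lag or None], e[lag:]) / len(e)
                     for lag in range(lags)])

# Fake EEG: rectified stimulus, delayed by 5 ms, plus noise.
delay = fs * 5 // 1000
eeg = np.roll(regressor_acoustic(stim), delay) + rng.normal(size=stim.size)
abr = derive_abr(regressor_acoustic(stim), eeg)
print(f"peak latency: {abr.argmax() / fs * 1000:.1f} ms")
```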

Language: English

Citations: 12

The effects of speech masking on neural tracking of acoustic and semantic features of natural speech
Sonia Yasmin, Vanessa C. Irsik, Ingrid S. Johnsrude, et al.

Neuropsychologia, Journal Year: 2023, Volume and Issue: 186, P. 108584 - 108584

Published: May 9, 2023

Language: English

Citations: 17

Speech and music recruit frequency-specific distributed and overlapping cortical networks
Noémie te Rietmolen, Manuel Mercier, Agnès Trébuchon, et al.

eLife, Journal Year: 2024, Volume and Issue: 13

Published: Feb. 8, 2024

To what extent does speech and music processing rely on domain-specific versus domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined it with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.
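To make the shared/preferred/domain-selective distinction concrete, here is a toy classification of channels by response significance in each domain. The thresholds, inputs, and function name are assumptions for illustration, not the paper's statistics.

```python
# Toy operationalization of shared vs. domain-selective responses.
def classify(speech_resp, music_resp, sig=1.96):
    """speech_resp, music_resp: z-scored responses vs. baseline, per channel."""
    labels = []
    for s, m in zip(speech_resp, music_resp):
        if s > sig and m > sig:
            labels.append("shared (preferred if magnitudes differ)")
        elif s > sig:
            labels.append("speech-selective")
        elif m > sig:
            labels.append("music-selective")
        else:
            labels.append("not responsive")
    return labels

print(classify([3.1, 2.5, 0.4, 1.0], [2.8, 0.3, 2.2, 0.5]))
```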

Language: English

Citations: 5

Cortical activity during naturalistic music listening reflects short-range predictions based on long-term experience
P Kern, Micha Heilbron, Floris P. de Lange, et al.

eLife, Journal Year: 2022, Volume and Issue: 11

Published: Dec. 23, 2022

Expectations shape our experience of music. However, the internal model upon which listeners form melodic expectations is still debated. Do expectations stem from Gestalt-like principles or from statistical learning? If the latter, does long-term experience play an important role, or are short-term regularities sufficient? And finally, what length of context informs contextual expectations? To answer these questions, we presented human listeners with diverse naturalistic compositions from Western classical music while recording neural activity using MEG. We quantified note-level surprise and uncertainty using various computational models, including a state-of-the-art transformer network. A time-resolved regression analysis revealed that neural activity over fronto-temporal sensors tracked surprise, particularly around 200 ms and 300–500 ms after note onset. This response was dissociated from sensory-acoustic and adaptation effects. Neural surprise was best predicted by models that incorporated statistical learning rather than simple, Gestalt-like principles. Yet, intriguingly, the responses reflected primarily short-range musical contexts of less than ten notes. We present a full replication of the novel MEG results in an openly available EEG dataset. Together, these results elucidate the internal model that shapes melodic predictions during naturalistic music listening.
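To make "short-range context" concrete, here is a toy note-level surprisal model with an explicit context-length cap. The study used stronger models (including a transformer), so this online n-gram counter is only a didactic stand-in; the function name, smoothing, and melody are assumptions.

```python
# Toy context-limited surprisal; illustrative only.
import numpy as np
from collections import defaultdict

def surprisal(notes, context_len=4):
    """-log2 P(note | previous `context_len` notes), estimated online
    from counts over the piece itself (add-one smoothing, 128 pitches)."""
    counts = defaultdict(lambda: defaultdict(int))
    out = []
    for i, note in enumerate(notes):
        ctx = tuple(notes[max(0, i - context_len):i])
        total = sum(counts[ctx].values())
        p = (counts[ctx][note] + 1) / (total + 128)
        out.append(-np.log2(p))
        counts[ctx][note] += 1
    return out

melody = [60, 62, 64, 65, 64, 62, 60, 62, 64, 65, 64, 62, 60]
print([round(x, 2) for x in surprisal(melody)])  # repeats grow less surprising
```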

Language: English

Citations: 20

Minimal background noise enhances neural speech tracking: Evidence of stochastic resonance
Björn Herrmann

Published: March 10, 2025

Neural activity in auditory cortex tracks the amplitude-onset envelope of continuous speech, but recent work counter-intuitively suggests that neural tracking increases when speech is masked by background noise, despite reduced intelligibility. Noise-related amplification could indicate stochastic resonance – response facilitation through noise – that supports tracking, but a comprehensive account is lacking. In five human electroencephalography (EEG) experiments, the current study demonstrates a generalized enhancement of neural tracking due to minimal background noise. Results show that a) neural tracking is enhanced for speech masked by noise at very high SNRs (∼30 dB SNR), where speech is highly intelligible; b) this enhancement is independent of attention; c) it generalizes across different stationary maskers but is strongest for 12-talker babble; and d) it is present for both headphone and free-field listening, suggesting that the neural-tracking enhancement generalizes to real-life listening. The work paints a clear picture that minimal background noise enhances the neural representation of the speech onset-envelope and contributes to enhanced neural tracking. It further highlights that the non-linearities induced by background noise make the use of neural tracking as a biological marker for speech processing challenging.
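A self-contained simulation of the stochastic-resonance idea the title invokes (illustrative, not the paper's analysis): a sub-threshold signal is transmitted through a hard threshold better with a little added noise than with none, and worse again as the noise grows. All amplitudes and noise levels are assumptions for the demo.

```python
# Toy stochastic-resonance demo: non-monotonic effect of noise level.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 5000)
signal = 0.8 * np.sin(2 * np.pi * 4 * t)    # peaks stay below threshold 1.0

for noise_sd in [0.0, 0.3, 3.0]:
    out = (signal + rng.normal(scale=noise_sd, size=t.size)) > 1.0
    # Correlation between thresholded output and the underlying signal:
    r = np.corrcoef(out.astype(float), signal)[0, 1] if out.any() else 0.0
    print(f"noise sd {noise_sd}: output/signal correlation = {r:.2f}")
```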

Language: English

Citations: 0

Cortical and behavioral tracking of rhythm in music: Effects of pitch predictability, enjoyment, and expertise
Anne Keitel, Claire Pelofi, Xinyi Guan, et al.

Annals of the New York Academy of Sciences, Journal Year: 2025, Volume and Issue: unknown

Published: March 18, 2025

Abstract: The cortical tracking of stimulus features is a crucial neural requisite of how we process continuous music. We here tested whether cortical tracking of the beat, typically related to rhythm processing, is modulated by pitch predictability and other top-down factors. Participants listened to tonal (high pitch predictability) and atonal (low pitch predictability) music while undergoing electroencephalography. We analyzed their cortical tracking of the acoustic envelope. Cortical envelope tracking was stronger while listening to atonal music, potentially reflecting listeners' violated pitch expectations and increased attention allocation. Envelope tracking was also stronger with more musical expertise and enjoyment. Furthermore, we showed cortical tracking of pitch surprisal (using IDyOM), which suggests that listeners' expectations match those computed by the IDyOM model, with higher surprisal for atonal music. Behaviorally, we measured participants' ability to finger-tap to the beat of the sequences in two experiments. Finger-tapping performance was better in the tonal condition, indicating a positive effect of pitch predictability on behavioral rhythm processing. Cortical envelope tracking predicted tapping performance, as did pitch surprisal, suggesting that high and low predictability might impose different processing regimes. Taken together, our results show the various ways in which top-down factors impact musical rhythm processing.
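A hedged sketch of one standard way to score finger-tapping consistency (the paper's exact behavioral measure may differ): vector strength of tap times relative to the beat period, where 1.0 means perfectly phase-locked and values near 0 mean unrelated. The tempo, tap counts, and jitter levels are assumptions.

```python
# Illustrative tapping-consistency score via vector strength.
import numpy as np

def vector_strength(tap_times, beat_period):
    phases = 2 * np.pi * (np.asarray(tap_times) % beat_period) / beat_period
    return float(np.abs(np.exp(1j * phases).mean()))

rng = np.random.default_rng(4)
period = 0.5                                           # 120 bpm beat
good = np.arange(40) * period + rng.normal(scale=0.02, size=40)
poor = np.arange(40) * period + rng.normal(scale=0.15, size=40)
print(vector_strength(good, period), vector_strength(poor, period))
```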

Language: English

Citations: 0