Human song: Separate neural pathways for melody and speech
Liberty S. Hamilton

Current Biology, Journal Year: 2022, Volume and Issue: 32(7), P. R311 - R313

Published: April 1, 2022

Language: English

Enhanced neural speech tracking through noise indicates stochastic resonance in humans
Björn Herrmann

eLife, Journal Year: 2025, Volume and Issue: 13

Published: March 18, 2025

Neural activity in auditory cortex tracks the amplitude-onset envelope of continuous speech, but recent work counterintuitively suggests that neural tracking increases when speech is masked by background noise, despite reduced intelligibility. Noise-related amplification could indicate that stochastic resonance – response facilitation through noise – supports neural speech tracking, but a comprehensive account is lacking. In five human electroencephalography experiments, the current study demonstrates a generalized enhancement of neural speech tracking due to minimal background noise. Results show that (1) neural speech tracking is enhanced even at very high signal-to-noise ratios (~30 dB SNR), where speech is highly intelligible; (2) this enhancement is independent of attention; (3) it generalizes across different stationary maskers, but is strongest for 12-talker babble; and (4) it is present for both headphone and free-field listening, suggesting that the neural-tracking enhancement generalizes to real-life listening. The work paints a clear picture that minimal background noise enhances the representation of the speech onset-envelope, which contributes to neural tracking. It further highlights that the non-linearities induced by background noise make the use of neural tracking as a biological marker of speech processing challenging.
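As a rough illustration of the measure this abstract refers to (a minimal sketch with invented signals and parameters, not the study's actual pipeline), neural tracking can be quantified as the correlation between the stimulus amplitude-onset envelope and EEG, evaluated at different signal-to-noise ratios:

```python
import numpy as np

def onset_envelope(audio, fs, env_fs=128):
    """Amplitude-onset envelope: half-wave-rectified derivative of a
    block-averaged amplitude envelope."""
    step = fs // env_fs
    env = np.abs(audio[: len(audio) // step * step]).reshape(-1, step).mean(axis=1)
    onsets = np.diff(env, prepend=env[0])
    return np.clip(onsets, 0.0, None)

def tracking_score(eeg, env):
    """Neural tracking quantified as the Pearson correlation between a
    (here simulated) EEG channel and the stimulus onset-envelope."""
    return float(np.corrcoef(eeg, env)[0, 1])

rng = np.random.default_rng(0)
fs, env_fs, dur = 16000, 128, 60
t = np.arange(fs * dur) / fs
# Toy "speech": noise carrier with a slow amplitude modulation.
speech = rng.standard_normal(fs * dur) * (1 + np.sin(2 * np.pi * 3 * t))

for snr_db in (30, 20, 10, 0):
    noise = rng.standard_normal(fs * dur)
    noise *= np.sqrt(speech.var() / noise.var() / 10 ** (snr_db / 10))
    env = onset_envelope(speech + noise, fs, env_fs)
    # EEG modeled as the envelope delayed by ~100 ms plus sensor noise.
    eeg = np.roll(env, int(0.1 * env_fs)) + 0.5 * rng.standard_normal(len(env))
    print(f"{snr_db:>2} dB SNR: tracking r = {tracking_score(eeg, env):.3f}")
```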

Language: English

Citations: 0

Reliability and generalizability of neural speech tracking in younger and older adults
Ryan A. Panela, Francesca Copelli, Björn Herrmann et al.

Neurobiology of Aging, Journal Year: 2023, Volume and Issue: 134, P. 165 - 180

Published: Nov. 21, 2023

Language: English

Citations: 9

Impaired Cortical Tracking of Speech in Children with Developmental Language Disorder
Anni Nora, Oona Rinkinen, Hanna Renvall et al.

Journal of Neuroscience, Journal Year: 2024, Volume and Issue: 44(22), P. e2048232024

Published: April 8, 2024

In developmental language disorder (DLD), learning to comprehend and express oneself with spoken language is impaired, but the reason for this remains unknown. Using millisecond-scale magnetoencephalography recordings combined with machine learning models, we investigated whether a possible neural basis of this disruption lies in poor cortical tracking of speech. The stimuli were common spoken Finnish words (e.g., dog, car, hammer) and sounds with corresponding meanings (e.g., dog bark, car engine, hammering). In both children with DLD (10 boys, 7 girls) and typically developing (TD) control children (14 boys, 3 girls), aged 10–15 years, cortical activation was best modeled as time-locked to the unfolding speech input, with an ∼100 ms latency between sound and cortical activation. The amplitude envelope (amplitude changes) and spectrogram (detailed time-varying spectral content) of the words, but not of the other sounds, were very successfully decoded based on brain responses in bilateral temporal areas; from these responses, the models could tell with ∼75–85% accuracy which of two stimuli had been presented to the participant. However, the cortical representation of the amplitude envelope information was poorer in children with DLD compared with TD children at longer latencies (at ∼200–300 ms lag). We interpret this effect as reflecting poorer retention of acoustic–phonetic information in short-term memory. This impairment could potentially affect the processing of words as well as continuous speech, and the present results offer an explanation for the problems in speech comprehension and language acquisition in DLD.
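The pairwise decoding scheme mentioned here (telling with ∼75–85% accuracy which of two stimuli was presented) can be sketched as follows; the synthetic data, feature counts, and ridge regularization are illustrative assumptions, not the study's actual models:

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """Ridge regression mapping stimulus features to brain responses."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
n_stim, n_feat, n_sens = 40, 20, 30

# Synthetic stimulus features (e.g., envelope/spectrogram summaries) and
# sensor responses generated by an unknown linear mapping plus noise.
F = rng.standard_normal((n_stim, n_feat))
R = F @ rng.standard_normal((n_feat, n_sens)) \
    + 2.0 * rng.standard_normal((n_stim, n_sens))

# Leave-two-out pairwise decoding: predict responses to two held-out
# stimuli and test whether predictions match the correct responses.
correct, total = 0, 0
for i in range(n_stim):
    for j in range(i + 1, n_stim):
        train = [k for k in range(n_stim) if k not in (i, j)]
        W = fit_ridge(F[train], R[train])
        right = corr(F[i] @ W, R[i]) + corr(F[j] @ W, R[j])
        wrong = corr(F[i] @ W, R[j]) + corr(F[j] @ W, R[i])
        correct += right > wrong
        total += 1
print(f"pairwise decoding accuracy: {correct / total:.2f}")
```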

Language: English

Citations: 3

Editorial: Neural Tracking: Closing the Gap Between Neurophysiology and Translational Medicine
Giovanni M. Di Liberto, Jens Hjortkjær, Nima Mesgarani et al.

Frontiers in Neuroscience, Journal Year: 2022, Volume and Issue: 16

Published: March 16, 2022

Editorial article, Front. Neurosci., 16 March 2022, Sec. Auditory Cognitive Neuroscience. https://doi.org/10.3389/fnins.2022.872600

Citations: 13

MEG Activity in Visual and Auditory Cortices Represents Acoustic Speech-Related Information during Silent Lip Reading
Felix Bröhl, Anne Keitel, Christoph Kayser et al.

eNeuro, Journal Year: 2022, Volume and Issue: 9(3), P. ENEURO.0209-22.2022

Published: May 1, 2022

Speech is an intrinsically multisensory signal, and seeing the speaker's lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension. Previous work has debated whether lip signals are mainly processed along auditory pathways or whether the visual system directly implements speech-related processes. To probe this, we systematically characterized dynamic representations of multiple acoustic and visual speech-derived features in source-localized MEG recordings that were obtained while participants listened to speech or viewed silent speech. Using a mutual-information framework, we provide a comprehensive assessment of how well temporal and occipital cortices reflect the physically presented signals and unique aspects of acoustic features that were physically absent but may be critical for comprehension. Our results demonstrate that both cortices feature a functionally specific form of restoration: during silent lip reading, they reflect unheard acoustic features, independent of co-existing representations of the visible lip movements. This restoration emphasizes the unheard pitch signature in occipital cortex and the unheard speech envelope in temporal cortex, and it is predictive of lip-reading performance. These findings suggest that, when seeing the speaker's lips, the brain engages both pathways to support comprehension by exploiting correspondences between lip movements and spectro-temporal acoustic cues.
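The mutual-information framework named in this abstract is often implemented in this literature with a Gaussian-copula estimator; the following is a minimal sketch of that idea on synthetic data, not the authors' code, and the variable names are placeholders:

```python
import numpy as np
from scipy.special import ndtri  # inverse standard-normal CDF

def copula_normalize(x):
    """Rank-transform a 1-D variable to standard-normal marginals."""
    ranks = np.argsort(np.argsort(x))
    u = (ranks + 1) / (len(x) + 1)   # uniform in (0, 1)
    return ndtri(u)

def gaussian_copula_mi(x, y):
    """Mutual information (bits) between two 1-D variables under a
    Gaussian-copula assumption: MI = -0.5 * log2(1 - rho^2)."""
    gx, gy = copula_normalize(x), copula_normalize(y)
    rho = np.corrcoef(gx, gy)[0, 1]
    return -0.5 * np.log2(1 - rho ** 2)

# Illustrative: MI between a speech pitch contour and a sensor time
# course that partially reflects it (both synthetic).
rng = np.random.default_rng(2)
pitch = rng.standard_normal(5000)
meg = 0.4 * pitch + rng.standard_normal(5000)
print(f"MI ≈ {gaussian_copula_mi(pitch, meg):.3f} bits")
```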

Language: English

Citations: 13

Incorporating models of subcortical processing improves the ability to predict EEG responses to natural speech

Elsa Lindboom, Aaron Nidiffer, Laurel H. Carney et al.

Hearing Research, Journal Year: 2023, Volume and Issue: 433, P. 108767 - 108767

Published: April 10, 2023

Language: English

Citations: 7

EEG-based speaker–listener neural coupling reflects speech-selective attentional mechanisms beyond the speech stimulus
Jiawei Li, Bo Hong, Guido Nolte et al.

Cerebral Cortex, Journal Year: 2023, Volume and Issue: 33(22), P. 11080 - 11091

Published: Oct. 7, 2023

When we pay attention to someone, do we focus only on the sound they make and the words they use, or do we form a mental space shared with the speaker we want to attend to? Some would argue that human language is no more than a simple signal, but others claim that human beings understand each other because they form a common ground between speaker and listener. Our study aimed to explore the neural mechanisms of speech-selective attention by investigating electroencephalogram-based speaker–listener neural coupling in a cocktail party paradigm. The temporal response function method was employed to reveal how the listener was coupled with the speaker at the speech-stimulus level. The results showed that coupling with the attended speaker peaked 5 s before speech onset in the delta band over the left frontal region, and this coupling correlated with comprehension performance. In contrast, attentional processing of speech acoustics and semantics occurred primarily at a later stage, after speech onset, and was not significantly correlated with comprehension. These findings suggest a predictive mechanism underlying speaker–listener neural coupling for successful speech comprehension.
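The temporal response function (TRF) method named in this abstract is, in its common form, a ridge regression from time-lagged stimulus features to neural data. Below is a minimal forward-TRF sketch on simulated data; the sampling rate, lag window, and response kernel are all assumed for illustration:

```python
import numpy as np

def lagged_design(stim, n_lags):
    """Design matrix whose columns are time-lagged copies of the stimulus."""
    X = np.zeros((len(stim), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[: len(stim) - lag]
    return X

def fit_trf(stim, eeg, n_lags, lam=1e2):
    """Forward TRF via ridge regression from lagged stimulus to EEG."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

rng = np.random.default_rng(3)
fs, n_lags = 64, 32                   # 32 lags = 0-500 ms at 64 Hz
stim = rng.standard_normal(fs * 120)  # 2 min of a stimulus feature
# Ground-truth response kernel: damped oscillation.
kernel = np.exp(-np.arange(n_lags) / 8) * np.sin(np.arange(n_lags) / 3)
eeg = np.convolve(stim, kernel)[: len(stim)] + rng.standard_normal(len(stim))

trf = fit_trf(stim, eeg, n_lags)
print("recovered/true kernel correlation:",
      round(float(np.corrcoef(kernel, trf)[0, 1]), 3))
```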

Language: English

Citations: 7

Cortical Auditory Attention Decoding During Music and Speech Listening
Adèle Simon, Gérard Loquet, Jan Østergaard et al.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, Journal Year: 2023, Volume and Issue: 31, P. 2903 - 2911

Published: Jan. 1, 2023

It has been demonstrated that, from cortical recordings, it is possible to detect which speaker a person is attending to in a cocktail party scenario. The stimulus-reconstruction approach, based on linear regression, has been shown to be usable to reconstruct an approximation of the envelopes of the sounds attended and not attended by the listener from electroencephalogram (EEG) data. When the reconstructions are compared with the stimuli, a higher correlation with the attended sound is observed. Most studies have focused on speech listening, and only a few have investigated the performance and mechanisms of auditory attention decoding during music listening. In the present study, auditory attention decoding (AAD) techniques that have proven successful for speech listening were applied to a situation where listeners actively attended to music in the presence of a concomitant distracting sound. Results show that AAD can decode attention during both speech and music listening, while showing differences in accuracy. The results of this study also highlight the importance of the training data used for the construction of the model. This is a first attempt to decode auditory attention from EEG in situations where both speech and music are present. The results indicate that linear regression can be used for music listening if the model is trained on musical signals.
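The stimulus-reconstruction approach described here trains a backward model from multichannel EEG to the stimulus envelope and then compares the reconstruction with the attended and unattended streams. A minimal, lag-free sketch on simulated two-stream data follows; all names, channel counts, and noise levels are illustrative assumptions (real decoders typically also use multiple time lags):

```python
import numpy as np

def fit_backward_model(eeg, envelope, lam=1.0):
    """Backward model: ridge-regularized linear map from multichannel
    EEG to the stimulus envelope (stimulus-reconstruction approach)."""
    return np.linalg.solve(eeg.T @ eeg + lam * np.eye(eeg.shape[1]),
                           eeg.T @ envelope)

def decode_attention(eeg, decoder, env_a, env_b):
    """Decide which of two envelopes the reconstruction matches better."""
    recon = eeg @ decoder
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("A" if r_a > r_b else "B"), r_a, r_b

# Two-stream simulation: EEG reflects only the attended stream.
rng = np.random.default_rng(4)
n, chans = 8000, 32
env_speech = rng.standard_normal(n)   # attended stream
env_music = rng.standard_normal(n)    # distracting stream
eeg = np.outer(env_speech, rng.standard_normal(chans)) \
      + 3.0 * rng.standard_normal((n, chans))

decoder = fit_backward_model(eeg[: n // 2], env_speech[: n // 2])
label, r_a, r_b = decode_attention(eeg[n // 2:], decoder,
                                   env_speech[n // 2:], env_music[n // 2:])
print(f"decoded stream: {label} (r_attended={r_a:.3f}, r_unattended={r_b:.3f})")
```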

Language: English

Citations: 6

Neural tracking of continuous acoustics: properties, speech‐specificity and open questions
Benedikt Zoefel, Anne Kösem

European Journal of Neuroscience, Journal Year: 2023, Volume and Issue: 59(3), P. 394 - 414

Published: Dec. 27, 2023

Human speech is a particularly relevant acoustic stimulus for our species, due to its role of information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight the properties and constraints that both have, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech‐specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.

Language: English

Citations: 4

Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music
Alice Vivien Barchet, Molly J. Henry, Claire Pelofi et al.

Communications Psychology, Journal Year: 2024, Volume and Issue: 2(1)

Published: Jan. 3, 2024

Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in the dominant rhythmic structure of each domain. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception task and a synchronization task involving syllable and piano tone sequences, performed with motor effectors typically associated with speech (whispering) and music (finger-tapping), were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although performance was generally better at slow rates, the effectors exhibited distinct rate preferences. Finger-tapping was advantaged compared with whispering at slow but not at faster rates, with synchronization being effector-dependent at slow rates but highly correlated across effectors at faster rates. Perception was predicted by a general finger-tapping component. Our data suggest partially independent rhythmic timing mechanisms for speech and music, possibly reflecting differential recruitment of cortical circuitry.
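Synchronization strength in tasks like these is often quantified as the phase-locking of produced events to the stimulus rate (mean resultant vector length). The sketch below uses invented tapping times and jitter values, not the study's data or its exact measure:

```python
import numpy as np

def sync_strength(event_times, rate_hz):
    """Phase-locking of produced events to an isochronous stimulus:
    length of the mean resultant vector of event phases at the
    stimulus rate (1 = perfect locking, 0 = no locking)."""
    phases = 2 * np.pi * rate_hz * np.asarray(event_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Illustrative: jittered tapping at a slow (~2 Hz) and fast (~4.5 Hz)
# rate, with more timing jitter at the fast rate to mimic a
# rate-dependent synchronization cost.
rng = np.random.default_rng(5)
for rate, jitter_sd in ((2.0, 0.02), (4.5, 0.05)):
    ideal = np.arange(0, 30, 1 / rate)               # 30 s of taps
    taps = ideal + rng.normal(0, jitter_sd, len(ideal))
    print(f"{rate} Hz: sync = {sync_strength(taps, rate):.2f}")
```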

Language: English

Citations: 1