A listening advantage for native speech is reflected by attention-related activity in auditory cortex

Meng Liang, Johannes Gerwien, Alexander Gutschalk

et al.

Communications Biology, Journal Year: 2025, Volume and Issue: 8(1)

Published: Feb. 5, 2025

Language: English

Linear Modeling of Neurophysiological Responses to Speech and Other Continuous Stimuli: Methodological Considerations for Applied Research
Michael J. Crosse, Nathaniel J. Zuk, Giovanni M. Di Liberto

et al.

Frontiers in Neuroscience, Journal Year: 2021, Volume and Issue: 15

Published: Nov. 22, 2021

Cognitive neuroscience, in particular research on speech and language, has seen an increase in the use of linear modeling techniques for studying the processing of natural, environmental stimuli. The availability of such computational tools has prompted similar investigations in many clinical domains, facilitating the study of cognitive and sensory deficits under more naturalistic conditions. However, applying these techniques to clinical (and often highly heterogeneous) cohorts introduces an added layer of complexity to analysis procedures, potentially leading to instability and, as a result, inconsistent findings. Here, we outline some key methodological considerations for applied research, referring to a hypothetical experiment and worked examples with simulated electrophysiological (EEG) data. In particular, we focus on experimental design, data preprocessing, stimulus feature extraction, model training and evaluation, and the interpretation of model weights. Throughout the paper, we demonstrate the implementation of each step in MATLAB using the mTRF-Toolbox and discuss how to address issues that could arise in applied research. In doing so, we hope to provide a better intuition for these technical points and a resource for researchers investigating neural responses to ecologically rich stimuli.
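The core technique this paper covers, time-lagged linear (ridge) regression between a continuous stimulus feature and a neural signal, can be sketched in a few lines of NumPy. This is an illustrative reimplementation of the general method on simulated data, not the mTRF-Toolbox API (which is MATLAB); the variable names and the simulated EEG are hypothetical.

```python
import numpy as np

def lag_matrix(stimulus, min_lag, max_lag):
    """Build a time-lagged design matrix: column j holds stimulus[t - lag_j],
    so the regression weights form the temporal response function (TRF)."""
    n_times = len(stimulus)
    lags = np.arange(min_lag, max_lag + 1)
    X = np.zeros((n_times, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n_times - lag]
        else:
            X[:n_times + lag, j] = stimulus[-lag:]
    return X

def estimate_trf(stimulus, eeg, min_lag, max_lag, alpha=1.0):
    """Ridge-regularized TRF: solve (X'X + alpha*I) w = X'y per channel."""
    X = lag_matrix(stimulus, min_lag, max_lag)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    Xty = X.T @ eeg                    # eeg: (n_times, n_channels)
    return np.linalg.solve(XtX, Xty)   # (n_lags, n_channels)

# Simulated example: "EEG" is the envelope delayed by 10 samples, plus noise
rng = np.random.default_rng(0)
env = rng.standard_normal(5000)
eeg = 0.5 * np.roll(env, 10)[:, None] + 0.1 * rng.standard_normal((5000, 1))
trf = estimate_trf(env, eeg, min_lag=0, max_lag=30)
peak_lag = int(np.argmax(trf[:, 0]))   # recovers the simulated 10-sample delay
```

The estimated weights peak at the simulated lag, which is exactly the kind of sanity check on simulated data the paper recommends before applying the model to real recordings.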

Language: English

Citations: 118

Parallel processing in speech perception with local and global representations of linguistic context
Christian Brodbeck, Shohini Bhattasali, Aura AL Cruz Heredia

et al.

eLife, Journal Year: 2022, Volume and Issue: 11

Published: Jan. 21, 2022

Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses: one based on sublexical phoneme sequences, and one based on the phonemes of the current word alone; at the same time, neural responses also reflect a unified model that incorporates sentence-level constraints. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.

Language: English

Citations: 67

Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis

et al.

eLife, Journal Year: 2023, Volume and Issue: 12

Published: Nov. 29, 2023

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the responses and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories to brain responses through computational models with appropriate hypotheses.
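The first of the two questions above, whether a predictor has a significant neural representation, is typically answered by comparing held-out prediction accuracy of a full mTRF model against a reduced model without that predictor. Below is a minimal NumPy sketch of that comparison on simulated data; it is a generic ridge-regression illustration of the logic, not the Eelbrain API, and the predictor names (`envelope`, `word_onsets`) are hypothetical.

```python
import numpy as np

def lagged_design(predictors, max_lag):
    """Stack time-lagged copies of each predictor column into one design matrix."""
    n = predictors.shape[0]
    cols = []
    for p in predictors.T:
        for lag in range(max_lag + 1):
            col = np.zeros(n)
            col[lag:] = p[:n - lag]
            cols.append(col)
    return np.column_stack(cols)

def cv_prediction_r(predictors, response, max_lag, alpha=1.0):
    """Fit a ridge mTRF on the first half of the data and return the correlation
    between predicted and measured response on the held-out second half."""
    X = lagged_design(predictors, max_lag)
    half = len(response) // 2
    w = np.linalg.solve(X[:half].T @ X[:half] + alpha * np.eye(X.shape[1]),
                        X[:half].T @ response[:half])
    pred = X[half:] @ w
    return np.corrcoef(pred, response[half:])[0, 1]

# Simulated response driven by two hypothesized representations plus noise
rng = np.random.default_rng(1)
envelope = rng.standard_normal(6000)
word_onsets = rng.standard_normal(6000)  # hypothetical second predictor
response = (np.convolve(envelope, [0.4, 0.3, 0.2], mode="full")[:6000]
            + 0.5 * np.roll(word_onsets, 5)
            + rng.standard_normal(6000))

r_full = cv_prediction_r(np.column_stack([envelope, word_onsets]), response, 10)
r_reduced = cv_prediction_r(envelope[:, None], response, 10)
# r_full exceeds r_reduced: the second predictor carries unique information
```

In practice the full-versus-reduced difference would be evaluated with a statistical test across participants; here the point is only the structure of the comparison.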

Language: English

Citations: 40

Hearing loss is associated with delayed neural responses to continuous speech
Marlies Gillis, Lien Decruy, Jonas Vanthornhout

et al.

European Journal of Neuroscience, Journal Year: 2022, Volume and Issue: 55(6), P. 1671 - 1690

Published: March 1, 2022

We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared with their peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. For hearing-impaired listeners, however, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing: more time or more brain regions are involved in processing speech, which entails longer communication pathways in the brain. These longer pathways hamper information integration among the regions involved, which is reflected in longer processing times. Altogether, this suggests decreased neural speech processing efficiency in hearing-impaired listeners, as more time is required to process speech, and our results suggest that this reduction in efficiency occurs gradually as hearing deteriorates. From our results, it is also apparent that sound amplification does not solve the problem: even when listening to speech in silence at a comfortable loudness, hearing-impaired listeners process speech less efficiently.

Language: English

Citations: 39

Linguistic processing of task-irrelevant speech at a cocktail party
Paz Har-shai Yahav, Elana Zion Golumbic

eLife, Journal Year: 2021, Volume and Issue: 10

Published: May 4, 2021

Paying attention to one speaker in a noisy place can be extremely difficult, because to-be-attended and task-irrelevant speech compete for processing resources. We tested whether this competition is restricted to acoustic-phonetic interference or if it extends to linguistic processing as well. Neural activity was recorded using magnetoencephalography as human participants were instructed to attend to natural speech presented to one ear while task-irrelevant stimuli were presented to the other. Task-irrelevant stimuli consisted either of random sequences of syllables, or of syllables structured to form coherent sentences, using hierarchical frequency-tagging. We find that the phrasal structure of structured task-irrelevant stimuli was represented in the neural response in left inferior frontal and posterior parietal regions, indicating that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Additionally, neural tracking of to-be-attended speech in left inferior frontal regions was enhanced when competing with structured task-irrelevant stimuli, suggesting inherent competition between them for linguistic processing.

Language: English

Citations: 54

The effects of speech masking on neural tracking of acoustic and semantic features of natural speech

Sonia Yasmin, Vanessa C. Irsik, Ingrid S. Johnsrude

et al.

Neuropsychologia, Journal Year: 2023, Volume and Issue: 186, P. 108584 - 108584

Published: May 9, 2023

Language: English

Citations: 17

Eye movements track prioritized auditory features in selective attention to natural speech
Quirin Gehmacher, Juliane Schubert, Fabian Schmidt

et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: May 1, 2024

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural measures of auditory cognition.

Language: English

Citations: 8

Eelbrain: A Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2021, Volume and Issue: unknown

Published: Aug. 3, 2021

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the responses and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories to brain responses through computational models with appropriate hypotheses.

Language: English

Citations: 40

Neural Markers of Speech Comprehension: Measuring EEG Tracking of Linguistic Speech Representations, Controlling the Speech Acoustics
Marlies Gillis, Jonas Vanthornhout, Jonathan Z. Simon

et al.

Journal of Neuroscience, Journal Year: 2021, Volume and Issue: 41(50), P. 10316 - 10329

Published: Nov. 3, 2021

When listening to speech, our brain responses time lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for their acoustic properties. Therefore, the response to these linguistic representations might reflect unaccounted-for acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to an audiobook in 29 participants (22 females). We examined whether these representations contribute unique information over and beyond acoustic properties and each other. Indeed, not all representations were significantly tracked after controlling for acoustic properties, but phoneme surprisal, cohort entropy, and word frequency were. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are similarly tracked across different stories spoken by different readers. These results suggest that the proposed representations characterize processing of the linguistic content of speech. SIGNIFICANCE STATEMENT For clinical applications, it would be desirable to develop a neural marker of speech comprehension derived from responses to continuous speech. Such a measure would allow behavior-free evaluation of speech understanding; this would open doors toward better quantification of speech understanding in populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, and toward better targeted interventions and fitting of hearing devices.

Language: English

Citations: 35

Multivariate analysis of speech envelope tracking reveals coupling beyond auditory cortex

Nikos Chalas, Christoph Daube, Daniel S. Kluger

et al.

NeuroImage, Journal Year: 2022, Volume and Issue: 258, P. 119395 - 119395

Published: June 16, 2022

The systematic alignment of low-frequency brain oscillations with the acoustic speech envelope is well established and has been proposed to be crucial for actively perceiving speech. However, previous studies investigating speech-brain coupling in source space were restricted to univariate pairwise approaches between brain and speech signals, so tracking information in frequency-specific communication channels might be lacking. To address this, we propose a novel multivariate framework for estimating speech-brain coupling, where neural variability from source-derived activity is taken into account along with the rate of change of the speech envelope's amplitude (its derivative). We applied it to magnetoencephalographic (MEG) recordings while human participants (male and female) listened to one hour of continuous naturalistic speech, showing that the multivariate approach outperforms the corresponding univariate method at low and high frequencies across frontal, motor, and temporal areas. Systematic comparisons revealed that the gain at low frequencies (0.6 - 0.8 Hz) was related to the envelope, whereas at higher frequencies (from 10 Hz) it mostly reflected increased cortical variability. Furthermore, following a non-negative matrix factorization approach, we found distinct spatiotemporal components of speech processing. We confirm that speech-brain coupling operates mainly at two timescales (δ and θ frequency bands) and extend those findings, showing shorter delays for auditory-related components and longer delays for higher-association frontal and motor components, indicating temporal differences in speech-brain coupling and providing implications for hierarchical stimulus-driven speech processing.

Language: English

Citations: 28