Neural tracking of natural speech: an effective marker for post-stroke aphasia
Pieter De Clercq, Jill Kries, Ramtin Mehraram et al.

Brain Communications, Journal Year: 2025, Volume and Issue: 7(2)

Published: Jan. 1, 2025

Abstract: After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. Behavioural tests are the current standard to detect aphasia, but they are time-consuming, have limited ecological validity and require active patient cooperation. To address these limitations, we tested the potential of EEG-based neural envelope tracking of natural speech. The technique investigates the neural response to the temporal envelope of speech, which is critical for speech understanding because it encompasses cues for detecting and segmenting linguistic units (e.g. phrases, words and phonemes). We recorded EEG from 26 individuals with aphasia in the chronic phase after stroke (>6 months post-stroke) and 22 healthy controls while they listened to a 25-min story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta and gamma bands using mutual information analyses. Besides group differences in neural tracking measures, we also tested its suitability at the individual level using a support vector machine classifier. We further investigated the reliability of neural envelope tracking and the recording length required for accurate detection. Our results showed that individuals with aphasia had decreased envelope encoding compared with controls in the broadband and theta bands, which aligns with the assumed role of these bands in auditory processing of speech. Neural envelope tracking effectively captured aphasia at the individual level, with a classification accuracy of 83.33% and an area under the curve of 89.16%. Moreover, we demonstrated that high-accuracy detection can be achieved in a time-efficient (5–7 min) and highly reliable manner (split-half correlations between R = 0.61 and R = 0.96 across frequency bands). In this study, we identified specific characteristics of impaired neural responses to natural speech in aphasia, holding promise as a biomarker for the condition. Furthermore, we demonstrate that the technique can discriminate individuals with aphasia from healthy controls with high accuracy, in a time-efficient and highly reliable manner. These findings represent a significant advance towards more automated, objective and ecologically valid assessments of language impairments in aphasia.
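As a rough illustration of the individual-level classification step described above, the sketch below trains a linear support vector machine on simulated per-subject envelope-tracking features. The group sizes match the study, but every feature value, band mean, and parameter is invented for the demo; this is not the paper's pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Simulated per-subject features: one envelope-tracking (MI) value per band
# (broadband, delta, theta, alpha, beta, gamma). The aphasia group is given
# lower broadband/delta/theta values; all numbers are invented for the demo.
n_controls, n_patients = 22, 26
X_controls = rng.normal([0.30, 0.28, 0.25, 0.15, 0.12, 0.10], 0.05,
                        size=(n_controls, 6))
X_patients = rng.normal([0.22, 0.21, 0.18, 0.14, 0.12, 0.10], 0.05,
                        size=(n_patients, 6))
X = np.vstack([X_controls, X_patients])
y = np.array([0] * n_controls + [1] * n_patients)   # 1 = aphasia

# Cross-validated linear SVM: out-of-fold probabilities give accuracy and AUC.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
proba = cross_val_predict(clf, X, y, cv=8, method="predict_proba")[:, 1]
print(f"accuracy: {np.mean((proba > 0.5) == y):.2f}")
print(f"AUC:      {roc_auc_score(y, proba):.2f}")
```

With well-separated simulated groups the cross-validated AUC lands well above chance, mirroring the kind of individual-level discrimination the study reports.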

Language: English

Eye movements track prioritized auditory features in selective attention to natural speech
Quirin Gehmacher, Juliane Schubert, Fabian Schmidt et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: May 1, 2024

Abstract: Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of auditory cognition.

Language: English

Citations

8

Eelbrain: A Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2021, Volume and Issue: unknown

Published: Aug. 3, 2021

Abstract: Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the responses, and uses those as predictor variables for the neural signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are its characteristics? Thus, predictor variables can be systematically combined and evaluated jointly to model neural processing at multiple levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different levels of description through computational models with appropriate hypotheses.

Language: English

Citations

40

The effects of data quantity on performance of temporal response function analyses of natural speech processing
Juraj Mesík, Magdalena Wojtczak

Frontiers in Neuroscience, Journal Year: 2023, Volume and Issue: 16

Published: Jan. 12, 2023

In recent years, temporal response function (TRF) analyses of neural activity recordings evoked by continuous naturalistic stimuli have become increasingly popular for characterizing response properties within the auditory hierarchy. However, despite this rise in TRF usage, relatively few educational resources for these tools exist. Here we use a dual-talker continuous speech paradigm to demonstrate how a key parameter of experimental design, the quantity of acquired data, influences TRF analyses fit to either individual data (subject-specific analyses) or group data (generic analyses). We show that although model prediction accuracy increases monotonically with data quantity, the amount of data required to achieve significant prediction accuracies can vary substantially based on whether the fitted model contains densely (e.g., acoustic envelope) or sparsely (e.g., lexical surprisal) spaced features, especially when the goal is to capture the aspect of neural responses uniquely explained by specific features. Moreover, we demonstrate that generic models can exhibit high performance on small amounts of test data (2–8 min), if they are trained on a sufficiently large data set. As such, they may be particularly useful for clinical and multi-task study designs with limited recording time. Finally, we show that the regularization procedure used in fitting can interact with the quantity of training data, with larger training quantities resulting in systematically larger TRF amplitudes. Together, the demonstrations in this work should aid new users of TRF analyses and, in combination with other tools such as piloting and power analyses, may serve as a detailed reference for choosing acquisition duration in future studies.
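The dense-versus-sparse point above can be illustrated with a toy simulation: a TRF estimated from an envelope-like (dense) predictor stabilizes with far less data than one estimated from a surprisal-like (sparse) predictor, because the sparse predictor carries far fewer informative samples per minute. All signals, rates, and noise levels below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_lags = 64, 32
true_trf = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 4.0)

def lagged(x):
    """Time-lagged design matrix with causal lags 0..n_lags-1."""
    X = np.column_stack([np.roll(x, k) for k in range(n_lags)])
    X[:n_lags] = 0  # drop wrap-around samples
    return X

def trf_fidelity(x, n, noise_sd=2.0):
    """Fit a ridge TRF and return its correlation with the true TRF."""
    X = lagged(x)
    eeg = X @ true_trf + noise_sd * rng.normal(size=n)
    w = np.linalg.solve(X.T @ X + 1.0 * np.eye(n_lags), X.T @ eeg)
    return float(np.corrcoef(w, true_trf)[0, 1])

results = {}
for minutes in (2, 8, 32):
    n = minutes * 60 * fs
    dense = np.abs(rng.normal(size=n))   # envelope-like: a value at every sample
    sparse = np.zeros(n)                 # surprisal-like: ~1 event per second
    sparse[rng.choice(n, size=n // fs, replace=False)] = rng.normal(size=n // fs)
    results[minutes] = (trf_fidelity(dense, n), trf_fidelity(sparse, n))
    print(f"{minutes:2d} min: dense r = {results[minutes][0]:.2f}, "
          f"sparse r = {results[minutes][1]:.2f}")
```

At short durations the dense-feature TRF is already well estimated while the sparse-feature TRF lags behind; with more data the two converge, echoing the paper's observation that required acquisition time depends on feature density.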

Language: English

Citations

14

Investigating the attentional focus to workplace-related soundscapes in a complex audio-visual-motor task using EEG
Marc Rosenkranz, Timur Cetin, Verena Uslar et al.

Frontiers in Neuroergonomics, Journal Year: 2023, Volume and Issue: 3

Published: Feb. 2, 2023

Introduction: In demanding work situations (e.g., during a surgery), the processing of complex soundscapes varies over time and can be a burden for medical personnel. Here we study, using mobile electroencephalography (EEG), how humans process workplace-related soundscapes while performing a complex audio-visual-motor task (3D Tetris). Specifically, we wanted to know how the attentional focus changes the processing of the soundscape as a whole. Method: Participants played a game of 3D Tetris in which they had to use both hands to control the falling blocks. At the same time, participants listened to a complex soundscape, similar to what is found in an operating room (i.e., the sound of machinery, people talking in the background, alarm sounds, and instructions). In this within-subject design, participants had to react to spoken instructions (e.g., "place the next block in the upper left corner") and to a specific sound, depending on the experimental condition: either a specific sound originating from a fixed location or a beep that originated from varying locations. Attention was either reflected by a narrow focus, as it was easy to detect the specific sound and most other sounds could be ignored, or by a wide focus, as participants were required to monitor multiple different sound streams. Results and discussion: We show the robustness of the N1 and P3 event-related potential responses in a dynamic environment with a complex auditory soundscape. Furthermore, we used temporal response functions to study the processing of the soundscape as a whole. This is a step toward studying workplace-related sound processing with EEG.

Language: English

Citations

13

Beyond linear neural envelope tracking: a mutual information approach
Pieter De Clercq, Jonas Vanthornhout, Maaike Vandermosten et al.

Journal of Neural Engineering, Journal Year: 2023, Volume and Issue: 20(2), P. 026007 - 026007

Published: Feb. 22, 2023

Objective. The human brain tracks the temporal envelope of speech, which contains essential cues for speech understanding. Linear models are the most common tool to study neural envelope tracking. However, information on how speech is processed can be lost since nonlinear relations are precluded. Analysis based on mutual information (MI), on the other hand, can detect both linear and nonlinear relations and is gradually becoming more popular in the field of neural envelope tracking. Yet, several different approaches to calculating MI are applied with no consensus on which approach to use. Furthermore, the added value of nonlinear techniques remains a subject of debate in the field. The present paper aims to resolve these open questions. Approach. We analyzed electroencephalography (EEG) data of participants listening to continuous speech and applied MI analyses and linear models. Main results. Comparing the MI approaches, we conclude that results are most reliable and robust using the Gaussian copula approach, which first transforms the data to standard Gaussians. With this approach, MI analysis is a valid technique for studying neural envelope tracking. Like linear models, it allows spatial interpretations of speech processing, peak latency analyses, and applications to multiple EEG channels combined. In a final analysis, we tested whether nonlinear components were present in the neural response to the envelope by first removing all linear components from the data. We robustly detected nonlinear components at the single-subject level using the MI analysis. Significance. We demonstrate that the human brain processes speech in a nonlinear way. Unlike linear models, the MI analysis detects such nonlinear relations, proving its added value for studying neural envelope tracking. In addition, the MI analysis retains spatial and temporal characteristics of speech processing, an advantage lost when using more complex (nonlinear) deep neural networks.
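A minimal sketch of the Gaussian copula MI estimator the paper favours, for two 1-D signals: each signal is rank-transformed to standard-normal quantiles, after which the Gaussian closed form gives MI from the correlation. The example data are simulated, chosen so that a monotone but strongly nonlinear relation distorts the Pearson correlation while the rank-based copula MI captures the dependence.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copnorm(x):
    """Gaussian-copula transform: replace values by the standard-normal
    quantiles of their ranks."""
    return norm.ppf(rankdata(x) / (len(x) + 1))

def gcmi(x, y):
    """Gaussian-copula mutual information (in bits) between two 1-D signals."""
    r = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)

rng = np.random.default_rng(3)
x = rng.normal(size=5000)                             # e.g. a speech envelope
y = np.exp(2.0 * (x + 0.3 * rng.normal(size=5000)))   # monotone, heavy-tailed

# Pearson correlation is dominated by the heavy-tailed nonlinearity,
# while the rank-based copula MI measures the dependence robustly.
print(f"Pearson r = {np.corrcoef(x, y)[0, 1]:.2f}")
print(f"GCMI      = {gcmi(x, y):.2f} bits")
```

Because the copula transform depends only on ranks, the estimate is invariant to any monotone rescaling of either signal, which is one reason the paper finds this variant the most robust.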

Language: English

Citations

13

The integration of continuous audio and visual speech in a cocktail-party environment depends on attention
Farhin Ahmed, Aaron Nidiffer, Aisling E. O’Sullivan et al.

NeuroImage, Journal Year: 2023, Volume and Issue: 274, P. 120143 - 120143

Published: April 29, 2023

In noisy environments, our ability to understand speech benefits greatly from seeing the speaker's face. This is attributed to the brain's ability to integrate audio and visual information, a process known as multisensory integration. In addition, selective attention plays an enormous role in what we understand, the so-called cocktail-party phenomenon. But how multisensory integration and selective attention interact remains incompletely understood, particularly in the case of natural, continuous speech. Here, we addressed this issue by analyzing EEG data recorded from participants who undertook a multisensory cocktail-party task using natural speech. To assess multisensory integration, we modeled the EEG responses to the speech in two ways. The first assumed that audiovisual speech processing is simply a linear combination of audio and visual speech processing (i.e., an A + V model), while the second allows for the possibility of audiovisual interactions (i.e., an AV model). Applying these models to the data revealed that EEG responses to attended audiovisual speech were better explained by an AV model, providing evidence for multisensory integration. In contrast, unattended audiovisual speech responses were best captured by the A + V model, suggesting that multisensory integration is suppressed for unattended speech. Follow-up analyses revealed some limited evidence for early multisensory integration of unattended speech, with no integration occurring at later levels of processing. We take these findings as evidence that multisensory integration of audiovisual speech occurs at multiple levels of processing in the brain, each of which can be differentially affected by attention.
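The A + V versus AV model comparison can be sketched with a toy regression, without time lags and on simulated signals: an additive model and a model with an explicit audiovisual interaction term are compared by cross-validated prediction accuracy. All signals and coefficients are invented; the real study used lagged TRF-style models.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
audio = rng.normal(size=n)     # audio envelope (simulated)
visual = rng.normal(size=n)    # lip-aperture signal (simulated)

# Simulated "attended" EEG containing a genuine audiovisual interaction.
eeg = 1.0 * audio + 0.8 * visual + 0.6 * audio * visual + rng.normal(size=n)

def cv_r(X, y, folds=5):
    """Cross-validated correlation between OLS predictions and the signal."""
    idx = np.arange(len(y))
    rs = []
    for fold in np.array_split(idx, folds):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        rs.append(np.corrcoef(X[fold] @ w, y[fold])[0, 1])
    return float(np.mean(rs))

X_additive = np.column_stack([audio, visual])              # "A + V" model
X_av = np.column_stack([audio, visual, audio * visual])    # "AV" model

print(f"A + V model: cross-validated r = {cv_r(X_additive, eeg):.3f}")
print(f"AV model:    cross-validated r = {cv_r(X_av, eeg):.3f}")
```

When the simulated response contains a true interaction, the AV model wins on held-out data, which is the logic behind reading better AV-model fits as evidence for integration.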

Language: English

Citations

13

Detecting post-stroke aphasia using EEG-based neural envelope tracking of natural speech
Pieter De Clercq, Jill Kries, Ramtin Mehraram et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: March 17, 2023

Abstract: After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. The standard behavioral tests used to diagnose aphasia are time-consuming, require subjective interpretation, and have low ecological validity. As a consequence, comorbid cognitive problems present in individuals with aphasia (IWA) can bias test results, generating a discrepancy between test outcomes and everyday-life language abilities. Neural tracking of the speech envelope is a promising tool for investigating brain responses to natural speech. The envelope is crucial for speech understanding, encompassing cues for detecting and segmenting linguistic units, e.g., phrases, words and phonemes. In this study, we aimed to test the potential of the neural envelope tracking technique for detecting language impairments in IWA. We recorded EEG from 27 IWA in the chronic phase after stroke and 22 healthy controls while they listened to a 25-minute story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta, and gamma bands using mutual information analysis. Besides group differences in neural tracking measures, we also tested its suitability at the individual level using a Support Vector Machine (SVM) classifier. We further investigated the recording length required for the SVM to detect aphasia and to obtain reliable outcomes. IWA displayed decreased neural envelope tracking compared to controls in the broadband and theta bands, which is in line with the assumed role of these bands in auditory processing of speech. Neural envelope tracking effectively captured aphasia at the individual level, with a classification accuracy of 84% and an area under the curve of 88%. Moreover, we demonstrated that high-accuracy detection of aphasia can be achieved in a time-efficient (5 minutes) and highly reliable manner (split-half reliability correlations between R=0.62 and R=0.96 across frequency bands). Our study shows that neural envelope tracking is an effective biomarker for post-stroke aphasia, with diagnostic potential, high reliability, and suitability for individual-level assessment. This work represents a significant step towards more automatic, objective, and ecologically valid assessments of language impairments in aphasia.
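The split-half reliability analysis mentioned above can be sketched on simulated per-subject tracking values: a stable subject-level trait plus half-specific measurement noise, correlated across the two halves. All numbers are invented, and the Spearman-Brown step is a common companion to split-half correlations rather than necessarily the preprint's exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subjects = 49   # 27 IWA + 22 controls, as in the study

# Simulated per-subject tracking values from two halves of the recording:
# a stable subject-level trait plus half-specific measurement noise.
trait = rng.normal(0.25, 0.06, size=n_subjects)
half1 = trait + rng.normal(0.0, 0.02, size=n_subjects)
half2 = trait + rng.normal(0.0, 0.02, size=n_subjects)

r, _ = pearsonr(half1, half2)
# Spearman-Brown projects the split-half correlation to full-length reliability.
full_length = 2 * r / (1 + r)
print(f"split-half r = {r:.2f}, Spearman-Brown corrected = {full_length:.2f}")
```

When measurement noise is small relative to between-subject differences, the split-half correlation is high, which is the property that lets short recordings support individual-level assessment.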

Language: English

Citations

12

Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field
Florine L. Bachmann, Joshua P. Kulasingham, Kasper Eskelund et al.

Trends in Hearing, Journal Year: 2024, Volume and Issue: 28

Published: Jan. 1, 2024

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, subcortical responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses in the sound-field, and assess the amount of data needed to estimate subcortical TRFs. Electroencephalography (EEG) was recorded from 24 normal-hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks in both sound-field and earphone conditions, highly consistent with click-evoked ABRs. However, the sound-field condition required slightly more data (16 minutes) to achieve clear wave V peaks than the earphone condition (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to speech in the sound-field, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
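As a toy version of the rectification-based pipeline above, the sketch below uses a half-wave-rectified signal as the TRF predictor and reads off the latency of a wave V-like peak. All signals, rates, and the peak latency are simulated for the demo; the real study found auditory nerve models to be the better-performing front end.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 4096                            # Hz; subcortical analysis needs high rates
n = 10 * fs                          # ten seconds of simulated audio
audio = rng.normal(size=n)

# Half-wave rectification as a crude stand-in for peripheral nonlinearity:
# the rectified waveform, not the raw audio, serves as the TRF predictor.
predictor = np.maximum(audio, 0.0)

# Ground-truth "brainstem" TRF: a single wave V-like peak near 7 ms.
lags_ms = np.arange(0.0, 15.0, 1000.0 / fs)
true_trf = np.exp(-0.5 * ((lags_ms - 7.0) / 1.0) ** 2)
n_lags = len(lags_ms)
eeg = np.convolve(predictor, true_trf)[:n] + 5.0 * rng.normal(size=n)

# Time-lagged ridge regression recovers the TRF; read off the peak latency.
X = np.column_stack([np.roll(predictor, k) for k in range(n_lags)])
X[:n_lags] = 0
w = np.linalg.solve(X.T @ X + 100.0 * np.eye(n_lags), X.T @ eeg)
latency = lags_ms[np.argmax(w)]
print(f"estimated wave V latency: {latency:.1f} ms")
```

The recovered peak latency tracks the simulated one, analogous to how the study evaluates TRFs by the presence and timing of wave V.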

Language: English

Citations

4

Exploring Relevant Features for EEG-Based Investigation of Sound Perception in Naturalistic Soundscapes
Thorge Haupt, Marc Rosenkranz, Martin G. Bleichner et al.

eNeuro, Journal Year: 2025, Volume and Issue: unknown, P. ENEURO.0287-24.2024

Published: Jan. 3, 2025

A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including sound onset, the envelope, and the mel-spectrogram. Using continuous data analysis, we contrast different feature sets in terms of their predictive power for unseen data and thus their distinct contributions to explaining neural data. For this, we analyse data from a complex audio-visual motor task with a naturalistic soundscape. The results demonstrated that the feature sets explaining most of the variability in the neural data were a combination of highly detailed acoustic representations with a description of specific sound onsets. Furthermore, they showed that established features from speech research can be applied to naturalistic soundscapes. Crucially, the outcome hinged on excluding periods devoid of sound onsets in the case of discrete features. Our study highlights the importance of comprehensively describing the soundscape, including nonacoustic aspects, to fully understand the dynamics of sound perception in complex situations. This approach can serve as a foundation for future studies aiming to investigate sound perception in natural settings. Significance Statement: This study is an important step in our broader endeavor to understand how we perceive sounds in everyday life. Although conducted in a stationary setting, it provides foundational insights into the environmental information necessary to obtain and model neural responses to sound. We delved into various features, from acoustic representations to sound labeling, with the goal of refining models related to sound perception. Our findings particularly highlight the need for thorough consideration of features across contexts, from laboratory settings to mobile EEG technologies, and pave the way for investigations in more natural settings, advancing the field of auditory neuroscience.

Language: English

Citations

0

Resilience and vulnerability of neural speech tracking after hearing restoration
Alessandra Federici, Marta Fantoni, Francesco Pavani et al.

Communications Biology, Journal Year: 2025, Volume and Issue: 8(1)

Published: March 1, 2025

The role of early auditory experience in the development of neural speech tracking remains an open question. To address this issue, we measured neural speech tracking in children with or without functional hearing during their first year of life, after their hearing was restored with cochlear implants (CIs), as well as in hearing controls (HC). Neural speech tracking in CI users is unaffected by the absence of perinatal auditory experience: CI users and HC exhibit a similar magnitude of tracking at short timescales of brain activity. However, tracking is delayed in CI users, and its timing depends on the age of hearing restoration. Conversely, at longer timescales, tracking is dampened in participants using CIs, thereby accounting for their speech comprehension deficits. These findings highlight the resilience of sensory processing in speech tracking while also demonstrating the vulnerability of higher-level processing to a lack of early auditory experience. They show that a phase of hearing loss affects tracking at different timescales differently: tracking at longer timescales is present but weaker, impacting comprehension.

Language: English

Citations

0