Neural tracking of natural speech: an effective marker for post-stroke aphasia
Pieter De Clercq, Jill Kries, Ramtin Mehraram

et al.

Brain Communications, Journal year: 2025, Issue: 7(2)

Published: Jan. 1, 2025

Abstract: After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. Behavioural tests are the current standard to detect aphasia, but they are time-consuming, have limited ecological validity and require active patient cooperation. To address these limitations, we tested the potential of EEG-based neural envelope tracking of natural speech. The technique investigates the neural response to the temporal envelope of speech, which is critical for speech understanding because it encompasses cues for detecting and segmenting linguistic units (e.g. phrases, words and phonemes). We recorded EEG from 26 individuals with aphasia in the chronic phase after stroke (>6 months post-stroke) and 22 healthy controls while they listened to a 25-min story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta and gamma bands using mutual information analyses. Besides group differences in neural tracking measures, we also tested its suitability at the individual level using a support vector machine classifier. We further investigated the reliability of neural envelope tracking and the recording length required for accurate detection. Our results showed that individuals with aphasia had decreased envelope encoding compared to controls in the broadband and theta bands, which aligns with the assumed role of these bands in auditory processing of speech. Neural envelope tracking effectively captured aphasia at the individual level, with a classification accuracy of 83.33% and an area under the curve of 89.16%. Moreover, we demonstrated that high-accuracy detection of aphasia can be achieved in a time-efficient (5–7 min) and highly reliable manner (split-half correlations between R = 0.61 and 0.96 across frequency bands). In this study, we identified specific characteristics of impaired neural tracking in aphasia, holding promise as a biomarker for the condition. Furthermore, we demonstrate the technique's ability to discriminate aphasia from healthy controls at the individual level with high accuracy, in a time-efficient and highly reliable manner. These findings represent a significant advance towards more automated, objective and ecologically valid assessments of language impairments in aphasia.
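The individual-level classification step described above can be sketched as follows. This is an illustrative sketch using synthetic stand-in data (group sizes and the number of frequency bands are taken from the abstract; the feature values, linear kernel and cross-validation setup are assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-subject envelope-tracking scores (e.g. mutual
# information) in five frequency bands; the aphasia group tracks less strongly.
n_aphasia, n_control, n_bands = 26, 22, 5
X = np.vstack([
    rng.normal(0.8, 0.3, (n_aphasia, n_bands)),   # individuals with aphasia
    rng.normal(1.2, 0.3, (n_control, n_bands)),   # healthy controls
])
y = np.concatenate([np.ones(n_aphasia), np.zeros(n_control)])  # 1 = aphasia

# Linear SVM on z-scored features, evaluated with stratified cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
pred = cross_val_predict(clf, X, y, cv=cv)
score = cross_val_predict(clf, X, y, cv=cv, method="decision_function")

print(f"accuracy = {accuracy_score(y, pred):.2f}")
print(f"AUC      = {roc_auc_score(y, score):.2f}")
```

Cross-validated predictions (rather than training-set fit) are what justify the individual-level claim: each subject is classified by a model that never saw their data.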

Language: English

Emergence of the cortical encoding of phonetic features in the first year of life
Giovanni M. Di Liberto, Adam Attaheri, Giorgia Cantisani

et al.

Nature Communications, Journal year: 2023, Issue: 14(1)

Published: Dec. 1, 2023

Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed in behavioural investigations, likely rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the encoding of phonetic features in a longitudinal cohort of infants when aged 4, 7 and 11 months, as well as in adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.

Language: English

Cited by

17

Eelbrain: A Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal year: 2021, Issue: unknown

Published: Aug. 3, 2021

Abstract: Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, in a continuous speech paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the responses, and uses those as predictor variables for the neural signal. This is analogous to a multiple regression problem, with the addition of a time dimension. The TRF analysis decomposes the brain responses associated with different predictors by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: 1) Is there a significant neural representation corresponding to this predictor variable? And if so, 2) what are its characteristics? Thus, predictor variables can be systematically combined and evaluated jointly to model neural processing at multiple levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories to brain responses through computational models with appropriate linking hypotheses.
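The time-lagged regression underlying mTRF estimation can be sketched in plain NumPy. Eelbrain itself estimates TRFs by boosting, so the ridge solution below, the simulated data and all variable names are illustrative assumptions only:

```python
import numpy as np

def lag_matrix(x, lags):
    """Design matrix of time-lagged copies of predictor x (zero-padded)."""
    n = len(x)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = x[: n - lag]
    return X

def fit_mtrf(predictors, eeg, lags, alpha=1.0):
    """Multivariate TRF via ridge regression: stack lagged copies of every
    predictor, solve jointly, return one lag profile per predictor."""
    X = np.hstack([lag_matrix(p, lags) for p in predictors])
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return w.reshape(len(predictors), len(lags))

# Simulate: the "EEG" responds to an envelope predictor at a 7-sample lag,
# while a second predictor (word onsets, here just noise) is unrelated.
rng = np.random.default_rng(1)
n = 4096
envelope = rng.normal(size=n)
onsets = rng.normal(size=n)
eeg = np.roll(envelope, 7) + 0.5 * rng.normal(size=n)

lags = np.arange(16)                 # 0-250 ms at a notional 64 Hz
trf = fit_mtrf([envelope, onsets], eeg, lags)
print("envelope TRF peaks at lag", lags[np.argmax(trf[0])])
```

Because the predictors are fitted jointly, the unrelated predictor's TRF stays near zero; this is the sense in which the mTRF decomposes responses across simultaneous representations.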

Language: English

Cited by

41

The effects of data quantity on performance of temporal response function analyses of natural speech processing
Juraj Mesík, Magdalena Wojtczak

Frontiers in Neuroscience, Journal year: 2023, Issue: 16

Published: Jan. 12, 2023

In recent years, temporal response function (TRF) analyses of neural activity recordings evoked by continuous naturalistic stimuli have become increasingly popular for characterizing response properties within the auditory hierarchy. However, despite this rise in TRF usage, relatively few educational resources for these tools exist. Here we use a dual-talker continuous speech paradigm to demonstrate how a key parameter of experimental design, the quantity of acquired data, influences TRF analyses fit to either individual data (subject-specific analyses) or group data (generic analyses). We show that although model prediction accuracy increases monotonically with data quantity, the amount of data required to achieve significant prediction accuracies can vary substantially based on whether the fitted model contains densely (e.g., acoustic envelope) or sparsely (e.g., lexical surprisal) spaced features, especially when the goal is to capture the aspect of neural responses uniquely explained by specific features. Moreover, we demonstrate that generic models can exhibit high performance on small amounts of test data (2–8 min), if they are trained on a sufficiently large data set. As such, they may be particularly useful for clinical and multi-task study designs with limited recording time. Finally, we show that the regularization procedure used in fitting TRF models can interact with the quantity of data used to fit the models, with larger training quantities resulting in systematically larger TRF amplitudes. Together, the demonstrations in this work should aid new users of TRF analyses, and in combination with other tools, such as piloting and power analyses, may serve as a detailed reference for choosing acquisition duration in future studies.
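The core measurement in this kind of study, held-out prediction accuracy as a function of training-data quantity, can be illustrated with a simulated subject-specific analysis. The sampling rate, response kernel, noise level and ridge parameter below are arbitrary assumptions for the sketch, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 64
n_total = fs * 60 * 10                      # ten minutes of simulated data
stim = rng.normal(size=n_total)             # dense predictor (e.g. envelope)
# Causal 12-tap response kernel plus additive noise = simulated EEG channel.
eeg = np.convolve(stim, np.hanning(12), mode="full")[:n_total]
eeg += 2.0 * rng.normal(size=n_total)

def lagged(x, n_lags):
    return np.stack([np.roll(x, k) for k in range(n_lags)], axis=1)

def trf_accuracy(train_samples, n_test=fs * 120, n_lags=16, alpha=1e2):
    """Fit a ridge TRF on the first `train_samples`, report Pearson r
    between predicted and actual EEG on a held-out final segment."""
    Xtr, ytr = lagged(stim[:train_samples], n_lags), eeg[:train_samples]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(n_lags), Xtr.T @ ytr)
    Xte, yte = lagged(stim[-n_test:], n_lags), eeg[-n_test:]
    return np.corrcoef(Xte @ w, yte)[0, 1]

accs = {mins: trf_accuracy(fs * 60 * mins) for mins in (1, 2, 4, 8)}
for mins, r in accs.items():
    print(f"{mins} min training -> r = {r:.3f}")
```

With a dense predictor such as this, accuracy saturates quickly; the paper's point is that sparse features (e.g. word-level surprisal) supply far fewer informative samples per minute and therefore need more data for the same model.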

Language: English

Cited by

14

The integration of continuous audio and visual speech in a cocktail-party environment depends on attention
Farhin Ahmed, Aaron Nidiffer, Aisling E. O’Sullivan

et al.

NeuroImage, Journal year: 2023, Issue: 274, P. 120143 - 120143

Published: April 29, 2023

In noisy environments, our ability to understand speech benefits greatly from seeing the speaker's face. This is attributed to the brain's ability to integrate audio and visual information, a process known as multisensory integration. In addition, selective attention plays an enormous role in what we understand, the so-called cocktail-party phenomenon. But how multisensory integration and selective attention interact remains incompletely understood, particularly in the case of natural, continuous speech. Here, we addressed this issue by analyzing EEG data recorded from participants who undertook a multisensory cocktail-party task using natural speech. To assess integration, we modeled the EEG responses to the speech in two ways. The first assumed that audiovisual speech processing is simply a linear combination of audio and visual processing (i.e., an A + V model), while the second allows for the possibility of audiovisual interactions (i.e., an AV model). Applying these models to the data revealed that responses to attended speech were better explained by the AV model, providing evidence for multisensory integration. In contrast, responses to unattended speech were best captured by the A + V model, suggesting that multisensory integration is suppressed for unattended speech. Follow-up analyses revealed some limited evidence for early integration of unattended speech, with no integration occurring at later levels of processing. We take these findings to indicate that multisensory integration occurs at multiple levels in the brain, each of which can be differentially affected by attention.

Language: English

Cited by

14

Investigating the attentional focus to workplace-related soundscapes in a complex audio-visual-motor task using EEG
Marc Rosenkranz, Timur Cetin, Verena Uslar

et al.

Frontiers in Neuroergonomics, Journal year: 2023, Issue: 3

Published: Feb. 2, 2023

Introduction: In demanding work situations (e.g., during a surgery), the processing of complex soundscapes varies over time and can be a burden for medical personnel. Here we study, using mobile electroencephalography (EEG), how humans process workplace-related soundscapes while performing a complex audio-visual-motor task (3D Tetris). Specifically, we wanted to know how the attentional focus changes the processing of the soundscape as a whole. Method: Participants played a game of 3D Tetris in which they had to use both hands to control the falling blocks. At the same time, participants listened to a soundscape similar to what is found in an operating room (i.e., the sound of machinery, people talking in the background, alarm sounds, and instructions). In this within-subject design, participants had to react to instructions (e.g., "place the next block in the upper left corner") and to sounds that, depending on the experimental condition, were either a specific alarm sound originating from a fixed location or a beep sound that originated from varying locations. Attention to the alarm sound reflected a narrow attentional focus, as it was easy to detect and most other sounds could be ignored. Attention to the beep sounds reflected a wide attentional focus, as it required participants to monitor multiple different sound streams. Results and discussion: We show the robustness of the N1 and P3 event-related potential responses in a dynamic task with a complex auditory soundscape. Furthermore, we used temporal response functions to study the processing of the soundscape as a whole. This is a step toward studying attention to workplace-related soundscapes in natural settings using mobile EEG.

Language: English

Cited by

13

Beyond linear neural envelope tracking: a mutual information approach
Pieter De Clercq, Jonas Vanthornhout, Maaike Vandermosten

et al.

Journal of Neural Engineering, Journal year: 2023, Issue: 20(2), P. 026007 - 026007

Published: Feb. 22, 2023

Objective. The human brain tracks the temporal envelope of speech, which contains essential cues for speech understanding. Linear models are the most common tool to study neural envelope tracking. However, information on how speech is processed can be lost since nonlinear relations are precluded. Analysis based on mutual information (MI), on the other hand, can detect both linear and nonlinear relations and is gradually becoming more popular in the field of neural envelope tracking. Yet, several different approaches to calculating MI are applied, with no consensus on which approach to use. Furthermore, the added value of nonlinear techniques remains a subject of debate in the field. The present paper aims to resolve these open questions. Approach. We analyzed electroencephalography (EEG) data of participants listening to continuous speech and applied MI analyses and linear models. Main results. Comparing the MI approaches, we conclude that results are most reliable and robust using the Gaussian copula approach, which first transforms the data to standard Gaussians. With this approach, the MI analysis is a valid technique for studying neural envelope tracking. Like linear models, it allows spatial interpretations of speech processing, peak latency analyses, and applications to multiple EEG channels combined. In a final analysis, we tested whether nonlinear components were present in the neural response by first removing all linear components from the data. We robustly detected nonlinear components at the single-subject level using the MI analysis. Significance. We demonstrate that the human brain processes speech in a nonlinear way. Unlike linear models, the MI analysis detects such nonlinear relations, proving its added value to neural envelope tracking. In addition, the MI analysis retains the spatial and temporal characteristics of speech processing, an advantage lost when using more complex (nonlinear) deep neural networks.
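The Gaussian copula estimator favoured here can be sketched in a few lines: rank-transform each variable to a standard Gaussian, then compute MI in closed form from the correlation of the transformed variables. This is a minimal illustration with simulated signals (the bias correction used in practice is omitted, and the variable names are assumptions):

```python
import numpy as np
from scipy.stats import norm, rankdata

def copnorm(x):
    """Copula-normalization: ranks -> uniform -> standard normal quantiles."""
    return norm.ppf(rankdata(x) / (len(x) + 1))

def gcmi(x, y):
    """Gaussian-copula MI (bits) between two 1-D variables; no bias correction."""
    r = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1 - r ** 2)

rng = np.random.default_rng(3)
n = 5000
env = rng.normal(size=n)                 # stand-in speech envelope
resp = env ** 3 + 0.1 * rng.normal(size=n)   # nonlinear (cubic) "EEG" response
unrelated = rng.normal(size=n)

# A plain Gaussian MI on the raw data underestimates the cubic relation...
raw_r = np.corrcoef(env, resp)[0, 1]
mi_raw = -0.5 * np.log2(1 - raw_r ** 2)
# ...while the copula transform recovers it, since it depends only on ranks.
print(f"raw-Gaussian MI: {mi_raw:.2f} bits, GCMI: {gcmi(env, resp):.2f} bits")
print(f"unrelated GCMI:  {gcmi(env, unrelated):.4f} bits")
```

Note that for a single variable pair the copula estimator is sensitive to monotonic nonlinearities such as the one above; the multivariate extensions used in the paper go further, but follow the same transform-then-Gaussian-MI recipe.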

Language: English

Cited by

13

Detecting post-stroke aphasia using EEG-based neural envelope tracking of natural speech
Pieter De Clercq, Jill Kries, Ramtin Mehraram

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal year: 2023, Issue: unknown

Published: March 17, 2023

Abstract: After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. The standard behavioral tests used to diagnose aphasia are time-consuming, require subjective interpretation, and have low ecological validity. As a consequence, comorbid cognitive problems present in individuals with aphasia (IWA) can bias test results, generating a discrepancy between test outcomes and everyday-life language abilities. Neural tracking of the speech envelope is a promising tool for investigating brain responses to natural speech. The envelope is crucial for speech understanding, encompassing cues for detecting and segmenting linguistic units, e.g., phrases, words and phonemes. In this study, we aimed to test the potential of the neural envelope tracking technique for detecting language impairments in IWA. We recorded EEG of 27 IWA in the chronic phase after stroke and 22 healthy controls while they listened to a 25-minute story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta, and gamma bands using mutual information analysis. Besides group differences in neural tracking measures, we also tested its suitability at the individual level using a Support Vector Machine (SVM) classifier. We further investigated the recording length required for the SVM to detect aphasia and to obtain reliable outcomes. IWA displayed decreased neural envelope tracking compared to healthy controls in the broadband and theta band, which is in line with the assumed role of these bands in auditory processing of speech. Neural envelope tracking effectively captured aphasia at the individual level, with an SVM accuracy of 84% and an area under the curve of 88%. Moreover, we demonstrated that high-accuracy detection of aphasia can be achieved in a time-efficient (5 minutes) and highly reliable manner (split-half reliability correlations between R=0.62 and R=0.96 across frequency bands). Our study shows that neural envelope tracking is an effective biomarker for post-stroke aphasia, with diagnostic potential for high-reliability, individual-level assessment. This work represents a significant step towards more automatic, objective, and ecologically valid assessments of language impairments in aphasia.
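The split-half reliability reported above can be computed as the correlation of per-subject tracking scores across two halves of the recording. The sketch below uses simulated scores (group size, noise level and the inclusion of the Spearman-Brown correction are illustrative assumptions, not details from the study):

```python
import numpy as np

def split_half_reliability(half1, half2):
    """Pearson correlation of per-subject scores across two recording halves,
    plus the Spearman-Brown projection to full-length reliability."""
    r = np.corrcoef(half1, half2)[0, 1]
    return r, 2 * r / (1 + r)

rng = np.random.default_rng(4)
n_subj = 48                                   # 27 IWA + 22 controls, roughly
true_tracking = rng.normal(1.0, 0.3, n_subj)  # stable per-subject trait
half1 = true_tracking + 0.1 * rng.normal(size=n_subj)  # measurement noise
half2 = true_tracking + 0.1 * rng.normal(size=n_subj)

r, r_sb = split_half_reliability(half1, half2)
print(f"split-half r = {r:.2f}, Spearman-Brown corrected = {r_sb:.2f}")
```

High split-half correlations mean the tracking measure reflects a stable subject-level property rather than recording noise, which is what makes short (5-minute) assessments feasible.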

Language: English

Cited by

12

Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field
Florine L. Bachmann, Joshua P. Kulasingham, Kasper Eskelund

et al.

Trends in Hearing, Journal year: 2024, Issue: 28

Published: Jan. 1, 2024

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, subcortical responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses in the sound-field, and assess the amount of data needed to estimate subcortical TRFs. Electroencephalography (EEG) was recorded from 24 normal-hearing participants while they listened to clicks and stories presented via loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for all participants to show clear wave V peaks in both sound-field and earphone conditions, highly consistent with each other and with click ABRs. However, the sound-field condition required slightly more data (16 minutes) to achieve clear peaks compared to the earphone condition (12 minutes), possibly due to effects of room acoustics. By investigating sound-field stimulation, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.

Language: English

Cited by

4

Algorithms for Estimating Time-Locked Neural Response Components in Cortical Processing of Continuous Speech
Joshua P. Kulasingham, Jonathan Z. Simon

IEEE Transactions on Biomedical Engineering, Journal year: 2022, Issue: 70(1), P. 88 - 96

Published: June 21, 2022

The Temporal Response Function (TRF) is a linear model of neural activity time-locked to continuous stimuli, including continuous speech. TRFs based on speech envelopes typically have distinct components that have provided remarkable insights into the cortical processing of speech. However, current methods may lead to less than reliable estimates of single-subject TRF components. Here, we compare two established methods of TRF component estimation, and also propose novel algorithms that utilize prior knowledge of these components, bypassing full TRF estimation.

Language: English

Cited by

17

A representation of abstract linguistic categories in the visual system underlies successful lipreading
Aaron Nidiffer, Cody Zhewei Cao, Aisling E. O’Sullivan

et al.

NeuroImage, Journal year: 2023, Issue: 282, P. 120391 - 120391

Published: Sep. 25, 2023

There is considerable debate over how visual speech is processed in the absence of sound and whether the neural activity supporting lipreading occurs in visual brain areas. Much of the ambiguity stems from a lack of behavioral grounding and from neurophysiological analyses that cannot disentangle high-level linguistic and phonetic contributions from low-level energetic contributions of visual speech. To address this, we recorded EEG from human observers as they watched silent videos, half of which were novel and half of which were previously rehearsed with the accompanying audio. We modeled the EEG responses as reflecting the processing of low-level visual features (motion, lip movements) and a higher-level categorical representation of linguistic units, known as visemes. The ability of these visemes to account for the EEG, beyond motion and lip movements, was significantly enhanced for rehearsed videos in a way that correlated with participants' trial-by-trial lipreading ability. Source localization of the viseme responses showed clear contributions from visual cortex, with no strong evidence for involvement of auditory areas. We interpret this as support for the idea that the visual system produces its own specialized representation of speech that is 1) well-described by categorical linguistic features, 2) dissociable from low-level motion and lip movements, and 3) predictive of lipreading ability. We also suggest a reinterpretation of previous findings of auditory cortical activation during silent speech that is consistent with hierarchical accounts of audiovisual speech perception.

Language: English

Cited by

10