Neural tracking of natural speech: an effective marker for post-stroke aphasia
Pieter De Clercq, Jill Kries, Ramtin Mehraram

et al.

Brain Communications, Journal Year: 2025, Volume and Issue: 7(2)

Published: Jan. 1, 2025

Abstract After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. Behavioural tests are the current standard to detect aphasia, but they are time-consuming, have limited ecological validity and require active patient cooperation. To address these limitations, we tested the potential of EEG-based neural envelope tracking of natural speech. The technique investigates the neural response to the temporal envelope of speech, which is critical for speech understanding because it encompasses cues for detecting and segmenting linguistic units (e.g. phrases, words and phonemes). We recorded EEG from 26 individuals with aphasia in the chronic phase after stroke (>6 months post-stroke) and 22 healthy controls while they listened to a 25-min story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta and gamma bands using mutual information analyses. Besides assessing group differences in neural tracking measures, we also tested its suitability at the individual level using a support vector machine classifier. We further investigated the reliability of neural envelope tracking and the recording length required for accurate detection. Our results showed that individuals with aphasia had decreased envelope encoding compared with healthy controls in the broadband and theta bands, which aligns with the assumed role of these frequencies in auditory speech processing. Neural tracking effectively captured aphasia at the individual level, with a classification accuracy of 83.33% and an area under the curve of 89.16%. Moreover, we demonstrated that high-accuracy detection can be achieved in a time-efficient (5–7 min) and highly reliable manner (split-half correlations between R = 0.61 and 0.96 across bands). In this study, we identified specific characteristics of impaired neural tracking in aphasia, holding promise as a biomarker for the condition. Furthermore, we demonstrate that neural tracking can discriminate individuals with aphasia from controls with high accuracy, in a time-efficient and reliable manner. These findings represent a significant advance towards more automated, objective and ecologically valid assessments of language impairments in aphasia.
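The individual-level detection step described above amounts to feeding per-subject tracking measures into a support vector machine. A minimal sketch of that pipeline, using synthetic stand-in features (the group sizes match the study, but the feature values and band structure here are invented for illustration):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-subject features: mean envelope-tracking (mutual
# information) scores in five EEG bands (delta, theta, alpha, beta, gamma).
n_aphasia, n_control, n_bands = 26, 22, 5
X_aphasia = rng.normal(0.10, 0.03, (n_aphasia, n_bands))  # reduced tracking
X_control = rng.normal(0.14, 0.03, (n_control, n_bands))
X = np.vstack([X_aphasia, X_control])
y = np.array([1] * n_aphasia + [0] * n_control)  # 1 = aphasia

# Standardize features, then fit a linear-kernel SVM with cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

With real mutual-information features the study reports 83.33% accuracy; the synthetic numbers here only demonstrate the classifier plumbing.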

Language: English

Predictors for estimating subcortical EEG responses to continuous speech
Joshua P. Kulasingham, Florine L. Bachmann, Kasper Eskelund

et al.

PLoS ONE, Journal Year: 2024, Volume and Issue: 19(2), P. e0297826 - e0297826

Published: Feb. 8, 2024

Perception of sounds and speech involves structures in the auditory brainstem that rapidly process ongoing stimuli. The role of these structures in speech processing can be investigated by measuring their electrical activity using scalp-mounted electrodes. However, typical analysis methods involve averaging neural responses to many short repetitive stimuli that bear little relevance to daily listening environments. Recently, subcortical responses to more ecologically relevant continuous speech were detected using linear encoding models. These models estimate the temporal response function (TRF), a regression model that minimises the error between the measured neural signal and a predictor derived from the stimulus. Using predictors that model the highly non-linear peripheral auditory system may improve TRF estimation accuracy and peak detection. Here, we compare both simple and complex models for estimating subcortical TRFs on electroencephalography (EEG) data from 24 participants listening to continuous speech. We also investigate the recording length required for estimating subcortical TRFs, and find that around 12 minutes is sufficient for clear wave V peaks (>3 dB SNR) to be seen in nearly all participants. Interestingly, simple filterbank-based predictors yield SNRs that are not significantly different from those estimated with a complex model of the auditory nerve, provided the nonlinear effects of adaptation are appropriately modelled. Crucially, computing the simpler predictors is more than 50 times faster compared to the complex model. This work paves the way for efficient modelling and detection of subcortical responses to continuous speech, which may lead to improved diagnosis metrics for hearing impairment and assistive hearing technology.
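The TRF described above is a time-lagged linear regression from a stimulus predictor to the neural signal. A minimal self-contained sketch (toy data, ridge regularization; the function names and the single-channel setup are simplifications, not the paper's implementation):

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, alpha=1.0):
    """Estimate a temporal response function by ridge regression of the
    neural signal onto time-lagged copies of the stimulus predictor."""
    n = len(stimulus)
    # Design matrix: column `lag` holds the predictor delayed by `lag` samples.
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[: n - lag]
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Toy example: a "neural" signal that follows the stimulus at a 5-sample delay.
rng = np.random.default_rng(1)
stim = rng.normal(size=2000)
eeg = np.roll(stim, 5) + 0.5 * rng.normal(size=2000)
trf = estimate_trf(stim, eeg, n_lags=20)
print(int(np.argmax(trf)))  # the TRF peaks at the true 5-sample latency
```

In the subcortical setting the predictor is not the raw waveform but the output of a peripheral auditory model (rectified audio, a filterbank, or an auditory-nerve simulation), which is exactly the comparison the paper makes.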

Language: English

Citations

21

Neural tracking as a diagnostic tool to assess the auditory pathway
Marlies Gillis, Jana Van Canneyt, Tom Francart

et al.

Hearing Research, Journal Year: 2022, Volume and Issue: 426, P. 108607 - 108607

Published: Sept. 14, 2022

Language: English

Citations

44

Relating EEG to continuous speech using deep neural networks: a review
Corentin Puffay, Bernd Accou, Lies Bollens

et al.

Journal of Neural Engineering, Journal Year: 2023, Volume and Issue: 20(4), P. 041003 - 041003

Published: July 13, 2023

Abstract Objective. When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal. The ability of linear models to find a mapping between these two signals is used as a measure of neural tracking of speech. Such models are limited, as they assume linearity in the EEG-speech relationship, which omits the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used. Approach. This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speaker paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark of model analysis. Main results. We gathered 29 studies. The main issues we found are biased cross-validations, data leakage leading to over-fitted models, and disproportionate data size compared to the model's complexity. In addition, we address requirements for a standard benchmark model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task. Significance. We present a review summarizing these studies while addressing important methodological considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning in EEG-based speech decoding.
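The match-mismatch task the review discusses asks a model to decide which of two candidate speech segments actually accompanied a given EEG segment. A minimal correlation-based baseline conveys the idea (synthetic signals; real evaluations use trained linear or deep models rather than raw correlation):

```python
import numpy as np

def match_mismatch(eeg_segment, speech_a, speech_b):
    """Baseline match-mismatch decision: pick the speech segment whose
    envelope correlates more strongly with the EEG segment."""
    r_a = np.corrcoef(eeg_segment, speech_a)[0, 1]
    r_b = np.corrcoef(eeg_segment, speech_b)[0, 1]
    return "a" if r_a > r_b else "b"

rng = np.random.default_rng(2)
speech_a = rng.normal(size=500)   # matched segment
speech_b = rng.normal(size=500)   # imposter segment
eeg = speech_a + 1.0 * rng.normal(size=500)  # EEG that tracks segment a
print(match_mismatch(eeg, speech_a, speech_b))  # → "a"
```

Accuracy over many such trials is the evaluation metric; the review's point is that leakage between train and test segments inflates this number unless cross-validation is done carefully.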

Language: English

Citations

42

Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience
Kristin Weineck, Olivia Xin Wen, Molly J. Henry

et al.

eLife, Journal Year: 2022, Volume and Issue: 11

Published: Sept. 12, 2022

Neural activity in the auditory system synchronizes to sound rhythms, and brain-environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound's amplitude envelope. We hypothesized that - especially for music - the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity. This study investigated (1) neural synchronization to different musical features, (2) tempo-dependence of neural synchronization, and (3) dependence of synchronization on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1-4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), spectral flux - as opposed to the amplitude envelope - evoked the strongest neural synchronization. Moreover, music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest neural response. Our results demonstrate the importance of spectro-temporal fluctuations in driving neural synchronization and highlight its sensitivity to tempo, familiarity, and beat salience.
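Spectral flux, the feature that drove the strongest synchronization above, measures frame-to-frame increases in the magnitude spectrum and therefore responds to note onsets and timbral change, not just loudness. A minimal sketch (hypothetical frame and hop sizes; onset-detection literature uses many variants of this computation):

```python
import numpy as np

def spectral_flux(signal, frame_len=256, hop=128):
    """Spectral flux: half-wave-rectified frame-to-frame increase in the
    magnitude spectrum, summed over frequency bins."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames) * np.hanning(frame_len), axis=1))
    diff = np.diff(mags, axis=0)
    # Only spectral increases count as flux (onsets, not offsets).
    return np.maximum(diff, 0).sum(axis=1)

# Toy example: a tone burst produces a flux peak at its onset.
fs = 8000
t = np.arange(fs) / fs
sig = np.where(t > 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
flux = spectral_flux(sig)
print(int(np.argmax(flux)))  # frame index near the 0.5 s onset
```

Unlike the amplitude envelope, this feature also spikes when pitch or timbre changes at constant loudness, which is one proposed reason it captures beat-related neural activity better.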

Language: English

Citations

41

Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis

et al.

eLife, Journal Year: 2023, Volume and Issue: 12

Published: Nov. 29, 2023

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, with continuous speech as a sample paradigm, on a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain responses associated with different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated jointly to model neural processing at multiple levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different levels of description through computational models with appropriate hypotheses.
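Question (1) above - does a predictor carry a significant neural representation beyond the others - is typically answered by comparing held-out prediction accuracy of mTRF models with and without that predictor. A self-contained sketch of that model-comparison logic with ridge regression on synthetic data (Eelbrain itself uses boosting and permutation statistics; everything below is a simplified stand-in):

```python
import numpy as np

def lagged(x, n_lags):
    """Time-lagged design matrix for one predictor."""
    X = np.zeros((len(x), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = x[: len(x) - lag]
    return X

def fit_predict(preds_train, y_train, preds_test, n_lags=10, alpha=1.0):
    """Fit a multivariate TRF (ridge) on training data; predict held-out EEG."""
    Xtr = np.hstack([lagged(p, n_lags) for p in preds_train])
    Xte = np.hstack([lagged(p, n_lags) for p in preds_test])
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(Xtr.shape[1]),
                        Xtr.T @ y_train)
    return Xte @ w

rng = np.random.default_rng(3)
acoustic = rng.normal(size=4000)
word_onsets = (rng.random(4000) < 0.02).astype(float)  # sparse onset predictor
eeg = np.roll(acoustic, 3) + 3.0 * np.roll(word_onsets, 6) + rng.normal(size=4000)

half = 2000
y_tr, y_te = eeg[:half], eeg[half:]
# Does adding the word-onset predictor improve held-out prediction?
pred_base = fit_predict([acoustic[:half]], y_tr, [acoustic[half:]])
pred_full = fit_predict([acoustic[:half], word_onsets[:half]], y_tr,
                        [acoustic[half:], word_onsets[half:]])
r_base = np.corrcoef(pred_base, y_te)[0, 1]
r_full = np.corrcoef(pred_full, y_te)[0, 1]
print(r_full > r_base)  # word onsets explain variance beyond acoustics
```

The per-predictor weight vectors inside `w` are the mTRFs themselves, answering question (2) about response latency and shape.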

Language: English

Citations

40

Subcortical responses to music and speech are alike while cortical responses diverge
Shan Tong, Madeline S. Cappelloni, Ross K. Maddox

et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Jan. 8, 2024

Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher-level cognition. Studies of the cortex have revealed distinct brain responses to music and speech, but differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the auditory brainstem response (ABR), a measure of subcortical encoding, from recordings made using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way acoustics are encoded subcortically. In the study's second part, we considered the cortex. Our new analysis method resulted in cortical responses to music and speech becoming more similar, but with remaining differences. The results taken together suggest that there is evidence for stimulus-class dependent processing of music and speech at the cortical but not the subcortical level.

Language: English

Citations

12

SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants
Bernd Accou, Lies Bollens, Marlies Gillis

et al.

Data, Journal Year: 2024, Volume and Issue: 9(8), P. 94 - 94

Published: July 26, 2024

Researchers investigating the neural mechanisms underlying speech perception often employ electroencephalography (EEG) to record brain activity while participants listen to spoken language. The high temporal resolution of EEG enables the study of neural responses to fast and dynamic speech signals. Previous studies have successfully extracted speech characteristics from EEG data and, conversely, predicted EEG activity from speech features. Machine learning techniques are generally employed to construct encoding and decoding models, which necessitate a substantial quantity of data. We present SparrKULee, a Speech-evoked Auditory Response Repository of EEG data measured at KU Leuven, comprising 64-channel EEG recordings from 85 young individuals with normal hearing, each of whom listened to 90–150 min of natural speech. This dataset is more extensive than any currently available in terms of both the number of participants and the amount of data per participant. It is suitable for training larger machine learning models. We evaluate the dataset using linear and state-of-the-art non-linear models in encoding/decoding and match/mismatch paradigms, providing benchmark scores for future research.

Language: English

Citations

10

Attention, musicality, and familiarity shape cortical speech tracking at the musical cocktail party
Jane A. Brown, Gavin M. Bidelman

Brain and Language, Journal Year: 2025, Volume and Issue: 266, P. 105581 - 105581

Published: April 25, 2025

Language: English

Citations

1

The effects of speech masking on neural tracking of acoustic and semantic features of natural speech

Sonia Yasmin, Vanessa C. Irsik, Ingrid S. Johnsrude

et al.

Neuropsychologia, Journal Year: 2023, Volume and Issue: 186, P. 108584 - 108584

Published: May 9, 2023

Language: English

Citations

17

Emergence of the cortical encoding of phonetic features in the first year of life
Giovanni M. Di Liberto, Adam Attaheri, Giorgia Cantisani

et al.

Nature Communications, Journal Year: 2023, Volume and Issue: 14(1)

Published: Dec. 1, 2023

Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed in behavioural investigations, likely rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the encoding of phonetic features in a longitudinal cohort of infants when aged 4, 7 and 11 months, as well as in adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in the encoding of the acoustic spectrogram.
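The phonetic-feature TRF analysis above starts from a predictor matrix with one binary column per articulatory feature, marked at phoneme onsets. A minimal sketch of how such a predictor is built (the feature inventory below is a tiny illustrative fragment, not the study's actual feature set):

```python
import numpy as np

# Hypothetical phoneme -> articulatory-feature mapping (illustrative only).
FEATURES = {
    "p": {"plosive", "bilabial"},
    "b": {"plosive", "bilabial", "voiced"},
    "s": {"fricative", "alveolar"},
    "a": {"vowel", "open", "voiced"},
}
FEATURE_NAMES = sorted({f for feats in FEATURES.values() for f in feats})

def phoneme_predictor(phonemes, onsets, n_samples):
    """One row per EEG sample, one column per phonetic feature; a 1 marks
    the onset sample of each phoneme carrying that feature."""
    X = np.zeros((n_samples, len(FEATURE_NAMES)))
    for ph, t in zip(phonemes, onsets):
        for f in FEATURES[ph]:
            X[t, FEATURE_NAMES.index(f)] = 1.0
    return X

# "b", "a", "s" starting at samples 10, 25 and 40 of a 100-sample window.
X = phoneme_predictor(["b", "a", "s"], [10, 25, 40], n_samples=100)
print(X.shape, int(X.sum()))
```

Regressing EEG onto such a matrix, over and above acoustic predictors, is what lets the study ask whether feature categories (rather than raw acoustics) are encoded.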

Language: English

Citations

17