EEG reveals brain network alterations in chronic aphasia during natural speech listening
Ramtin Mehraram, Jill Kries, Pieter De Clercq

et al.

Scientific Reports, Year: 2025, Volume: 15(1)

Published: Jan. 19, 2025

Aphasia is a common consequence of stroke which affects language processing. In search of an objective biomarker for aphasia, we used EEG to investigate how functional network patterns in the cortex are affected in persons with post-stroke chronic aphasia (PWA) compared with healthy controls (HC) while they listened to a story. EEG was recorded from 22 HC and 27 PWA who listened to a 25-min-long story. Functional connectivity between scalp regions was measured with the weighted phase lag index. The Network-Based Statistics toolbox was used to detect altered subnetworks between groups and correlations with behavioural tests within each group. Differences in network geometry were assessed by means of graph theory and a targeted node-attack approach. Group-classification accuracy was obtained with a support vector machine classifier. PWA showed a stronger inter-hemispheric subnetwork in the theta-band (4.5–7 Hz), whilst a weaker subnetwork emerged in the low-gamma band (30.5–49 Hz). Two subnetworks correlated with semantic fluency, in the delta- (1–4 Hz) and low-gamma-bands respectively. In the theta-band network, alterations emerged at both the local and the global level, whilst only local changes were found in the low-gamma-band network. Network metrics discriminated between groups with an AUC of 83%. Overall, we demonstrate the potential of EEG network analysis for the development of informative biomarkers to assess natural speech processing in aphasia. We hypothesize that the detected alterations reflect compensatory mechanisms associated with recovery.
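
The connectivity measure named here, the weighted phase lag index (wPLI), can be estimated from the analytic signals of band-passed channel pairs. Below is a minimal numpy/scipy sketch of that computation on toy data; the filter order, band edges, and variable names are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def wpli(x, y, fs, band):
    """Weighted phase lag index between two signals for a given frequency band.

    wPLI = |E[Im(Sxy)]| / E[|Im(Sxy)|], where Sxy is the cross-spectrum,
    estimated here from the analytic signals of band-passed data.
    """
    # Band-pass both channels (4th-order Butterworth, zero-phase).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    zx = hilbert(filtfilt(b, a, x))
    zy = hilbert(filtfilt(b, a, y))

    # Imaginary part of the cross-spectrum at each time point.
    im_cross = np.imag(zx * np.conj(zy))

    num = np.abs(np.mean(im_cross))
    den = np.mean(np.abs(im_cross)) + 1e-12  # avoid division by zero
    return num / den

# Toy example: two noisy channels sharing a phase-lagged 6 Hz (theta) component.
rng = np.random.default_rng(0)
fs = 128
t = np.arange(0, 30, 1 / fs)
common = np.sin(2 * np.pi * 6 * t)
ch1 = common + 0.5 * rng.standard_normal(t.size)
ch2 = np.roll(common, 5) + 0.5 * rng.standard_normal(t.size)

print("theta-band wPLI:", wpli(ch1, ch2, fs, band=(4.5, 7.0)))
```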

Language: English

Neural tracking as a diagnostic tool to assess the auditory pathway
Marlies Gillis, Jana Van Canneyt, Tom Francart

et al.

Hearing Research, Year: 2022, Volume: 426, p. 108607

Published: Sep. 14, 2022

Language: English

Cited by

44

Speech Understanding Oppositely Affects Acoustic and Linguistic Neural Tracking in a Speech Rate Manipulation Paradigm
Eline Verschueren, Marlies Gillis, Lien Decruy

et al.

Journal of Neuroscience, Year: 2022, Volume: 42(39), pp. 7442–7453

Published: Aug. 30, 2022

When listening to continuous speech, the human brain can track features of the presented speech signal. It has been shown that neural tracking of acoustic features is a prerequisite for speech understanding and can predict speech understanding in controlled circumstances. However, the brain also tracks linguistic features of speech, which may be more directly related to speech understanding. We investigated acoustic and linguistic speech processing as a function of varying speech understanding by manipulating the speech rate. In this paradigm, acoustic and linguistic speech processing are affected simultaneously but in opposite directions: as the speech rate increases, more acoustic and linguistic information per second is present. In contrast, speech understanding becomes more challenging as speech is less intelligible at higher rates. We measured the EEG of 18 participants (4 male) who listened to speech at various rates. As expected and confirmed by the behavioral results, speech understanding decreased with increasing speech rate. Accordingly, linguistic neural tracking decreased with increasing speech rate, whereas acoustic neural tracking increased. This indicates that linguistic representations capture the gradual effect of decreasing speech understanding. In addition, increased acoustic neural tracking does not necessarily imply better speech understanding. This suggests that, although linguistic tracking may be a more challenging measure because of its low signal-to-noise ratio, it is a more direct predictor of speech understanding. SIGNIFICANCE STATEMENT An increasingly popular method to investigate neural speech processing is measuring neural tracking. Although much research has been done on how the brain tracks acoustic speech features, linguistic speech features have received less attention. In this study, we disentangled acoustic and linguistic characteristics of neural speech tracking via a speech rate manipulation paradigm. A proper way of objectively measuring auditory and language processing paves the way toward clinical applications: an objective measure of speech understanding would allow a behavioral-free evaluation of speech understanding, which allows to evaluate hearing loss and adjust hearing aids based on brain responses. This would benefit populations from whom obtaining behavioral measures is complex, such as young children or people with cognitive impairments.
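
As an illustration of the two classes of speech features the abstract contrasts, here is a minimal sketch of how an acoustic predictor (broadband envelope) and a simple linguistic predictor (a word-onset impulse train) might be constructed at the EEG sampling rate. Sampling rates, the compression exponent, and the onset times are placeholders, not the study's settings.

```python
import numpy as np
from scipy.signal import hilbert, resample_poly

def acoustic_envelope(audio, fs_audio, fs_eeg):
    """Broadband amplitude envelope of the speech signal, downsampled to the EEG rate."""
    env = np.abs(hilbert(audio))
    env = env ** 0.6  # power-law compression; 0.6 is a commonly used exponent
    return resample_poly(env, fs_eeg, fs_audio)

def word_onset_train(onset_times_s, n_samples, fs_eeg):
    """Impulse train with a unit spike at each word onset (a simple linguistic predictor)."""
    train = np.zeros(n_samples)
    idx = np.round(np.asarray(onset_times_s) * fs_eeg).astype(int)
    train[idx[idx < n_samples]] = 1.0
    return train

# Toy usage with synthetic data (in practice: audio from the stimulus, onsets from a forced aligner).
fs_audio, fs_eeg = 16000, 64
audio = np.random.default_rng(1).standard_normal(fs_audio * 10)   # 10 s of fake audio
env = acoustic_envelope(audio, fs_audio, fs_eeg)
onsets = word_onset_train([0.3, 0.9, 1.4, 2.2], n_samples=env.size, fs_eeg=fs_eeg)
predictors = np.column_stack([env, onsets])   # stimulus matrix for a tracking model
print(predictors.shape)
```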

Language: English

Cited by

40

Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis

et al.

eLife, Year: 2023, Volume: 12

Published: Nov. 29, 2023

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the responses and uses those as predictor variables for the neural signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain responses associated with the different predictors by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated jointly to model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories through computational models with appropriate hypotheses.
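
Eelbrain itself estimates TRFs with boosting; as a self-contained illustration of the underlying time-lagged multivariate regression described in the abstract, here is a minimal numpy sketch using ridge regularization on toy data. The lag range, regularization strength, and data shapes are illustrative assumptions, not the toolkit's API.

```python
import numpy as np

def lagged_design(predictors, lags):
    """Build a time-lagged design matrix: one block of columns per lag."""
    n, p = predictors.shape
    X = np.zeros((n, p * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(predictors, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0          # zero-pad instead of wrapping
        elif lag < 0:
            shifted[lag:] = 0
        X[:, j * p:(j + 1) * p] = shifted
    return X

def fit_mtrf(predictors, eeg, fs, tmin=-0.1, tmax=0.5, alpha=1e3):
    """Estimate a multivariate TRF with ridge regression.

    Returns an array of shape (n_lags, n_predictors, n_channels) and the lag axis in seconds.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_design(predictors, lags)
    # Ridge solution: w = (X'X + aI)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return w.reshape(len(lags), predictors.shape[1], eeg.shape[1]), lags / fs

# Toy usage: 2 predictors (e.g., envelope and word onsets), 64-channel EEG at 64 Hz.
rng = np.random.default_rng(2)
fs, n = 64, 64 * 120
stim = rng.standard_normal((n, 2))
eeg = rng.standard_normal((n, 64))
trf, lag_times = fit_mtrf(stim, eeg, fs)
print(trf.shape)   # (n_lags, 2 predictors, 64 channels)
```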

Language: English

Cited by

40

Hearing loss is associated with delayed neural responses to continuous speech
Marlies Gillis, Lien Decruy, Jonas Vanthornhout

et al.

European Journal of Neuroscience, Year: 2022, Volume: 55(6), pp. 1671–1690

Published: March 1, 2022

We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared with their peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed. Our results support the idea that the latency of the neural response indicates the efficiency of neural speech processing: more or different brain regions are involved in processing the speech, which causes longer communication pathways in the brain. These longer pathways hamper information integration among these regions, which is reflected in longer processing times. Altogether, this suggests less efficient processing in hearing-impaired listeners, as more time is required to process the speech. Our results suggest that this reduction in efficiency occurs gradually as hearing deteriorates. From our results, it is apparent that sound amplification does not solve this problem. Even when listening to speech in silence at a comfortable loudness, hearing-impaired listeners process speech less efficiently.
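
A delayed neural response is typically read off as a shift in the latency of a TRF peak. Below is a minimal sketch of one way such a latency could be extracted from an estimated TRF; the analysis window and the synthetic TRF are illustrative, not the study's values.

```python
import numpy as np

def peak_latency_ms(trf, lag_times, window=(0.05, 0.25)):
    """Latency (ms) of the largest-magnitude TRF deflection within a lag window.

    trf:       1-D TRF for one predictor and one channel (or a channel average)
    lag_times: lag axis in seconds, same length as trf
    """
    mask = (lag_times >= window[0]) & (lag_times <= window[1])
    idx_in_window = np.argmax(np.abs(trf[mask]))
    return lag_times[mask][idx_in_window] * 1000.0

# Toy usage: a synthetic TRF peaking around 120 ms on a 64 Hz lag grid.
lag_times = np.arange(-0.1, 0.5, 1 / 64)
trf = np.exp(-((lag_times - 0.12) ** 2) / (2 * 0.02 ** 2))
print(peak_latency_ms(trf, lag_times))   # ~119 ms (limited by the 64 Hz lag grid)
```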

Language: English

Cited by

39

Neural tracking of linguistic and acoustic speech representations decreases with advancing age
Marlies Gillis, Jill Kries, Maaike Vandermosten

et al.

NeuroImage, Year: 2022, Volume: 267, p. 119841

Published: Dec. 28, 2022

Background: Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. Goals: Our goals are to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing. Moreover, we also analyze how acoustic processing changes with age. In particular, we focus on changes in the spatial and temporal activation patterns in response to natural speech across the lifespan. Methods: 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while the EEG signal was recorded. We investigated the effect of age on the acoustic and linguistic processing of speech. Because age correlated with hearing capacity and with measures of cognition, we investigated whether the observed age effects were mediated by these factors. Furthermore, we investigated whether there is an effect of age on hemisphere lateralization and on the spatiotemporal patterns of the neural responses. Results: Our results showed that linguistic speech processing declines with advancing age: as age increased, the latency of certain aspects of the responses increased. Also acoustic neural tracking (NT) decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses. No evidence was found for hemispheric lateralization, neither in younger nor in older adults, during natural speech processing. Most effects were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decreasing word-level tracking is partially due to cognition rather than being a robust effect of age. Conclusion: Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan. These changes may be traces of structural and/or functional change that occurs with advancing age.

Language: English

Cited by

29

Cortical speech tracking is related to individual prediction tendencies
Juliane Schubert, Fabian Schmidt, Quirin Gehmacher

et al.

Cerebral Cortex, Year: 2023, Volume: 33(11), pp. 6608–6619

Published: Jan. 9, 2023

Listening can be conceptualized as a process of active inference, in which the brain forms internal models to integrate auditory information in a complex interaction of bottom-up and top-down processes. We propose that individuals vary in their "prediction tendency" and that this variation contributes to experiential differences in everyday listening situations and shapes the cortical processing of acoustic input such as speech. Here, we presented tone sequences of varying entropy level to independently quantify auditory prediction tendency (as the tendency to anticipate low-level acoustic features) for each individual. This measure was then used to predict cortical speech tracking in a multi speaker listening task, where participants listened to audiobooks narrated by a target speaker in isolation or interfered by 1 or 2 distractors. Furthermore, semantic violations were introduced into the story to also examine the effects of word surprisal during speech processing. Our results show that cortical speech tracking is related to prediction tendency. In addition, we find interactions between prediction tendency and background noise as well as with semantic violations, in disparate brain regions. Our findings suggest that individual prediction tendencies are generalizable across different listening situations and may serve as a valuable element to explain interindividual differences in natural listening situations.
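
As a minimal sketch of how the entropy of a tone sequence and the surprisal of individual events could be quantified from transition probabilities, in the spirit of the entropy manipulation described above, here is a toy example; the probabilities are made up and are not the study's stimulus design.

```python
import numpy as np

def surprisal(p):
    """Surprisal in bits of an event with probability p: -log2(p)."""
    return -np.log2(p)

def sequence_entropy(transition_matrix):
    """Mean entropy (bits) of the next tone given the current tone, assuming tones
    are visited uniformly often (true for these doubly stochastic toy examples)."""
    T = np.asarray(transition_matrix, dtype=float)
    cond_entropy = -np.sum(T * np.log2(T + 1e-12), axis=1)   # entropy per current tone
    return float(cond_entropy.mean())

# Two illustrative regularity conditions over 4 tones:
ordered = np.full((4, 4), 0.25 / 3) + np.eye(4)[:, ::-1] * (0.75 - 0.25 / 3)  # predictable
random_ = np.full((4, 4), 0.25)                                               # unpredictable

print("ordered sequence entropy:", sequence_entropy(ordered))   # low (~1.2 bits)
print("random  sequence entropy:", sequence_entropy(random_))   # 2 bits
print("surprisal of a p=0.75 tone:", surprisal(0.75))
```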

Language: English

Cited by

18

Eelbrain: A Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis

et al.

bioRxiv (Cold Spring Harbor Laboratory), Year: 2021, Volume: unknown

Published: Aug. 3, 2021

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the responses and uses those as predictor variables for the neural signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain responses associated with the different predictors by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: 1) Is there a significant neural representation corresponding to this predictor variable? And if so, 2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated jointly to model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories through computational models with appropriate hypotheses.

Language: English

Cited by

41

Neural tracking of the fundamental frequency of the voice: The effect of voice characteristics
Jana Van Canneyt, Jan Wouters, Tom Francart

et al.

European Journal of Neuroscience, Year: 2021, Volume: 53(11), pp. 3640–3653

Published: April 17, 2021

Traditional electrophysiological methods to study temporal auditory processing of the fundamental frequency of the voice (f0) often use unnaturally repetitive stimuli. In this study, we investigated the processing of the f0 of meaningful continuous speech. EEG responses evoked by stories in quiet were analysed with a novel method based on linear modelling that characterizes the neural tracking of the f0. We studied both the strength and the spatio-temporal properties of the f0-tracking response. Moreover, different samples of speech (six stories by four speakers: two male and two female) were used to investigate the effect of voice characteristics on the response. The results indicated that the response strength is inversely related to the rate of f0 change throughout the story. As a result, male-narrated stories (low and steady f0) evoked stronger responses compared with female-narrated stories (high and variable f0), for which many responses were not significant. The analysis revealed that the response generators are not fixed in the brainstem but are voice-dependent as well. Voices with a high f0 evoked subcortically dominated responses with a latency between 7 and 12 ms. Voices with a low f0 evoked responses that are subcortically (latency 13–15 ms) as well as cortically (latency 23–26 ms) generated, with the right primary auditory cortex as the likely cortical source. Finally, additional experiments revealed an approach that greatly improves the response for voices with strong higher harmonics, which is particularly useful to boost small responses.
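
The f0-tracking analysis relies on a predictor that carries the fundamental frequency; one commonly used choice in this literature is the stimulus waveform band-passed around the f0 range. A minimal sketch with synthetic audio follows; the band edges, filter order, and sampling rate are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def f0_band_predictor(audio, fs, band=(75.0, 175.0)):
    """Band-pass the speech waveform around a typical male f0 range to obtain a
    fine-structure predictor for f0-tracking analyses (illustrative band edges)."""
    sos = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, audio)

# Toy usage with a synthetic "voiced" signal: 120 Hz fundamental plus harmonics and noise.
fs = 8000
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(3)
audio = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6))
audio += 0.3 * rng.standard_normal(t.size)
predictor = f0_band_predictor(audio, fs)
print(predictor.shape)
```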

Language: English

Cited by

38

Neural Markers of Speech Comprehension: Measuring EEG Tracking of Linguistic Speech Representations, Controlling the Speech Acoustics
Marlies Gillis, Jonas Vanthornhout, Jonathan Z. Simon

et al.

Journal of Neuroscience, Year: 2021, Volume: 41(50), pp. 10316–10329

Published: Nov. 3, 2021

When listening to speech, our brain responses time lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, the tracking of these representations is often described without controlling for the acoustic properties. Therefore, the response might reflect unaccounted acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to an audiobook in 29 participants (22 females). We examined whether these representations contribute unique information over and beyond the acoustics and each other. Indeed, not all representations were significantly tracked after controlling for the acoustics; phoneme surprisal, cohort entropy, and word frequency were. We tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are tracked similarly across different stories spoken by different readers. These results suggest that they characterize the processing of the linguistic content of speech. SIGNIFICANCE STATEMENT For clinical applications, it would be desirable to develop a neural marker of speech comprehension derived from responses to continuous speech. Such a measure would allow a behavior-free evaluation of speech understanding; this would open doors toward a better quantification of speech understanding in populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, and toward better targeted interventions and fitting of hearing devices.
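
A common way to test whether a linguistic representation carries unique information "over and beyond" the acoustics is to compare the cross-validated prediction accuracy of a combined model against an acoustics-only model. The sketch below illustrates that comparison on made-up data; the lag range, regularization, and two-fold split are illustrative choices, not the study's pipeline.

```python
import numpy as np
from numpy.linalg import solve

def lagged(x, lags):
    """Time-lagged copies of the predictor matrix x (one block of columns per lag)."""
    cols = []
    for lag in lags:
        shifted = np.roll(x, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        cols.append(shifted)
    return np.hstack(cols)

def prediction_accuracy(predictors, eeg, fs, alpha=1e3):
    """Cross-validated Pearson correlation between predicted and measured EEG
    (averaged over channels), using a forward ridge model with lags 0-400 ms."""
    lags = np.arange(0, int(0.4 * fs) + 1)
    X = lagged(predictors, lags)
    half = X.shape[0] // 2                      # simple two-fold split
    accs = []
    for train, test in [(slice(0, half), slice(half, None)),
                        (slice(half, None), slice(0, half))]:
        Xtr, Xte, ytr, yte = X[train], X[test], eeg[train], eeg[test]
        w = solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
        pred = Xte @ w
        r = [np.corrcoef(pred[:, c], yte[:, c])[0, 1] for c in range(eeg.shape[1])]
        accs.append(np.mean(r))
    return float(np.mean(accs))

# Made-up data: an envelope, a linguistic predictor, and EEG that contains both.
rng = np.random.default_rng(4)
fs, n, n_ch = 64, 64 * 180, 16
envelope = rng.standard_normal((n, 1))
linguistic = rng.standard_normal((n, 1))
eeg = np.roll(envelope, 8, axis=0) + 0.5 * np.roll(linguistic, 12, axis=0)
eeg = eeg + rng.standard_normal((n, n_ch))     # broadcast the signal to 16 noisy channels

acc_acoustic = prediction_accuracy(envelope, eeg, fs)
acc_combined = prediction_accuracy(np.hstack([envelope, linguistic]), eeg, fs)
print("unique linguistic contribution:", acc_combined - acc_acoustic)
```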

Language: English

Cited by

35

Masking of the mouth area impairs reconstruction of acoustic speech features and higher-level segmentational features in the presence of a distractor speaker
Chandra Leon Haider, Nina Suess, Anne Hauswald

et al.

NeuroImage, Year: 2022, Volume: 252, p. 119044

Published: Feb. 28, 2022

Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, when confronted with a degraded acoustic signal, congruent visual inputs promote comprehension. When this visual input is masked, speech comprehension consequently becomes more difficult. But it still remains inconclusive which levels of speech processing are affected under which circumstances by occluding the mouth area. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask, while we measured the brain activity of normal hearing participants via magnetoencephalography (MEG). We additionally added a distractor speaker in half of the trials in order to create an ecologically difficult listening situation. A decoding model on clear AV speech was trained and used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the envelope and spectral features (i.e. pitch and formant frequencies), while the reconstruction of higher level segmentational features (phoneme and word onsets) was especially impaired through masks in difficult multi-speaker situations. As the surgical face masks in our study only show mild effects on the speech acoustics, we interpret our findings as a result of the missing visual input. Our findings extend previous behavioural results, demonstrating the complex contextual effects of occluding relevant visual information on speech processing.
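
The decoding approach described above maps time-lagged neural data back onto a speech feature. Below is a minimal numpy sketch of such a backward model for the envelope, trained on a "clear" condition and evaluated on a "masked" condition with placeholder data; the lag range and regularization are illustrative, not the study's settings.

```python
import numpy as np

def _lagged(x, lags):
    """Time-lagged copies of x (one block of columns per lag)."""
    cols = []
    for lag in lags:
        shifted = np.roll(x, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        cols.append(shifted)
    return np.hstack(cols)

def train_backward_decoder(neural, envelope, fs, tmin=0.0, tmax=0.25, alpha=1e3):
    """Fit a linear decoder that reconstructs the speech envelope from time-lagged data.

    Neural samples from t to t+250 ms (negative lags in this convention) are used
    to reconstruct the envelope at time t."""
    lags = np.arange(-int(tmax * fs), -int(tmin * fs) + 1)
    X = _lagged(neural, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
    return w, lags

def reconstruction_accuracy(neural, envelope, w, lags):
    """Pearson correlation between the reconstructed and the actual envelope."""
    recon = _lagged(neural, lags) @ w
    return float(np.corrcoef(recon.ravel(), envelope.ravel())[0, 1])

# Placeholder data: train on a "clear" condition, evaluate per condition.
rng = np.random.default_rng(5)
fs, n, n_ch = 64, 64 * 120, 32
env_clear = rng.standard_normal((n, 1))
meg_clear = np.roll(env_clear, 6, axis=0) + rng.standard_normal((n, n_ch))
env_masked = rng.standard_normal((n, 1))
meg_masked = 0.5 * np.roll(env_masked, 6, axis=0) + rng.standard_normal((n, n_ch))

w, lags = train_backward_decoder(meg_clear, env_clear, fs)
print("clear :", reconstruction_accuracy(meg_clear, env_clear, w, lags))
print("masked:", reconstruction_accuracy(meg_masked, env_masked, w, lags))
```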

Language: English

Cited by

24