Detecting Post-Stroke Aphasia Via Brain Responses to Speech in a Deep Learning Framework
Pieter De Clercq, Corentin Puffay, Jill Kries

et al.

Published: July 15, 2024

Aphasia, a language disorder primarily caused by stroke, is traditionally diagnosed using behavioral tests. However, these tests are time-consuming, require manual interpretation by trained clinicians, suffer from low ecological validity, and diagnosis can be biased by the comorbid motor and cognitive problems present in aphasia. In this study, we introduce an automated screening tool for speech processing impairments in aphasia that relies on time-locked brain responses to speech, known as neural tracking, within a deep learning framework. We modeled electroencephalography (EEG) responses to acoustic, segmentation and linguistic representations of a story with convolutional networks in a large sample of healthy participants, serving as a model of intact neural tracking of speech. Subsequently, we evaluated our models on an independent sample comprising 26 individuals with aphasia (IWA) and 22 healthy controls. Our results reveal decreased neural tracking across all representations in IWA. Utilizing a support vector machine classifier with the neural tracking measures as input, we demonstrate high-accuracy detection of aphasia at the individual level (85.42%) in a time-efficient manner (requiring 9 minutes of EEG data). Given its robustness, time efficiency and generalizability to unseen data, our approach holds significant promise for clinical applications.
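
As a rough illustration of the classification step described in the abstract (not the authors' actual pipeline), the sketch below trains a support vector machine on a synthetic matrix of per-participant neural tracking measures and reports cross-validated detection accuracy. The feature values, their number, and the group means are invented for the example.

```python
# Hypothetical sketch: individual-level aphasia detection from neural tracking
# measures with a support vector machine. The feature matrix is synthetic; in
# practice each row would hold one participant's tracking scores (e.g. per
# speech representation or frequency band).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_controls, n_patients, n_features = 22, 26, 6

# Synthetic tracking scores: patients get slightly lower values on average.
X_controls = rng.normal(loc=0.10, scale=0.03, size=(n_controls, n_features))
X_patients = rng.normal(loc=0.07, scale=0.03, size=(n_patients, n_features))
X = np.vstack([X_controls, X_patients])
y = np.array([0] * n_controls + [1] * n_patients)  # 1 = individual with aphasia

# Standardize features and classify with a linear-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```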

Language: English

Neural tracking of linguistic and acoustic speech representations decreases with advancing age
Marlies Gillis, Jill Kries, Maaike Vandermosten

et al.

NeuroImage, Journal Year: 2022, Volume and Issue: 267, P. 119841 - 119841

Published: Dec. 28, 2022

Background: Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. Goals: Our goals were to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing. Moreover, we also studied acoustic processing across age. In particular, we focus on changes in spatial and temporal activation patterns in response to natural speech across the lifespan. Methods: 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while the EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and with measures of cognition, we investigated whether the observed age effect was mediated by these factors. Furthermore, we investigated whether there was an effect of age on hemisphere lateralization and on the spatiotemporal patterns of the neural responses. Results: Our results showed that linguistic speech processing declines with advancing age, and as age increased, the latency of certain aspects of linguistic processing increased. Also acoustic neural tracking (NT) decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization, neither in younger nor in older adults, during natural speech processing. Most effects were not explained by an age-related decline in hearing or cognition. However, our results suggest that the decreasing word-level linguistic tracking is partially due to cognition rather than being a robust effect of age. Conclusion: Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan. These changes may be traces of structural and/or functional change that occurs with advancing age.
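
The "unique contribution" analysis mentioned above is commonly implemented by comparing the prediction accuracy of a full forward model against a reduced, acoustic-only model. The sketch below illustrates that idea with ridge regression on synthetic signals; the lag window, sampling rate, and regularization value are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch: unique contribution of a "linguistic" feature, estimated as
# the gain in held-out prediction accuracy of a full forward model
# (acoustic + linguistic lags) over an acoustic-only model. Synthetic data.
import numpy as np
from sklearn.linear_model import Ridge

def lagged(x, n_lags):
    """Stack time-lagged copies of a 1-D feature into a design matrix."""
    X = np.zeros((len(x), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = x[: len(x) - lag]
    return X

fs = 64                        # sampling rate (Hz), illustrative
n_lags = int(0.4 * fs)         # lags spanning roughly 0-400 ms
rng = np.random.default_rng(1)

n = 10 * 60 * fs               # 10 minutes of synthetic data
acoustic = rng.normal(size=n)          # e.g. speech envelope
linguistic = rng.normal(size=n)        # e.g. word-level feature
eeg = (np.convolve(acoustic, rng.normal(size=n_lags), mode="full")[:n]
       + 0.5 * np.convolve(linguistic, rng.normal(size=n_lags), mode="full")[:n]
       + rng.normal(scale=5.0, size=n))  # one synthetic EEG channel

X_full = np.hstack([lagged(acoustic, n_lags), lagged(linguistic, n_lags)])
X_acoustic = lagged(acoustic, n_lags)

split = n // 2                 # simple hold-out split for the sketch
def prediction_r(X):
    model = Ridge(alpha=1e3).fit(X[:split], eeg[:split])
    return np.corrcoef(model.predict(X[split:]), eeg[split:])[0, 1]

r_full, r_acoustic = prediction_r(X_full), prediction_r(X_acoustic)
print(f"full model r = {r_full:.3f}, acoustic-only r = {r_acoustic:.3f}")
print(f"unique linguistic contribution ~ {r_full - r_acoustic:.3f}")
```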

Language: English

Citations

29

Exploring Relevant Features for EEG-Based Investigation of Sound Perception in Naturalistic Soundscapes
Thorge Haupt, Marc Rosenkranz, Martin G. Bleichner

et al.

eNeuro, Journal Year: 2025, Volume and Issue: unknown, P. ENEURO.0287-24.2024

Published: Jan. 3, 2025

A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within naturalistic environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including sound onset, the envelope, and the mel-spectrogram. Using continuous data analysis, we contrast the features in terms of their predictive power for unseen data and thus their distinct contributions to explaining the neural data. For this, we analyse data from a complex audio-visual motor task with a naturalistic soundscape. The results demonstrated that the feature sets that explain the most variability were a combination of a highly detailed acoustic description with specific sound onsets. Furthermore, they showed that established features can be applied to naturalistic soundscapes. Crucially, the outcome hinged on excluding periods devoid of sound onsets in the case of discrete features. This highlights the importance of comprehensively describing the soundscape, including nonacoustic aspects, to fully understand the dynamics of sound perception in complex situations. This approach can serve as a foundation for future studies aiming at natural settings. Significance Statement: This study is an important step in our broader endeavor, which is to understand sound perception in everyday life. Although conducted in a stationary setting, it provides foundational insights into the environmental information necessary to model neural responses. We delved into various features, from acoustic representations to sound labeling, with the goal of refining models related to sound perception. Our findings particularly highlight the need for thorough feature considerations across contexts, from laboratory settings to mobile EEG technologies, and pave the way for investigations in more natural settings, advancing the field of auditory neuroscience.
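
For readers unfamiliar with the acoustic representations named in the abstract, the sketch below shows how two of them, the broadband envelope and a discrete onset measure, might be derived from an audio signal with SciPy before being entered into a continuous encoding analysis. The cut-off frequency, filter order, and downsampling rate are arbitrary choices for illustration, not the authors' parameters.

```python
# Illustrative extraction of an envelope and an onset feature from audio.
# The mel-spectrogram and the sound-identity / cognitive-state features used
# in the study are omitted here.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, resample_poly

fs_audio = 16_000
rng = np.random.default_rng(2)
audio = rng.normal(size=fs_audio * 5)        # 5 s of noise standing in for sound

# Broadband amplitude envelope: magnitude of the analytic signal, low-passed.
envelope = np.abs(hilbert(audio))
b, a = butter(4, 20 / (fs_audio / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# Onset representation: half-wave rectified first derivative of the envelope.
onsets = np.maximum(np.diff(envelope, prepend=envelope[0]), 0.0)

# Downsample both to a typical EEG analysis rate before model fitting.
fs_eeg = 64
envelope_ds = resample_poly(envelope, fs_eeg, fs_audio)
onsets_ds = resample_poly(onsets, fs_eeg, fs_audio)
print(envelope_ds.shape, onsets_ds.shape)    # (320,) (320,)
```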

Language: English

Citations

0

Heard or Understood? Neural Tracking of Language Features in a Comprehensible Story, an Incomprehensible Story and a Word List
Marlies Gillis, Jonas Vanthornhout, Tom Francart

et al.

eNeuro, Journal Year: 2023, Volume and Issue: 10(7), P. ENEURO.0075-23.2023

Published: July 1, 2023

Speech comprehension is a complex neural process that relies on the activation and integration of multiple brain regions. In the current study, we evaluated whether speech comprehension can be investigated by neural tracking. Neural tracking is the phenomenon in which brain responses time-lock to the rhythm of specific features in continuous speech. These features can be acoustic, i.e., acoustic tracking, or derived from the content of the speech using language properties, i.e., language tracking. We evaluated whether neural tracking differs between a comprehensible story, an incomprehensible story and a word list in 19 participants (six men). No significant difference regarding acoustic tracking was found. However, language tracking was only found for the comprehensible story. The most prominent effect was visible for word surprisal, a language feature at the word level. The neural response to word surprisal showed a negativity between 300 and 400 ms, similar to the N400 in evoked-response paradigms. This negativity was significantly more negative when the story was comprehended, i.e., when words could be integrated in the context of the previous words. These results show that language tracking can capture speech comprehension.
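
Word surprisal, the feature driving the N400-like effect reported above, is defined as the negative log probability of a word given its context. The toy example below computes it from an add-alpha smoothed bigram model; a real analysis would use a language model estimated on a large corpus, so the corpus and model here are purely illustrative.

```python
# Toy sketch of word-level surprisal: surprisal(w) = -log2 P(w | context).
import math
from collections import Counter

corpus = "the cat sat on the mat and the cat slept".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus[:-1], corpus[1:]))

def surprisal(prev, word, alpha=0.1):
    """Surprisal in bits under an add-alpha smoothed bigram model."""
    vocab_size = len(unigrams)
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
    return -math.log2(p)

# An expected word carries low surprisal, an unexpected one high surprisal.
for prev, word in [("the", "cat"), ("the", "dog")]:
    print(f"surprisal({word!r} | {prev!r}) = {surprisal(prev, word):.2f} bits")
```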

Language: English

Citations

10

A listening advantage for native speech is reflected by attention-related activity in auditory cortex
Meng Liang, Johannes Gerwien, Alexander Gutschalk

et al.

Communications Biology, Journal Year: 2025, Volume and Issue: 8(1)

Published: Feb. 5, 2025

Language: English

Citations

0

Neural tracking of natural speech: an effective marker for post-stroke aphasia
Pieter De Clercq, Jill Kries, Ramtin Mehraram

et al.

Brain Communications, Journal Year: 2025, Volume and Issue: 7(2)

Published: Jan. 1, 2025

After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. Behavioural tests are the current standard to detect aphasia, but they are time-consuming, have limited ecological validity and require active patient cooperation. To address these limitations, we tested the potential of EEG-based neural envelope tracking of natural speech. The technique investigates the neural response to the temporal envelope of speech, which is critical for speech understanding as it encompasses cues for detecting and segmenting linguistic units (e.g. phrases, words and phonemes). We recorded EEG from 26 individuals with aphasia in the chronic phase after stroke (>6 months post-stroke) and 22 healthy controls while they listened to a 25-min story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta and gamma bands using mutual information analyses. Besides group differences in neural tracking measures, we also tested its suitability for detecting aphasia at the individual level with a support vector machine classifier. We further investigated the reliability of the measures and the recording length required for accurate detection. Our results showed that individuals with aphasia had decreased envelope encoding compared to controls in the broadband range and the theta band, which aligns with the assumed role of these frequencies in auditory speech processing. Neural envelope tracking effectively captured aphasia at the individual level, with a classification accuracy of 83.33% and an area under the curve of 89.16%. Moreover, we demonstrated that high-accuracy detection can be achieved in a time-efficient (5–7 min) and highly reliable manner (split-half correlations between R = 0.61 and 0.96 across bands). In this study, we identified specific characteristics of impaired neural envelope tracking in aphasia, holding promise as a biomarker for the condition. Furthermore, we demonstrated that the technique can discriminate aphasia from healthy controls with high accuracy, in a time-efficient and reliable manner. These findings represent a significant advance towards more automated, objective and ecologically valid assessments of language impairments in aphasia.
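
The mutual information analysis referred to above can be approximated, in its simplest form, by estimating MI from a joint histogram of the band-limited speech envelope and EEG. The sketch below does this on synthetic signals; the estimator, bin count, and delta-band filter settings are assumptions for illustration, not the study's exact method.

```python
# Rough sketch: band-limited neural envelope tracking quantified with a
# histogram-based mutual information estimate on synthetic data.
import numpy as np
from scipy.signal import butter, filtfilt

def mutual_information(x, y, bins=16):
    """Mutual information (bits) from a joint histogram of two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

fs = 128
rng = np.random.default_rng(3)
n = fs * 60                                   # one minute of synthetic data
envelope = rng.normal(size=n)
eeg = 0.3 * envelope + rng.normal(size=n)     # EEG weakly tracking the envelope

# Restrict both signals to the delta band (0.5-4 Hz) before computing MI.
b, a = butter(4, [0.5 / (fs / 2), 4 / (fs / 2)], btype="band")
mi = mutual_information(filtfilt(b, a, envelope), filtfilt(b, a, eeg))
print(f"delta-band mutual information ~ {mi:.3f} bits")
```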

Language: English

Citations

0

Selective attention and sensitivity to auditory disturbances in a virtually-real Classroom: Comparison of adults with and without AD(H)D
Orel Levy, Shirley Libman Hackmon, Yair Zvilichovsky

et al.

Published: April 8, 2025

Many people, and particularly individuals with Attention Deficit (Hyperactivity) Disorder (AD(H)D), find it difficult to maintain attention during classroom learning. However, traditional paradigms used to evaluate attention do not capture the complexity and dynamic nature of real-life classrooms. Using a novel Virtual Reality platform, coupled with measurement of neural activity, eye-gaze and skin conductance, here we studied the neurophysiological manifestations of distractibility under realistic classroom learning conditions. Individuals with AD(H)D exhibited higher neural responses to irrelevant sounds and reduced speech tracking of the teacher, relative to controls. Additional measures, such as the power of alpha-oscillations and the frequency of gaze-shifts away from the teacher, contributed to explaining variance in self-reported AD(H)D symptoms across the sample. These ecologically-valid findings provide critical insight into the mechanisms underlying individual differences in the capacity for sustained attention and the proneness to distraction and mind-wandering experienced in real-life situations.

Language: English

Citations

0

Selective attention and sensitivity to auditory disturbances in a virtually real classroom
Orel Levy, Shirley Libman Hackmon, Yair Zvilichovsky

et al.

eLife, Journal Year: 2025, Volume and Issue: 13

Published: May 12, 2025

Many people, and particularly individuals with attention deficit (hyperactivity) disorder (AD(H)D), find it difficult to maintain attention during classroom learning. However, traditional paradigms used to evaluate attention do not capture the complexity and dynamic nature of real-life classrooms. Using a novel virtual reality platform, coupled with measurement of neural activity, eye-gaze, and skin conductance, here we studied the neurophysiological manifestations of distractibility under realistic classroom learning conditions. Individuals with AD(H)D exhibited higher neural responses to irrelevant sounds and reduced speech tracking of the teacher, relative to controls. Additional measures, such as the power of alpha-oscillations and the frequency of gaze-shifts away from the teacher, contributed to explaining variance in self-reported AD(H)D symptoms across the sample. These ecologically valid findings provide critical insight into the mechanisms underlying individual differences in the capacity for sustained attention and the proneness to distraction and mind-wandering experienced in real-life situations.
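
The phrase "contributed to explaining variance in self-reported symptoms" corresponds to an incremental regression analysis: how much R^2 improves when a neurophysiological measure is added to the model. The sketch below illustrates that logic with synthetic data and placeholder predictor names, not the study's actual variables.

```python
# Hedged illustration of incremental R^2 for a neurophysiological predictor.
# All data are synthetic and the variable names are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 48
speech_tracking = rng.normal(size=n)
gaze_shift_rate = rng.normal(size=n)
symptoms = -0.5 * speech_tracking + 0.4 * gaze_shift_rate + rng.normal(size=n)

# Baseline model with one predictor vs. a model that adds the gaze measure.
X_base = speech_tracking[:, None]
X_full = np.column_stack([speech_tracking, gaze_shift_rate])
r2_base = LinearRegression().fit(X_base, symptoms).score(X_base, symptoms)
r2_full = LinearRegression().fit(X_full, symptoms).score(X_full, symptoms)
print(f"R^2 baseline = {r2_base:.2f}, with gaze measure = {r2_full:.2f}, "
      f"incremental = {r2_full - r2_base:.2f}")
```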

Language: English

Citations

0

Dataset size considerations for robust acoustic and phonetic speech encoding models in EEG
Maansi Desai, Alyssa M Field, Liberty S. Hamilton

et al.

Frontiers in Human Neuroscience, Journal Year: 2023, Volume and Issue: 16

Published: Jan. 20, 2023

In many experiments that investigate auditory and speech processing in the brain using electroencephalography (EEG), the experimental paradigm is often lengthy and tedious. Typically, the experimenter errs on the side of including more data and more trials, and therefore conducting a longer task, to ensure that the data are robust and that effects are measurable. Recent studies have used naturalistic stimuli to investigate the brain's response to individual or a combination of multiple speech features using system identification techniques, such as multivariate temporal receptive field (mTRF) analyses. The neural data collected from such experiments must be divided into a training set and a test set to fit and validate the mTRF weights. While a good strategy is clearly to collect as much data as is feasible, it is unclear how much data are needed to achieve stable results. Furthermore, it is unclear whether the specific stimulus used for fitting and the choice of feature representation affect how much data would be required for generalizable results. Here, we used previously collected EEG data from our lab using sentence and movie stimuli, as well as an open-source audiobook dataset, to better understand how much data are needed to measure acoustic and phonetic tuning. We found that the receptive field structure tested here stabilizes after collecting approximately 200 s of TIMIT sentences, around 600 s of movie trailers, and 460 s of audiobook data. Thus, we provide suggestions on the minimum amount of data necessary for fitting mTRFs from naturalistic listening data. Our findings are motivated by highly practical concerns when working with children, patient populations, or others who may not tolerate long study sessions. These findings will aid future researchers who wish to study auditory and speech processing in healthy and clinical populations while minimizing participant fatigue and retaining signal quality.
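
The data-quantity question posed above can be probed by fitting a temporal response function on progressively larger training sets and tracking held-out prediction accuracy until it plateaus. The sketch below does this with ridge regression on synthetic signals; the sampling rate, lag window, and regularization strength are illustrative assumptions, not the authors' analysis settings.

```python
# Sketch: held-out prediction accuracy of a ridge-regularized TRF as a
# function of training-set duration, on synthetic stimulus/EEG data.
import numpy as np
from sklearn.linear_model import Ridge

def lagged(x, n_lags):
    """Stack time-lagged copies of a 1-D stimulus into a design matrix."""
    X = np.zeros((len(x), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = x[: len(x) - lag]
    return X

fs, n_lags = 64, int(0.3 * fs)
rng = np.random.default_rng(5)
n_total, n_test = 800 * fs, 120 * fs
stim = rng.normal(size=n_total)
kernel = rng.normal(size=n_lags)
eeg = (np.convolve(stim, kernel, mode="full")[:n_total]
       + rng.normal(scale=3.0, size=n_total))

X, y = lagged(stim, n_lags), eeg
X_test, y_test = X[-n_test:], y[-n_test:]      # fixed held-out segment

for seconds in (100, 200, 400, 600):
    n_train = seconds * fs
    model = Ridge(alpha=1e2).fit(X[:n_train], y[:n_train])
    r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
    print(f"{seconds:4d} s of training data -> held-out r = {r:.3f}")
```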

Language: English

Citations

7

Exploring relevant Features for EEG-Based Investigation of Sound Perception in Naturalistic Soundscapes
Thorge Haupt, Marc Rosenkranz, Martin G. Bleichner

et al.

Published: Jan. 29, 2024

A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within naturalistic environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including sound onset, the envelope, and the mel-spectrogram. Using continuous data analysis, we contrast these methods in terms of their predictive power for unseen data and their distinct contributions to explaining the neural data. We also evaluate the results considering the impact of context, here the density of sound events. For this, we analyse data from a complex audio-visual motor task with a naturalistic soundscape. The results demonstrated that model prediction is increased by more acoustically detailed representations in conjunction with a description of sound identity. Crucially, the outcome hinged on excluding periods devoid of sound onsets in the case of discrete features. Furthermore, they showed that the density of sound events was crucial when considering sound onsets. This highlights the importance of comprehensively describing the soundscape, including non-acoustic aspects, to fully understand the dynamics of sound perception in complex situations. This approach can serve as a foundation for future studies aiming at natural settings.

Language: English

Citations

2

Cortical linear encoding and decoding of sounds: Similarities and differences between naturalistic speech and music listening
Adèle Simon, Søren Bech, Gérard Loquet

et al.

European Journal of Neuroscience, Journal Year: 2024, Volume and Issue: 59(8), P. 2059 - 2074

Published: Feb. 1, 2024

Linear models are becoming increasingly popular to investigate brain activity in response to continuous and naturalistic stimuli. In the context of auditory perception, these predictive models can be 'encoding', when stimulus features are used to reconstruct brain activity, or 'decoding', when neural activity is used to reconstruct the audio stimuli. These linear models are a central component of some brain-computer interfaces that can be integrated into hearing assistive devices (e.g., hearing aids). Such advanced neurotechnologies have been widely investigated when listening to speech stimuli but rarely when listening to music. Recent attempts at neural tracking of music show that reconstruction performances are reduced compared with speech decoding. The present study investigates the performance of stimulus reconstruction and electroencephalogram prediction (decoding and encoding models) based on cortical entrainment to temporal variations of the audio for both speech and music listening. Three hypotheses that may explain the differences between speech and music were tested to assess the importance of speech-specific acoustic and linguistic factors. While the results obtained suggest different underlying processing for speech and music listening, no differences were found in terms of model performance on the data. Envelope-based linear modelling can thus be used for both speech and music listening, despite different underlying mechanisms.
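
To make the encoding/decoding distinction concrete, the sketch below fits a forward (encoding) model that predicts one EEG channel from lagged envelope values and a backward (decoding) model that reconstructs the envelope from several lagged EEG channels, both with ridge regression on synthetic data. Channel count, lags, and noise levels are arbitrary assumptions, not the study's settings.

```python
# Minimal encoding vs. decoding sketch on synthetic envelope/EEG data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
fs, n, n_channels, n_lags = 64, 64 * 300, 8, 16

envelope = rng.normal(size=n)
mixing = rng.normal(size=n_channels)
eeg = mixing[:, None] * envelope + rng.normal(scale=2.0, size=(n_channels, n))

def lagged(sig, n_lags):
    """Lagged design matrix for a 1-D signal or a (channels x time) array."""
    sig2d = np.atleast_2d(sig)
    X = np.zeros((sig2d.shape[1], n_lags * sig2d.shape[0]))
    for ch, row in enumerate(sig2d):
        for lag in range(n_lags):
            X[lag:, ch * n_lags + lag] = row[: len(row) - lag]
    return X

split = n // 2

# Encoding: lagged envelope -> one EEG channel.
Xe = lagged(envelope, n_lags)
enc = Ridge(alpha=1e2).fit(Xe[:split], eeg[0, :split])
r_enc = np.corrcoef(enc.predict(Xe[split:]), eeg[0, split:])[0, 1]

# Decoding: lagged multichannel EEG -> envelope.
Xd = lagged(eeg, n_lags)
dec = Ridge(alpha=1e2).fit(Xd[:split], envelope[:split])
r_dec = np.corrcoef(dec.predict(Xd[split:]), envelope[split:])[0, 1]

print(f"encoding r = {r_enc:.3f}, decoding r = {r_dec:.3f}")
```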

Language: English

Citations

1