A comparison of EEG encoding models using audiovisual stimuli and their unimodal counterparts
Maansi Desai, Alyssa M Field, Liberty S. Hamilton et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: Nov. 17, 2023

Abstract Communication in the real world is inherently multimodal. When having a conversation, typically sighted and hearing people use both auditory and visual cues to understand one another. For example, objects may make sounds as they move in space, or we may use the movement of a person's mouth to better understand what they are saying in a noisy environment. Still, many neuroscience experiments rely on unimodal stimuli (visual only or auditory only) to understand the encoding of sensory features in the brain. The extent to which visual information may influence the encoding of auditory information and vice versa in natural environments is thus unclear. Here, we addressed this question by recording scalp electroencephalography (EEG) in 11 subjects as they listened to and watched movie trailers in audiovisual (AV), visual (V) only, and audio (A) only conditions. We then fit linear encoding models that described the relationship between brain responses and acoustic, phonetic, and visual features in our stimuli. We also compared whether feature tuning was the same when stimuli were presented in the original AV format versus when the visual or auditory information was removed. We found that auditory feature tuning was similar in the AV and A-only conditions, and similarly, visual feature tuning was similar for stimuli with the audio present (AV) and with the audio removed (V only). In a cross prediction analysis, we investigated whether models trained on AV data predicted responses to A-only or V-only test data as well as models trained on the unimodal conditions. Overall, prediction performance using AV training sets was similar to using unimodal training sets, suggesting that visual information has a relatively smaller effect on auditory encoding in EEG. In contrast, using an AV training set to predict V-only test data was slightly worse than using matching V-only training sets. This suggests that auditory information has a stronger effect on visual encoding in EEG, though it makes no qualitative difference in the derived feature tuning. In effect, our results show that researchers may benefit from the richness of multimodal datasets, which can be used to answer more than one research question.
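Linear encoding models of this kind are commonly implemented as time-lagged ("mTRF"-style) ridge regressions from stimulus features to EEG. Below is a minimal, self-contained sketch of that general approach; the sampling rate, lag range, ridge penalty, and random data are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a time-lagged linear encoding model, assuming a 128 Hz
# sampling rate, 0-400 ms lags, and a fixed ridge penalty (all placeholders).
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

fs = 128                                  # EEG sampling rate in Hz (assumed)
lags = np.arange(0, int(0.4 * fs))        # 0-400 ms of stimulus history

def lagged(X, lags):
    """Stack time-lagged copies of a (time x features) matrix."""
    T, F = X.shape
    out = np.zeros((T, F * len(lags)))
    for i, lag in enumerate(lags):
        out[lag:, i * F:(i + 1) * F] = X[:T - lag]
    return out

rng = np.random.default_rng(0)
stim = rng.standard_normal((10 * fs, 3))  # e.g. envelope + two phonetic features
eeg = rng.standard_normal(10 * fs)        # one EEG channel (synthetic)

X = lagged(stim, lags)
split = 8 * fs                            # simple train/test split
model = Ridge(alpha=1e3).fit(X[:split], eeg[:split])
r, _ = pearsonr(eeg[split:], model.predict(X[split:]))
print(f"held-out prediction r = {r:.3f}")
```

On real data, the held-out correlation r is the model-performance measure that the cross-condition comparisons below are based on.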

Language: English

Detecting post-stroke aphasia using EEG-based neural envelope tracking of natural speech
Pieter De Clercq, Jill Kries, Ramtin Mehraram et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: March 17, 2023

Abstract After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. The standard behavioral tests used to diagnose aphasia are time-consuming, require subjective interpretation, and have low ecological validity. As a consequence, comorbid cognitive problems present in individuals with aphasia (IWA) can bias test results, generating a discrepancy between test outcomes and everyday-life language abilities. Neural tracking of the speech envelope is a promising tool for investigating brain responses to natural speech. The envelope is crucial for speech understanding, encompassing cues for detecting and segmenting linguistic units, e.g., phrases, words and phonemes. In this study, we aimed to test the potential of this neural tracking technique for detecting language impairments in IWA. We recorded EEG from 27 IWA in the chronic phase after stroke and 22 healthy controls while they listened to a 25-minute story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta, and gamma bands using mutual information analysis. Besides group differences in neural tracking measures, we also tested its suitability for detecting aphasia at the individual level using a Support Vector Machine (SVM) classifier. We further investigated the required recording length for the SVM to detect aphasia and to obtain reliable outcomes. IWA displayed decreased neural envelope tracking compared to controls in the broadband range and the theta band, which is in line with the assumed role of these frequencies in auditory processing of speech. Neural envelope tracking effectively captured aphasia at the individual level, with an accuracy of 84% and an area under the curve of 88%. Moreover, we demonstrated that high-accuracy detection of aphasia can be achieved in a time-efficient (5 minutes) and highly reliable manner (split-half reliability correlations between R=0.62 and R=0.96 across frequency bands). Our study shows that neural envelope tracking of natural speech is an effective biomarker for post-stroke aphasia, with diagnostic potential, high reliability, and individual-level assessment. This work represents a significant step towards more automatic, objective, and ecologically valid assessments of language impairments in aphasia.
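The two analysis stages described here (a mutual-information tracking measure per subject, then individual-level classification with an SVM) can be sketched as below. The synthetic data, the two-band feature layout, and the MI estimator (scikit-learn's nearest-neighbour estimator, which may differ from the paper's own) are assumptions for illustration.

```python
# Hedged sketch: (1) MI between envelope and EEG as a tracking measure,
# (2) leave-one-out SVM classification of IWA vs. controls on such features.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)

def tracking_mi(envelope, eeg_channel):
    """MI (in nats) between the speech envelope and one EEG channel."""
    return mutual_info_regression(eeg_channel.reshape(-1, 1), envelope,
                                  random_state=0)[0]

# example MI for one synthetic subject whose EEG partly follows the envelope
env = rng.standard_normal(2000)
print("example MI:", tracking_mi(env, env + rng.standard_normal(2000)))

# one tracking feature per subject per band (synthetic: two bands)
n_iwa, n_ctrl = 27, 22
features = rng.standard_normal((n_iwa + n_ctrl, 2))
features[:n_iwa] -= 0.8               # simulate decreased tracking in IWA
labels = np.r_[np.ones(n_iwa), np.zeros(n_ctrl)]

acc = cross_val_score(SVC(kernel="linear"), features, labels,
                      cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy = {acc:.2f}")
```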

Language: English

Citations: 12

Neural tracking of natural speech: an effective marker for post-stroke aphasia
Pieter De Clercq, Jill Kries, Ramtin Mehraram et al.

Brain Communications, Journal Year: 2025, Volume and Issue: 7(2)

Published: Jan. 1, 2025

Abstract After a stroke, approximately one-third of patients suffer from aphasia, a language disorder that impairs communication ability. Behavioural tests are the current standard to detect aphasia, but they are time-consuming, have limited ecological validity and require active patient cooperation. To address these limitations, we tested the potential of EEG-based neural envelope tracking of natural speech. The technique investigates the neural response to the temporal envelope of speech, which is critical for speech understanding by encompassing cues for detecting and segmenting linguistic units (e.g. phrases, words and phonemes). We recorded EEG from 26 individuals with aphasia in the chronic phase after stroke (>6 months post-stroke) and 22 healthy controls while they listened to a 25-min story. We quantified neural envelope tracking in a broadband frequency range as well as in the delta, theta, alpha, beta and gamma bands using mutual information analyses. Besides group differences in neural tracking measures, we also tested its suitability for detecting aphasia at the individual level using a support vector machine classifier. We further investigated the reliability of neural envelope tracking and the required recording length for accurate detection. Our results showed that individuals with aphasia had decreased envelope encoding compared to controls in the broadband and theta bands, which aligns with the assumed role of these frequencies in auditory processing of speech. Neural envelope tracking effectively captured aphasia at the individual level, with a classification accuracy of 83.33% and an area under the curve of 89.16%. Moreover, we demonstrated that high-accuracy detection of aphasia can be achieved in a time-efficient (5–7 min) and highly reliable manner (split-half correlations between R = 0.61 and R = 0.96 across frequency bands). In this study, we identified specific characteristics of impaired envelope tracking in aphasia, holding promise as a biomarker for the condition. Furthermore, we demonstrate its ability to discriminate aphasia from healthy controls at the individual level with high accuracy, in a time-efficient and reliable manner. These findings represent a significant advance towards more automated, objective and ecologically valid assessments of language impairments in aphasia.
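The split-half reliability quoted here amounts to computing the tracking measure on each half of the recording and correlating the two halves across participants. A minimal sketch of that computation, in which the stand-in tracking measure and the synthetic data are assumptions:

```python
# Split-half reliability of a per-subject tracking measure (illustrative).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects, n_samples = 48, 10_000
eeg = rng.standard_normal((n_subjects, n_samples))   # one channel per subject
envelope = rng.standard_normal(n_samples)

def measure(env, x):
    """Stand-in tracking measure: absolute envelope-EEG correlation."""
    return abs(np.corrcoef(env, x)[0, 1])

half = n_samples // 2
first = [measure(envelope[:half], s[:half]) for s in eeg]
second = [measure(envelope[half:], s[half:]) for s in eeg]
r, _ = pearsonr(first, second)
print(f"split-half reliability R = {r:.2f}")
```

On pure noise this R is near zero; on real recordings, stable between-subject differences in tracking are what drive the high reliabilities reported above.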

Language: English

Citations: 0

Exploring relevant Features for EEG-Based Investigation of Sound Perception in Naturalistic Soundscapes
Thorge Haupt, Marc Rosenkranz, Martin G. Bleichner et al.

Published: Jan. 29, 2024

A comprehensive analysis of everyday sound perception can be achieved using Electroencephalography (EEG) with the concurrent acquisition of information about the sound environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including the sound onset, envelope, and mel-spectrogram. Using continuous data analysis, we contrast these methods in terms of their predictive power for unseen data and their distinct contributions to explaining the neural data. We also evaluate the results considering the impact of the auditory context, here the density of sound events. For this, we analyse data from a complex audio-visual motor task with a naturalistic soundscape. The results demonstrated that model prediction is increased by more acoustically detailed features in conjunction with a description of the sound identity. Crucially, this outcome hinged on excluding periods devoid of sound onsets in the case of the discrete features. Furthermore, we showed that the event density was crucial when modelling sound onsets. This highlights the importance of a comprehensive description of the soundscape, including non-acoustic aspects, to fully understand the dynamics of sound perception in complex everyday situations. This approach can serve as a foundation for future studies aiming to investigate sound perception in natural settings.
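The three acoustic representations compared in this study (onset, envelope, mel-spectrogram) can be derived from an audio signal at a common frame rate roughly as sketched below, ready to be scored against EEG with a lagged regression as in the earlier sketch. The librosa parameters and the crude Hilbert envelope are illustrative assumptions.

```python
# Deriving onset, envelope, and mel-spectrogram features at a shared frame
# rate (here 100 Hz) from synthetic audio; parameters are placeholders.
import numpy as np
import librosa
from scipy.signal import hilbert

sr, hop = 16_000, 160        # 16 kHz audio analysed at a 100 Hz frame rate
audio = np.random.default_rng(3).standard_normal(10 * sr).astype(np.float32)

onset = librosa.onset.onset_strength(y=audio, sr=sr, hop_length=hop)
mel = librosa.feature.melspectrogram(y=audio, sr=sr, hop_length=hop,
                                     n_mels=32).T      # frames x mel bands
env = np.abs(hilbert(audio))[::hop]                    # crude broadband envelope

# align all representations to a common number of frames
n = min(len(onset), len(env), mel.shape[0])
feature_sets = {"onset": onset[:n, None],
                "envelope": env[:n, None],
                "mel": mel[:n]}
for name, feats in feature_sets.items():
    print(name, feats.shape)
```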

Language: English

Citations: 2

A comparison of EEG encoding models using audiovisual stimuli and their unimodal counterparts
Maansi Desai, Alyssa M Field, Liberty S. Hamilton et al.

PLoS Computational Biology, Journal Year: 2024, Volume and Issue: 20(9), P. e1012433 - e1012433

Published: Sept. 9, 2024

Communication in the real world is inherently multimodal. When having a conversation, typically sighted and hearing people use both auditory and visual cues to understand one another. For example, objects may make sounds as they move in space, or we may use the movement of a person's mouth to better understand what they are saying in a noisy environment. Still, many neuroscience experiments rely on unimodal stimuli to understand the encoding of sensory features in the brain. The extent to which visual information may influence the encoding of auditory information and vice versa in natural environments is thus unclear. Here, we addressed this question by recording scalp electroencephalography (EEG) in 11 subjects as they listened to and watched movie trailers in audiovisual (AV), visual (V) only, and audio (A) only conditions. We then fit linear encoding models that described the relationship between brain responses and acoustic, phonetic, and visual features in our stimuli. We also compared whether feature tuning was the same when stimuli were presented in the original AV format versus when the visual or auditory information was removed. In these stimuli, the visual and auditory information was relatively uncorrelated, and included spoken narration over a scene as well as animated or live-action characters talking with and without their face visible. For this stimulus, we found that auditory feature tuning was similar in the AV and A-only conditions, and similarly, visual feature tuning was similar for stimuli with the audio present (AV) and with the audio removed (V only). In a cross prediction analysis, we investigated whether models trained on AV data predicted responses to A-only or V-only test data similarly to models trained on unimodal data. Overall, prediction performance using AV training sets was similar to using unimodal training sets, suggesting that visual information has a smaller effect on auditory encoding in EEG. In contrast, using an AV training set to predict V-only test data was slightly worse than using matching V-only training and test sets. This suggests that auditory information has a stronger effect on visual encoding in EEG, though it makes no qualitative difference in the derived feature tuning. In effect, our results show that researchers may benefit from the richness of multimodal datasets, which can be used to answer more than one research question.
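The cross prediction analysis reduces to fitting an encoding model on one condition's data and scoring it on another condition's held-out data, against a matched within-condition baseline. A hedged sketch with placeholder data and parameters:

```python
# Cross-condition vs. matched within-condition prediction (illustrative data).
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(4)

def fake_condition(T=5000, F=8):
    """Placeholder (stimulus features, EEG) pair for one condition."""
    X = rng.standard_normal((T, F))
    w = rng.standard_normal(F)
    return X, X @ w + rng.standard_normal(T)

(X_av, y_av), (X_a, y_a) = fake_condition(), fake_condition()
split = 4000

def score(X_tr, y_tr, X_te, y_te):
    m = Ridge(alpha=10.0).fit(X_tr, y_tr)
    return pearsonr(y_te, m.predict(X_te))[0]

r_cross = score(X_av, y_av, X_a[split:], y_a[split:])                 # AV -> A test
r_match = score(X_a[:split], y_a[:split], X_a[split:], y_a[split:])   # A -> A test
print(f"cross-condition r = {r_cross:.2f}, matched r = {r_match:.2f}")
```

The gap between r_cross and r_match is the quantity the abstract interprets: a small gap means the extra modality in the training set changes the fitted model little.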

Language: English

Citations: 2

Cortical linear encoding and decoding of sounds: Similarities and differences between naturalistic speech and music listening
Adèle Simon, Søren Bech, Gérard Loquet et al.

European Journal of Neuroscience, Journal Year: 2024, Volume and Issue: 59(8), P. 2059 - 2074

Published: Feb. 1, 2024

Linear models are becoming increasingly popular to investigate brain activity in response to continuous and naturalistic stimuli. In the context of auditory perception, these predictive models can be 'encoding', when stimulus features are used to predict brain activity, or 'decoding', when neural features are used to reconstruct the audio stimuli. These linear models are a central component of some brain-computer interfaces that can be integrated into hearing assistive devices (e.g., hearing aids). Such advanced neurotechnologies have been widely investigated for listening to speech stimuli but rarely for music. Recent attempts at neural tracking of music show that reconstruction performances are reduced compared with speech decoding. The present study investigates the performance of electroencephalogram prediction (decoding and encoding models) based on cortical entrainment to temporal variations of the sound for both speech and music listening. Three hypotheses that may explain the differences between speech and music were tested to assess the importance of speech-specific acoustic and linguistic factors. While the results obtained suggest different underlying cortical processing for speech and music listening, no differences were found in terms of the prediction of the data. This suggests that envelope-based linear modelling can be applied to both speech and music listening, despite different underlying neural mechanisms.
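The 'decoding' direction mentioned here, reconstructing the audio envelope from multichannel EEG with a regularized backward model, can be sketched as follows; the data, channel count, and ridge penalty are placeholders.

```python
# Backward (decoding) model: multichannel EEG -> stimulus envelope.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
T, C = 8_000, 32                          # time samples, EEG channels
envelope = rng.standard_normal(T)
# synthetic EEG: the envelope leaks into every channel, plus noise
eeg = envelope[:, None] @ rng.standard_normal((1, C)) + rng.standard_normal((T, C))

split = 6_000
decoder = Ridge(alpha=1e2).fit(eeg[:split], envelope[:split])
r, _ = pearsonr(envelope[split:], decoder.predict(eeg[split:]))
print(f"envelope reconstruction r = {r:.2f}")
```

The reconstruction correlation r is the measure in which, per the abstract, music has been reported to score lower than speech.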

Language: English

Citations: 1

Neural tracking of the speech envelope predicts binaural unmasking
Benjamin Dieudonné, Lien Decruy, Jonas Vanthornhout et al.

European Journal of Neuroscience, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 9, 2024

Abstract Binaural unmasking is a remarkable phenomenon whereby it is substantially easier to detect a signal in noise when the interaural parameters of the signal are different from those of the noise – a useful mechanism in so‐called cocktail party scenarios. In this study, we investigated the effect of binaural unmasking on neural tracking of the speech envelope. We measured EEG in 8 participants who listened to speech in noise at a fixed signal‐to‐noise ratio, in two conditions: one where the speech and the noise had the same interaural phase difference (both having an opposite waveform across ears, SπNπ), and one where only the speech had an opposite waveform across ears (SπN). We found a clear benefit of binaural unmasking in behavioural speech understanding scores, accompanied by increased neural tracking of the speech envelope. Moreover, analysing the temporal response functions revealed that binaural unmasking also resulted in decreased peak latencies and increased peak amplitudes. Our results are consistent with previous research using auditory evoked potentials and steady‐state responses to quantify binaural unmasking at cortical levels. Moreover, they confirm that neural tracking of speech is associated with speech understanding, even if the acoustic signal‐to‐noise ratio is kept constant. From a clinical perspective, these results offer potential for the objective evaluation of binaural unmasking mechanisms, and the detection of pathologies sensitive to binaural processing, such as asymmetric hearing loss, auditory neuropathy and age‐related deficits.
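The temporal response function comparison reported here comes down to extracting a peak latency and amplitude from each condition's TRF. A minimal sketch on a synthetic TRF; the peak location, width, and sampling rate are invented for illustration:

```python
# Peak latency/amplitude extraction from a TRF (synthetic stand-in for a
# fitted TRF from the SpiNpi or SpiN condition).
import numpy as np

fs = 128                                     # TRF sampling rate (assumed)
lags_s = np.arange(int(0.5 * fs)) / fs       # lags from 0 to 500 ms
# synthetic TRF with a single peak near 180 ms
trf = np.exp(-((lags_s - 0.18) ** 2) / (2 * 0.03 ** 2))

peak = int(np.argmax(np.abs(trf)))
print(f"peak latency = {lags_s[peak] * 1e3:.0f} ms, "
      f"peak amplitude = {trf[peak]:.2f}")
```

Comparing these latencies and amplitudes across the SπNπ and SπN conditions is what revealed the unmasking effect described above.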

Language: English

Citations: 0
