Neural Markers of Speech Comprehension: Measuring EEG Tracking of Linguistic Speech Representations, Controlling the Speech Acoustics
Marlies Gillis, Jonas Vanthornhout, Jonathan Z. Simon

et al.

Journal of Neuroscience, Journal Year: 2021, Volume and Issue: 41(50), P. 10316 - 10329

Published: Nov. 3, 2021

When listening to speech, our brain responses time lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the reported responses might reflect unaccounted acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to an audiobook in 29 participants (22 females). We examined whether these representations contribute unique information over and beyond the acoustics and each other. Indeed, not all representations were significantly tracked after controlling for acoustic properties; phoneme surprisal, cohort entropy, and word frequency were. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are similarly tracked across different stories spoken by different readers. These results suggest that these representations characterize the processing of the content of speech.

SIGNIFICANCE STATEMENT For clinical applications, it would be desirable to develop a marker of speech comprehension derived from responses to continuous speech. Such a measure would allow behavior-free evaluation of speech understanding; this would open doors toward better quantification of speech understanding in populations for whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, and toward targeted interventions and the fitting of hearing devices.
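
The model comparison described above can be sketched in a few lines: fit a TRF on one story, predict the EEG of another, and ask whether adding a linguistic predictor improves the prediction over an acoustics-only baseline. The sketch below uses plain least squares and simulated signals; all names and the simulated data are illustrative, not the paper's pipeline.

```python
import numpy as np
from numpy.linalg import lstsq

def lagged(x, n_lags):
    """Build a time-lagged design matrix from predictors x (time x features)."""
    n_t, n_f = x.shape
    X = np.concatenate([np.roll(x, lag, axis=0) for lag in range(n_lags)], axis=1)
    X[:n_lags] = 0  # zero out samples that wrapped around
    return X

def prediction_accuracy(x_train, y_train, x_test, y_test, n_lags=26):
    """Fit a TRF by least squares on one story, correlate predictions on another."""
    w, *_ = lstsq(lagged(x_train, n_lags), y_train, rcond=None)
    return np.corrcoef(lagged(x_test, n_lags) @ w, y_test)[0, 1]

# Hypothetical predictors: acoustic envelope alone vs. envelope + word surprisal.
rng = np.random.default_rng(0)
n = 5000
envelope = rng.standard_normal((n, 1))
surprisal = rng.standard_normal((n, 1))
eeg = (np.roll(envelope[:, 0], 5) + 0.3 * np.roll(surprisal[:, 0], 10)
       + rng.standard_normal(n))

half = n // 2  # the two halves stand in for two different stories
both = np.hstack([envelope, surprisal])
r_acoustic = prediction_accuracy(envelope[:half], eeg[:half], envelope[half:], eeg[half:])
r_full = prediction_accuracy(both[:half], eeg[:half], both[half:], eeg[half:])
print(f"acoustic-only r = {r_acoustic:.3f}, acoustic+linguistic r = {r_full:.3f}")
# A reliably higher r for the full model indicates tracking of the linguistic
# representation over and beyond the acoustics.
```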

Language: English

Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience
Kristin Weineck, Olivia Xin Wen, Molly J. Henry

et al.

eLife, Journal Year: 2022, Volume and Issue: 11

Published: Sept. 12, 2022

Neural activity in the auditory system synchronizes to sound rhythms, and brain-environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound's amplitude envelope. We hypothesized that - especially for music - the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity. This study investigated (1) neural synchronization to different musical features, (2) the tempo-dependence of neural synchronization, and (3) the dependence of synchronization on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1-4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), spectral flux - as opposed to the amplitude envelope - evoked the strongest neural synchronization. Moreover, music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest neural response. Our results demonstrate the importance of spectro-temporal fluctuations for driving neural synchronization and highlight its sensitivity to tempo, familiarity, and beat salience.
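
Spectral flux summarizes frame-to-frame change across the whole spectrum, which the amplitude envelope collapses away. Below is a minimal sketch of both features, assuming scipy and illustrative parameter choices; this is not the authors' exact feature extraction.

```python
import numpy as np
from scipy.signal import stft, hilbert

def amplitude_envelope(audio):
    """Broadband amplitude envelope via the analytic signal magnitude."""
    return np.abs(hilbert(audio))

def spectral_flux(audio, fs, nperseg=1024):
    """Half-wave-rectified frame-to-frame change in the magnitude spectrum."""
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)                              # freq x frames
    diff = np.diff(mag, axis=1)                  # change between adjacent frames
    return np.sum(np.maximum(diff, 0), axis=0)   # keep spectral increases only

# Toy usage: a click train produces sharp flux peaks at each onset.
fs = 16000
audio = np.zeros(fs * 2)
audio[::4000] = 1.0
env = amplitude_envelope(audio)
flux = spectral_flux(audio, fs)
```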

Language: English

Citations

41

Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis

et al.

eLife, Journal Year: 2023, Volume and Issue: 12

Published: Nov. 29, 2023

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, with continuous speech as a sample paradigm, on a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, with the addition of a time dimension. TRF analysis decomposes the brain signal into responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on the brain response as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this variable? And if so, (2) what are its temporal characteristics? Thus, predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories to brain responses through computational models with appropriate hypotheses.
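
Eelbrain itself estimates mTRFs with a boosting algorithm on its NDVar containers; the sketch below illustrates only the underlying idea of time-lagged multiple regression, using a closed-form ridge solution and simulated data. All names and values are illustrative.

```python
import numpy as np

def estimate_mtrf(predictors, eeg, n_lags, alpha=1.0):
    """Ridge-regularized mTRF: eeg[t] is modeled as the sum over features f
    and lags k of w[k, f] * predictors[t - k, f]."""
    n_t, n_f = predictors.shape
    X = np.zeros((n_t, n_f * n_lags))
    for k in range(n_lags):                      # causal lags only
        X[k:, k * n_f:(k + 1) * n_f] = predictors[:n_t - k]
    # closed-form ridge solution
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return w.reshape(n_lags, n_f)                # lags x features: one TRF each

# Hypothetical two-predictor model (envelope + word onsets) sampled at 100 Hz:
rng = np.random.default_rng(1)
preds = rng.standard_normal((6000, 2))
eeg = (np.roll(preds[:, 0], 8) - 0.5 * np.roll(preds[:, 1], 15)
       + rng.standard_normal(6000))
mtrf = estimate_mtrf(preds, eeg, n_lags=40)      # 0-390 ms of lags at 100 Hz
# mtrf[:, 0] and mtrf[:, 1] are the time courses answering question (2) for
# each predictor; peaks near lags 8 and 15 recover the simulated responses.
```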

Language: English

Citations

40

Joint, distributed and hierarchically organized encoding of linguistic features in the human auditory cortex
Menoua Keshishian, Serdar Akkol, Jose L. Herrero

et al.

Nature Human Behaviour, Journal Year: 2023, Volume and Issue: 7(5), P. 740 - 753

Published: March 2, 2023

Language: English

Citations

26

Vowel and formant representation in the human auditory speech cortex
Yulia Oganian, Ilina Bhaya-Grossman, Keith Johnson

et al.

Neuron, Journal Year: 2023, Volume and Issue: 111(13), P. 2105 - 2118.e4

Published: April 26, 2023

Language: English

Citations

23

Rapid computations of spectrotemporal prediction error support perception of degraded speech
Ediz Sohoglu, Matthew H. Davis

eLife, Journal Year: 2020, Volume and Issue: 9

Published: Nov. 4, 2020

Human speech perception can be described as Bayesian perceptual inference, but how are these computations instantiated neurally? We used magnetoencephalographic recordings of brain responses to degraded spoken words and experimentally manipulated signal quality and prior knowledge. We first demonstrate that spectrotemporal modulations in speech are more strongly represented in neural responses than alternative representations (e.g. spectrogram or articulatory features). Critically, we found an interaction between signal quality and expectations from prior written text on the neural representations; increased signal quality enhanced the representation of speech that mismatched with prior expectations, but led to greater suppression of speech that matched prior expectations. This interaction is a unique signature of prediction error computations and is apparent within 100 ms of speech input. Our findings contribute to the detailed specification of a computational model of speech perception based on predictive coding frameworks.
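
The reported interaction can be illustrated with a toy prediction-error computation: model the representation as sensory input minus prior prediction, so clearer speech increases the error for mismatching priors but decreases it for matching priors. The numbers below are purely illustrative, not the authors' model.

```python
import numpy as np

def prediction_error(heard, expected, quality):
    sensory = quality * heard     # clearer speech -> stronger sensory evidence
    return sensory - expected     # error units encode the residual

heard = np.array([1.0, 0.0])      # spectrotemporal features of the spoken word
match = np.array([0.8, 0.0])      # prior text predicts what is heard
mismatch = np.array([0.0, 0.8])   # prior text predicts a different word

for quality in (0.3, 0.9):
    e_match = np.linalg.norm(prediction_error(heard, match, quality))
    e_mismatch = np.linalg.norm(prediction_error(heard, mismatch, quality))
    print(f"quality={quality}: |error| match={e_match:.2f}, mismatch={e_mismatch:.2f}")
# Raising quality from 0.3 to 0.9 shrinks the error for matching priors
# (suppression) while growing it for mismatching priors (enhancement):
# the interaction signature described above.
```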

Language: English

Citations

56

Neurophysiological Indices of Audiovisual Speech Processing Reveal a Hierarchy of Multisensory Integration Effects
Aisling E. O’Sullivan, Michael J. Crosse, Giovanni M. Di Liberto

et al.

Journal of Neuroscience, Journal Year: 2021, Volume and Issue: 41(23), P. 4991 - 5003

Published: April 6, 2021

Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of the face provides temporal cues to auditory cortex, and articulatory information from the mouth can aid in recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight on these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio-only, and visual-only speech in quiet and in noise. We represented our stimuli in terms of their spectrograms and phonetic features and then quantified the strength of the EEG encoding of those features using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was shown to be more robust for AV speech than what would have been expected from the summation of the audio-only and visual-only responses, suggesting that multisensory integration occurs at both stages of speech processing. We also found evidence to suggest that these integration effects may change with listening conditions; however, this was an exploratory analysis, and future work will be required to examine the effect with a within-subject design. These findings demonstrate that audiovisual integration occurs at multiple stages along the speech processing hierarchy.

SIGNIFICANCE STATEMENT During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and to vary flexibly depending on the listening conditions. Here, we examine audiovisual (AV) integration at two stages of speech processing, using the speech spectrogram and a phonetic representation, and test how integration adapts to degraded listening conditions. We find significant integration at both stages regardless of listening conditions and reveal distinct neural indices of AV interactions at the different stages, providing support for a multistage integration framework.
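
CCA finds paired linear combinations of stimulus features and EEG channels that correlate maximally, and the resulting canonical correlation serves as an encoding-strength index. Below is a minimal sketch with scikit-learn and simulated data; in practice the stimulus features are time-lagged first, and everything here is illustrative rather than the authors' pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def encoding_strength(stimulus_features, eeg, n_components=1):
    """Correlation of the leading canonical pair between stimulus and EEG."""
    cca = CCA(n_components=n_components)
    s_scores, e_scores = cca.fit_transform(stimulus_features, eeg)
    return np.corrcoef(s_scores[:, 0], e_scores[:, 0])[0, 1]

# Hypothetical data: 2000 samples, 16 spectrogram bands, 32 EEG channels.
rng = np.random.default_rng(2)
spec = rng.standard_normal((2000, 16))
eeg = spec @ rng.standard_normal((16, 32)) * 0.2 + rng.standard_normal((2000, 32))
r_av = encoding_strength(spec, eeg)
print(f"canonical r = {r_av:.3f}")
# Comparing r for AV responses against the sum of the audio-only and
# visual-only responses is the additive test for multisensory integration
# described above.
```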

Language: English

Citations

42

Degrees of algorithmic equivalence between the brain and its DNN models
Philippe G. Schyns, Lukas Snoek, Christoph Daube

et al.

Trends in Cognitive Sciences, Journal Year: 2022, Volume and Issue: 26(12), P. 1090 - 1102

Published: Oct. 7, 2022

Language: English

Citations

31

Neural tracking of linguistic and acoustic speech representations decreases with advancing age
Marlies Gillis, Jill Kries, Maaike Vandermosten

et al.

NeuroImage, Journal Year: 2022, Volume and Issue: 267, P. 119841 - 119841

Published: Dec. 28, 2022

Background: Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking.

Goals: Our goals are to analyze the unique contribution of linguistic speech processing across the adult lifespan, using natural speech, while controlling for the influence of acoustic processing. Moreover, we also studied acoustic processing across age. In particular, we focus on changes in spatial and temporal activation patterns in response to natural speech across the lifespan.

Methods: 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while their EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and with measures of cognition, we investigated whether the observed age effects were mediated by these factors. Furthermore, we investigated whether there is an effect of age on hemisphere lateralization and on the spatiotemporal patterns of the neural responses.

Results: Our results showed that linguistic speech processing declines with advancing age. Moreover, as age increased, the latency of certain aspects of linguistic processing increased. Acoustic neural tracking (NT) also decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization in either younger or older adults during natural speech processing. Most of the observed effects were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in word-level linguistic neural tracking is partially due to cognition rather than a robust effect of age.

Conclusion: Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan. These changes may be traces of structural and/or functional change that occurs with advancing age.
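
One crude way to probe the kind of mediation question raised here is to test whether the age coefficient on neural tracking shrinks once cognition enters the regression. A sketch with statsmodels and simulated data follows; it is illustrative only, not the paper's statistical pipeline.

```python
import numpy as np
import statsmodels.api as sm

# Simulated cohort in which cognition declines with age and tracking is
# driven by cognition, so the age effect should be mediated.
rng = np.random.default_rng(3)
n = 52
age = rng.uniform(17, 82, n)
cognition = -0.02 * age + rng.standard_normal(n) * 0.5
tracking = 0.5 * cognition + rng.standard_normal(n) * 0.3

m_age = sm.OLS(tracking, sm.add_constant(age)).fit()
m_both = sm.OLS(tracking, sm.add_constant(np.column_stack([age, cognition]))).fit()
print("age beta alone:", m_age.params[1])
print("age beta with cognition:", m_both.params[1])
# A much smaller age coefficient in the second model is consistent with the
# age effect being (partially) mediated by cognition.
```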

Language: English

Citations

29

A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension
Filiz Tezcan, Hugo Weissbart, Andrea E. Martin

et al.

eLife, Journal Year: 2023, Volume and Issue: 12

Published: July 7, 2023

When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking.

Language: English

Citations

19

Emergence of the cortical encoding of phonetic features in the first year of life
Giovanni M. Di Liberto, Adam Attaheri, Giorgia Cantisani

et al.

Nature Communications, Journal Year: 2023, Volume and Issue: 14(1)

Published: Dec. 1, 2023

Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed in behavioural investigations, likely rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the encoding of phonetic features in a longitudinal cohort of infants when aged 4, 7 and 11 months, as well as in adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.
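
A phonetic-feature TRF analysis needs a predictor matrix that marks, at each phoneme onset, which features are active. A minimal sketch follows, assuming phoneme onset annotations and a toy feature table; both are illustrative, not the study's full phonological inventory.

```python
import numpy as np

# Tiny illustrative feature table (rows per phoneme, columns per feature).
FEATURES = ["voiced", "nasal", "fricative"]
PHONEME_FEATURES = {
    "b": [1, 0, 0],
    "m": [1, 1, 0],
    "s": [0, 0, 1],
}

def phonetic_predictor(annotations, duration_s, fs=100):
    """Impulse at each phoneme onset, one column per phonetic feature."""
    x = np.zeros((int(duration_s * fs), len(FEATURES)))
    for onset, phoneme in annotations:
        x[int(onset * fs)] += PHONEME_FEATURES[phoneme]
    return x

x = phonetic_predictor([(0.10, "m"), (0.35, "s"), (0.62, "b")], duration_s=1.0)
# x can then enter an mTRF model alongside acoustic spectrogram predictors;
# an acoustically invariant phonetic response is one that survives
# controlling for the spectrogram.
```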

Language: English

Citations

17