Auditory Word Comprehension Is Less Incremental in Isolated Words
Phoebe Gaston, Christian Brodbeck, Colin Phillips, et al.

Neurobiology of Language, Journal Year: 2022, Volume and Issue: 4(1), P. 29 - 52

Published: Oct. 4, 2022

Partial speech input is often understood to trigger rapid and automatic activation of successively higher-level representations of words, from sound to meaning. Here we show evidence from magnetoencephalography that this type of incremental processing is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified word recognition process than is often assumed. We present evidence from isolated words that neural effects of phoneme probability, quantified by phoneme surprisal, are significantly stronger than the (statistically null) effects of phoneme-by-phoneme lexical uncertainty, quantified by cohort entropy. In contrast, we find robust effects of both cohort entropy and phoneme surprisal during perception of connected speech, with a significant interaction between the two contexts. This dissociation rules out models of word recognition in which the two measures are common indicators of a uniform process, even though these closely related information-theoretic measures both arise from the probability distribution of wordforms consistent with the input. We propose that phoneme surprisal effects reflect access to a lower level of representation of the auditory input (e.g., wordforms), while cohort entropy effects are task sensitive, driven by a competition process or a higher-level representation that is engaged late (or not at all) during the processing of single words.
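Both measures contrasted in this abstract derive from the same probability distribution over wordforms consistent with the partial phoneme input. A minimal sketch of how they are computed, over a hypothetical toy lexicon (the wordforms and probabilities are illustrative assumptions, not data from the study):

```python
import math

# Hypothetical toy lexicon: wordforms (as phoneme strings) with prior probabilities.
LEXICON = {
    "kat": 0.4,  # "cat"
    "kap": 0.3,  # "cap"
    "kot": 0.2,  # "cot"
    "dog": 0.1,
}

def cohort(prefix):
    """Wordforms consistent with the phoneme prefix, with their prior probabilities."""
    return {w: p for w, p in LEXICON.items() if w.startswith(prefix)}

def phoneme_surprisal(prefix, phoneme):
    """-log2 P(phoneme | prefix): probability mass retained by the new cohort."""
    before = sum(cohort(prefix).values())
    after = sum(cohort(prefix + phoneme).values())
    return -math.log2(after / before)

def cohort_entropy(prefix):
    """Shannon entropy (bits) of the normalized distribution over the cohort."""
    c = cohort(prefix)
    total = sum(c.values())
    return -sum((p / total) * math.log2(p / total) for p in c.values())
```

For example, after hearing "k", the phoneme "a" eliminates "kot", so its surprisal is -log2(0.7/0.9); the remaining cohort {"kat", "kap"} then has entropy just under one bit. The dissociation reported above is between neural correlates of these two quantities, not between how they are computed.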

Language: English

A hierarchy of linguistic predictions during natural language comprehension
Micha Heilbron, Kristijan Armeni, Jan‐Mathijs Schoffelen, et al.

Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(32)

Published: Aug. 3, 2022

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and the representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
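The contextual predictions here are quantified as word surprisal, -log2 P(word | context), with GPT-2 supplying the conditional probabilities in the paper. A toy sketch of the same computation, with a hypothetical hand-written bigram table standing in for the language model:

```python
import math

# Toy stand-in for a language model's next-word distribution (GPT-2 in the paper):
# P(next_word | context), here conditioned only on the previous word for brevity.
BIGRAM = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def word_surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(BIGRAM[prev][word])

def sentence_surprisals(words):
    """Per-word surprisal over a sentence, updating the context as we go."""
    context = "<s>"
    out = []
    for w in words:
        out.append(word_surprisal(context, w))
        context = w
    return out
```

With a real model, each word's probability would be conditioned on the full preceding context rather than a single previous word; the surprisal values then serve as regressors against the brain recordings.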

Language: English

Citations: 229

Parallel processing in speech perception with local and global representations of linguistic context
Christian Brodbeck, Shohini Bhattasali, Aura AL Cruz Heredia, et al.

eLife, Journal Year: 2022, Volume and Issue: 11

Published: Jan. 21, 2022

Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses: one based on sublexical phoneme sequences, and one based on the phonemes of the current word alone; at the same time, early responses also reflect a unified model that incorporates sentence-level constraints. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits local and unified predictive models in parallel, reconciling previously disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.

Language: English

Citations: 67

Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions
Christian Brodbeck, Proloy Das, Marlies Gillis, et al.

eLife, Journal Year: 2023, Volume and Issue: 12

Published: Nov. 29, 2023

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to the brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, with the addition of a time dimension. TRF analysis decomposes the brain signal into responses associated with the different predictors by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this variable? And if so, (2) what are the temporal characteristics of that response? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories through computational models with appropriate hypotheses.
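The mTRF estimation described in this abstract is a multiple regression with an added time-lag dimension. Below is a minimal, dependency-free sketch of the idea for a single predictor; Eelbrain itself provides boosting and regularized solvers, so the plain least-squares solver, signal, and lag count here are illustrative assumptions only:

```python
def estimate_trf(x, y, n_lags):
    """Least-squares TRF: fit w so that y[t] ~ sum_k w[k] * x[t - k] (no regularization)."""
    n = len(x)
    # Time-lagged design matrix: row t is [x[t], x[t-1], ..., x[t-n_lags+1]], zero-padded.
    rows = [[x[t - k] if t - k >= 0 else 0.0 for k in range(n_lags)]
            for t in range(n)]
    # Normal equations A w = b, with A = X^T X and b = X^T y.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n_lags)]
         for i in range(n_lags)]
    b = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(n_lags)]
    # Gaussian elimination with partial pivoting.
    for col in range(n_lags):
        piv = max(range(col, n_lags), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n_lags):
            f = A[r][col] / A[col][col]
            for c in range(col, n_lags):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    w = [0.0] * n_lags
    for r in range(n_lags - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n_lags))) / A[r][r]
    return w
```

Given a noiseless response generated by convolving a predictor with a known kernel, this recovers the kernel exactly; with real EEG, multiple predictors and regularization (as in the toolkit) are needed.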

Language: English

Citations: 40

Speech Understanding Oppositely Affects Acoustic and Linguistic Neural Tracking in a Speech Rate Manipulation Paradigm
Eline Verschueren, Marlies Gillis, Lien Decruy, et al.

Journal of Neuroscience, Journal Year: 2022, Volume and Issue: 42(39), P. 7442 - 7453

Published: Aug. 30, 2022

When listening to continuous speech, the human brain can track features of the presented speech signal. It has been shown that neural tracking of acoustic features is a prerequisite for speech understanding and can predict speech understanding in controlled circumstances. However, the brain also tracks linguistic features of speech, which may be more directly related to speech understanding. We investigated acoustic and linguistic speech processing as a function of varying speech understanding by manipulating the speech rate. In this paradigm, acoustic and linguistic speech processing are affected simultaneously but in opposite directions: as the speech rate increases, more acoustic information per second is present. In contrast, tracking linguistic information becomes more challenging as speech becomes less intelligible at higher rates. We measured the EEG of 18 participants (4 male) who listened to speech at various rates. As expected and confirmed by the behavioral results, speech understanding decreased with increasing speech rate. Accordingly, linguistic neural tracking decreased with increasing rate, while acoustic neural tracking increased. This indicates that neural tracking of linguistic representations can capture the gradual effect of decreasing speech understanding. In addition, increased acoustic neural tracking does not necessarily imply better speech understanding. This suggests that, although more challenging to measure because of the low signal-to-noise ratio, linguistic neural tracking may be a more direct predictor of speech understanding. SIGNIFICANCE STATEMENT An increasingly popular method to investigate neural speech processing is to measure neural tracking. Although much research has been done on how the brain tracks acoustic speech features, linguistic speech features have received less attention. In this study, we disentangled acoustic and linguistic characteristics of neural speech tracking by manipulating the speech rate. A proper way of objectively measuring auditory and language processing paves the way toward clinical applications: an objective measure of speech understanding would allow behavioral-free evaluation of speech understanding, which would allow clinicians to evaluate hearing loss and to adjust hearing aids based on brain responses. Such a measure would benefit populations from whom obtaining behavioral measures is complex, such as young children or people with cognitive impairments.

Language: English

Citations: 39

Decoding of the speech envelope from EEG using the VLAAI deep neural network
Bernd Accou, Jonas Vanthornhout, Hugo Van hamme, et al.

Scientific Reports, Journal Year: 2023, Volume and Issue: 13(1)

Published: Jan. 16, 2023

To investigate the processing of speech in the brain, simple linear models are commonly used to establish a relationship between brain signals and speech features. However, these linear models are ill-equipped to model a highly dynamic, complex non-linear system like the brain, and they often require a substantial amount of subject-specific training data. This work introduces a novel decoder architecture: the Very Large Augmented Auditory Inference (VLAAI) network. The VLAAI network outperformed state-of-the-art subject-independent models (median Pearson correlation of 0.19, p < 0.001), yielding a 52% increase over a well-established linear model. Using ablation techniques, we identified the relative importance of each part of the network and found that the non-linear components and the output context module influenced model performance the most (10% increase). Subsequently, the VLAAI network was evaluated on a holdout dataset of 26 subjects and on a publicly available unseen test dataset to assess generalization to unseen subjects and stimuli. No significant difference was found between the default test set and the holdout subjects, nor between the default test set and the public dataset, and the VLAAI network significantly outperformed all baseline models on the public dataset. We evaluated the effect of training set size by training models on data from 1 up to 80 subjects, revealing that performance follows a hyperbolic tangent function of the number of subjects. Finally, the VLAAI network was finetuned per subject to obtain subject-specific models. With 5 minutes of data or more, a significant improvement of 34% (from 0.18 to 0.25 median Pearson correlation) was found with regard to the subject-independent models.
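Decoder performance in this abstract is reported as the (median) Pearson correlation between the reconstructed and actual speech envelope. A minimal, dependency-free sketch of that metric, assuming the two signals are given as equal-length sample sequences:

```python
import math

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences of samples."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    # Covariance and variances around the sample means.
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)
```

A value of 1 means the decoded envelope is a perfectly scaled copy of the real one, 0 means no linear relationship; the reported medians around 0.19-0.25 reflect the low signal-to-noise ratio of EEG.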

Language: English

Citations: 36

Exploring neural tracking of acoustic and linguistic speech representations in individuals with post‐stroke aphasia
Jill Kries, Pieter De Clercq, Marlies Gillis, et al.

Human Brain Mapping, Journal Year: 2024, Volume and Issue: 45(8)

Published: May 26, 2024

Aphasia is a communication disorder that affects processing of language at different levels (e.g., acoustic, phonological, semantic). Recording brain activity via electroencephalography while people listen to a continuous story allows analysis of brain responses to acoustic and linguistic properties of speech. When the neural activity aligns with these speech properties, it is referred to as neural tracking. Even though measuring neural tracking of speech may present an interesting approach to studying aphasia in an ecologically valid way, it has not yet been investigated in individuals with stroke-induced aphasia. Here, we explored acoustic and linguistic speech representations in individuals with aphasia in the chronic phase after stroke and in age-matched healthy controls. We found decreased neural tracking of acoustic speech representations (envelope and envelope onsets) in individuals with aphasia. In addition, word surprisal displayed decreased amplitudes in individuals with aphasia around 195 ms over frontal electrodes, although this effect was not corrected for multiple comparisons. These results show that there is potential to capture language processing impairments in individuals with aphasia by measuring neural tracking of continuous speech. However, more research is needed to validate these results. Nonetheless, this exploratory study shows that neural tracking of naturalistic, continuous speech presents a powerful approach to studying aphasia.

Language: English

Citations: 14

Subcortical responses to music and speech are alike while cortical responses diverge
Shan Tong, Madeline S. Cappelloni, Ross K. Maddox, et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Jan. 8, 2024

Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher-level cognition. Studies of cortex have revealed distinct brain responses to music and speech, but the differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs. speech) on the way acoustics are encoded subcortically. In the study's second part, we considered the cortex. Our new analysis method resulted in cortical music and speech responses becoming more similar, but with remaining differences. Taken together, the results suggest that there is evidence for stimulus-class-dependent processing of music and speech at the cortical but not the subcortical level.

Language: English

Citations: 12

Envelope reconstruction of speech and music highlights stronger tracking of speech at low frequencies
Nathaniel J. Zuk, Jeremy W. Murphy, Richard B. Reilly, et al.

PLoS Computational Biology, Journal Year: 2021, Volume and Issue: 17(9), P. e1009358 - e1009358

Published: Sept. 17, 2021

The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one's cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes from EEG recorded during passive listening. We expected to see music reconstruction match speech within a narrow range of frequencies, but instead found that speech was reconstructed better than music at all frequencies we examined. Additionally, models trained on all stimulus types performed as well as or better than stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.

Language: English

Citations: 48

Research Hotspots and Trends of Peripheral Nerve Injuries Based on Web of Science From 2017 to 2021: A Bibliometric Analysis
Shiwen Zhang, Meiling Huang, Jincao Zhi, et al.

Frontiers in Neurology, Journal Year: 2022, Volume and Issue: 13

Published: May 20, 2022

Peripheral nerve injury (PNI) is very common in clinical practice; it often reduces patients' quality of life and imposes a serious medical burden on society. However, to date, there have been no bibliometric analyses of the PNI field from 2017 to 2021. This study aimed to provide a comprehensive overview of the current state of research and frontier trends in the field from a bibliometric perspective. Articles and reviews published from 2017 to 2021 were extracted from the Web of Science database. An online bibliometric platform, CiteSpace, and VOSviewer software were used to generate viewable maps and perform co-occurrence, co-citation, and burst analyses. Quantitative indicators such as the number of publications, citation frequency, h-index, and journal impact factor were analyzed using the "Create Citation Report" and "Journal Citation Reports" functions of the Web of Science database and Excel software. A total of 4,993 papers were identified. Annual output remained high, with an average of more than 998 publications per year, and the number of citations increased year by year, reaching a high of 22,272 in 2021. The United States and China had significant influence in the field, with Johns Hopkins University (USA) in a leading position. JESSEN KR and the JOURNAL OF NEUROSCIENCE were the most influential author and journal in the field, respectively. Meanwhile, we found that hot topics focused on the dorsal root ganglion (DRG) and satellite glial cells (SGCs) for neuropathic pain relief, and on combining tissue engineering techniques with control of the repair Schwann cell phenotype to promote nerve regeneration; these are not only the current focus but are also forecast to remain so in the future. This is the first bibliometric analysis of PNI research from 2017 to 2021, and its results can serve as a reliable source for researchers to quickly grasp key information in the field and identify potential research frontiers and directions.

Language: English

Citations: 29

Neural tracking of linguistic and acoustic speech representations decreases with advancing age
Marlies Gillis, Jill Kries, Maaike Vandermosten, et al.

NeuroImage, Journal Year: 2022, Volume and Issue: 267, P. 119841 - 119841

Published: Dec. 28, 2022

Background: Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. Goals: Our goals were to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing, and also to study acoustic processing across age. In particular, we focused on changes in spatial and temporal activation patterns in response to speech across the lifespan. Methods: 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while their EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and with measures of cognition, we investigated whether the observed age effects were mediated by these factors. Furthermore, we investigated whether there was an effect of age on hemisphere lateralization and on spatiotemporal patterns of the neural responses. Results: Linguistic speech processing declined with advancing age, and as age increased, the latency of certain aspects of linguistic processing increased. Acoustic neural tracking (NT) also increased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization in either younger or older adults during natural speech processing. Most effects were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in word-level linguistic neural tracking with advancing age is partially due to declining cognition rather than a robust effect of age alone. Conclusion: Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan. These changes may be traces of structural and/or functional change that occurs with advancing age.

Language: English

Citations: 29