Auditory Word Comprehension Is Less Incremental in Isolated Words
Phoebe Gaston, Christian Brodbeck, Colin Phillips, et al.

Neurobiology of Language, Journal Year: 2022, Volume and Issue: 4(1), P. 29 - 52

Published: Oct. 4, 2022

Partial speech input is often understood to trigger rapid and automatic activation of successively higher-level representations of words, from sound to meaning. Here we show evidence from magnetoencephalography that this type of incremental processing is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified word recognition process than is often assumed. We present evidence from isolated words that neural effects of phoneme probability, quantified by phoneme surprisal, are significantly stronger than (statistically null) effects of phoneme-by-phoneme lexical uncertainty, quantified by cohort entropy. In contrast, we find robust effects of both cohort entropy and phoneme surprisal during the perception of connected speech, with a significant interaction between the two contexts. This dissociation rules out models in which cohort entropy and phoneme surprisal are common indicators of a uniform process, even though these closely related information-theoretic measures both arise from the probability distribution of wordforms consistent with the input. We propose that phoneme surprisal effects reflect access to a lower level of representation of the auditory input (e.g., wordforms), while cohort entropy effects are task sensitive, driven by a competition process or a higher-level representation that is engaged late (or not at all) in the processing of single words.
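Both measures discussed in the abstract above are computed from the same probability distribution over wordforms consistent with the partial input (the cohort). A minimal sketch, using a hypothetical four-word lexicon with made-up probabilities, shows how phoneme surprisal and cohort entropy are derived from that shared distribution:

```python
import math

# Toy lexicon: wordform probabilities (hypothetical values, for illustration only).
lexicon = {"cat": 0.4, "cap": 0.3, "can": 0.2, "cab": 0.1}

def cohort(prefix, lex):
    """Wordforms consistent with the input heard so far, renormalized."""
    members = {w: p for w, p in lex.items() if w.startswith(prefix)}
    total = sum(members.values())
    return {w: p / total for w, p in members.items()}

def phoneme_surprisal(prefix, next_phoneme, lex):
    """-log2 P(next phoneme | input so far): how unexpected the phoneme is."""
    dist = cohort(prefix, lex)
    p_next = sum(p for w, p in dist.items() if w.startswith(prefix + next_phoneme))
    return -math.log2(p_next)

def cohort_entropy(prefix, lex):
    """Uncertainty (in bits) over which wordform is being heard."""
    dist = cohort(prefix, lex)
    return -sum(p * math.log2(p) for p in dist.values())

# After hearing "ca", all four words remain in the cohort; hearing "t" next:
print(round(phoneme_surprisal("ca", "t", lexicon), 3))  # -log2(0.4) ≈ 1.322
print(round(cohort_entropy("ca", lexicon), 3))          # ≈ 1.846 bits
```

Because both quantities fall out of the same renormalized cohort distribution, the dissociation between their neural effects reported above is what licenses the conclusion that they index different processes.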

Language: English

Incorporating models of subcortical processing improves the ability to predict EEG responses to natural speech

Elsa Lindboom, Aaron Nidiffer, Laurel H. Carney, et al.

Hearing Research, Journal Year: 2023, Volume and Issue: 433, P. 108767 - 108767

Published: April 10, 2023

Language: English

Citations: 7

Lexical surprisal shapes the time course of syntactic structure building
Sophie Slaats, Antje S. Meyer, Andrea E. Martin, et al.

Neurobiology of Language, Journal Year: 2024, Volume and Issue: 5(4), P. 942 - 980

Published: Jan. 1, 2024

When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data from 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of distributional and syntactic features to the variance in the neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high-surprisal words were associated with responses delayed by 150-190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses distributional information to infer syntactic structure, and highlight the importance of the role of time in this process.
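The cumulative model-comparison logic described above (does adding lexical surprisal as a feature improve a temporal response function model of the neural signal?) can be sketched with simulated data. The signal, kernels, lag count, and ridge penalty below are all invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged(x, n_lags):
    """Time-lagged design matrix: column k holds the feature delayed by k samples."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

# Simulated single-channel "neural" response driven by two features:
# an acoustic envelope and a sparse lexical-surprisal impulse train.
n, n_lags = 2000, 10
envelope = rng.standard_normal(n)
surprisal = rng.standard_normal(n) * (rng.random(n) < 0.05)  # impulses at "word onsets"
kernel_env = rng.standard_normal(n_lags)
kernel_sur = rng.standard_normal(n_lags)
y = lagged(envelope, n_lags) @ kernel_env + lagged(surprisal, n_lags) @ kernel_sur
y += 0.5 * rng.standard_normal(n)

def ridge_r2(X, y, alpha=1.0):
    """Fit a ridge TRF and return in-sample R^2 (a sketch; real analyses cross-validate)."""
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    resid = y - X @ w
    return 1 - resid.var() / y.var()

base = ridge_r2(lagged(envelope, n_lags), y)
full = ridge_r2(np.hstack([lagged(envelope, n_lags), lagged(surprisal, n_lags)]), y)
print(full > base)  # adding the surprisal feature explains extra variance
```

The comparison of `base` against `full` mirrors the cumulative approach: a feature earns its place only if it adds explained variance beyond the features already in the model.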

Language: English

Citations: 2

Neural encoding of linguistic speech cues is unaffected by cognitive decline, but decreases with increasing hearing impairment
Elena Bolt, Nathalie Giroud

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Aug. 17, 2024

The multivariate temporal response function (mTRF) is an effective tool for investigating the neural encoding of acoustic and complex linguistic features in natural continuous speech. In this study, we investigated how neural representations of speech derived from continuous stimuli are related to early signs of cognitive decline in older adults, taking into account the effects of hearing. Participants without (

Language: English
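As context for the mTRF method named in the abstract above, here is a minimal forward-model sketch: per-channel temporal response functions are estimated by ridge regression on a time-lagged stimulus. The channel count, lag count, kernels, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def lagged(x, n_lags):
    """Design matrix whose column k is the stimulus delayed by k samples."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

# Hypothetical forward (encoding) model: one stimulus feature drives
# 4 EEG channels, each through its own 8-sample response function.
n, n_lags, n_channels = 5000, 8, 4
stim = rng.standard_normal(n)
true_trfs = rng.standard_normal((n_lags, n_channels))
X = lagged(stim, n_lags)
eeg = X @ true_trfs + 0.3 * rng.standard_normal((n, n_channels))

# Ridge estimate of all channel TRFs at once (columns = channels).
alpha = 1.0
est_trfs = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)
err = np.abs(est_trfs - true_trfs).max()
print(err < 0.1)  # the estimated response functions recover the true ones
```

With enough data relative to the noise, the estimated kernels closely match the generating ones; in real analyses the TRF weights themselves are the object of interest, interpreted as the brain's impulse response to each stimulus feature.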

Citations: 2

Auditory EEG decoding challenge for ICASSP 2024
Lies Bollens, Corentin Puffay, Bernd Accou, et al.

Published: Aug. 30, 2024

This paper describes the auditory EEG challenge, organized as one of the Signal Processing Grand Challenges at ICASSP 2024. The challenge provides electroencephalogram (EEG) recordings of 105 subjects who listened to continuous speech, either audiobooks or podcasts, while their brain activity was recorded. The challenge consists of two tasks that relate EEG signals to the presented speech stimulus. The first task, called match-mismatch, is to determine which of five speech segments induced a given EEG segment. The second task, regression, is to reconstruct the Mel spectrogram of the speech from the EEG. Data from 85 subjects were provided as a training set so that participants could train their models on a relatively large dataset. The remaining 20 subjects were used as a held-out set for the evaluation step of the challenge.
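The match-mismatch task can be illustrated with a toy decision rule: correlate a feature decoded from the EEG with each candidate speech segment and choose the best match. Actual submissions train neural decoders; the correlation scorer and the simulated data here are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def match_mismatch(eeg_feature, candidates):
    """Pick the candidate speech segment whose envelope best correlates
    with a feature decoded from the EEG (toy stand-in for a trained decoder)."""
    scores = [np.corrcoef(eeg_feature, c)[0, 1] for c in candidates]
    return int(np.argmax(scores))

# Five hypothetical speech-envelope segments; the EEG-derived feature is a
# noisy copy of segment 3, so the decision rule should select index 3.
segments = [rng.standard_normal(640) for _ in range(5)]
eeg_feature = segments[3] + 0.5 * rng.standard_normal(640)
print(match_mismatch(eeg_feature, segments))  # → 3
```

The regression task replaces this forced choice with continuous reconstruction: instead of scoring five candidates, a model maps the EEG directly to a Mel spectrogram and is evaluated on reconstruction quality.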

Language: English

Citations: 2

Auditory Word Comprehension Is Less Incremental in Isolated Words
Phoebe Gaston, Christian Brodbeck, Colin Phillips, et al.

Neurobiology of Language, Journal Year: 2022, Volume and Issue: 4(1), P. 29 - 52

Published: Oct. 4, 2022


Language: English

Citations: 9