Neural Markers of Speech Comprehension: Measuring EEG Tracking of Linguistic Speech Representations, Controlling the Speech Acoustics
Marlies Gillis, Jonas Vanthornhout, Jonathan Z. Simon

et al.

Journal of Neuroscience, Journal Year: 2021, Volume and Issue: 41(50), P. 10316 - 10329

Published: Nov. 3, 2021

When listening to speech, our brain responses time lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for their acoustic properties. Therefore, the responses might reflect unaccounted acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to an audiobook in 29 participants (22 females). We examined whether these representations contribute unique information over and beyond acoustic representations and each other. Indeed, not all representations were significantly tracked after controlling for acoustics, but phoneme surprisal, cohort entropy, word surprisal, and word frequency were. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are similarly tracked across different stories spoken by different readers. These results suggest that these representations characterize the linguistic content of speech.

SIGNIFICANCE STATEMENT For clinical applications, it would be desirable to develop a marker of speech comprehension derived from responses to continuous speech. Such a measure would allow behavior-free evaluation of speech understanding; this would open doors toward better quantification of speech understanding in populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, enabling better targeted interventions and fitting of hearing devices.

Language: English
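Neural tracking of this kind is typically quantified with a linear forward model (a temporal response function, TRF) that maps a stimulus feature, such as phoneme surprisal, onto the EEG and scores the correlation between predicted and measured responses. A minimal sketch on synthetic data, using ridge regression (the data, lag count, and regularization value are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Time-lagged design matrix with causal lags 0..n_lags-1."""
    X = np.zeros((len(stimulus), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:len(stimulus) - lag]
    return X

def fit_trf(stimulus, eeg, n_lags=16, ridge=1.0):
    """Ridge-regularized forward model (TRF) from stimulus feature to EEG."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)

rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)           # e.g., a phoneme-surprisal time series
true_trf = np.hanning(16)                  # simulated neural response kernel
eeg = lagged_design(stim, 16) @ true_trf + 0.5 * rng.standard_normal(2000)

w = fit_trf(stim, eeg)                     # estimated TRF weights
pred = lagged_design(stim, 16) @ w
r = np.corrcoef(pred, eeg)[0, 1]           # the "neural tracking" score
```

In practice the correlation is computed on held-out data (as in the cross-story test above) so that the score reflects generalization rather than overfitting.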

Neural Entrainment and Attentional Selection in the Listening Brain
Jonas Obleser, Christoph Kayser

Trends in Cognitive Sciences, Journal Year: 2019, Volume and Issue: 23(11), P. 913 - 926

Published: Oct. 9, 2019

Language: English

Citations: 391

A hierarchy of linguistic predictions during natural language comprehension
Micha Heilbron, Kristijan Armeni, Jan‐Mathijs Schoffelen

et al.

Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(32)

Published: Aug. 3, 2022

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.

Language: English

Citations: 229
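The contextual predictions in this study are quantified as word surprisal, the negative log probability of a word given its context, estimated with GPT-2. The same quantity can be illustrated with a toy add-one-smoothed bigram model (the corpus and smoothing scheme here are invented for illustration, not the paper's method):

```python
import math
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()  # toy corpus (invented)

vocab = set(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
contexts = Counter(corpus[:-1])              # counts of each context word

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev), with add-one smoothing."""
    p = (bigrams[(prev, word)] + 1) / (contexts[prev] + len(vocab))
    return -math.log2(p)

s_frequent = surprisal("the", "cat")   # attested continuation -> lower surprisal
s_rare = surprisal("the", "ran")       # unattested after "the" -> higher surprisal
```

A large language model plays the same role as the bigram table, but conditions on the full preceding context instead of one word.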

A neural population selective for song in human auditory cortex
Sam Norman-Haignere, Jenelle Feather, Dana Boebinger

et al.

Current Biology, Journal Year: 2022, Volume and Issue: 32(7), P. 1470 - 1484.e12

Published: Feb. 22, 2022

Language: English

Citations: 95

Continuous speech processing
Christian Brodbeck, Jonathan Z. Simon

Current Opinion in Physiology, Journal Year: 2020, Volume and Issue: 18, P. 25 - 31

Published: July 28, 2020

Language: English

Citations: 121

Cortical encoding of melodic expectations in human temporal cortex
Giovanni M. Di Liberto, Claire Pelofi, Roberta Bianco

et al.

eLife, Journal Year: 2020, Volume and Issue: 9

Published: Feb. 25, 2020

Humans' engagement in music rests on underlying elements such as the listeners' cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic …

Language: English

Citations: 109

Neural speech restoration at the cocktail party: Auditory cortex recovers masked speech of both attended and ignored speakers
Christian Brodbeck, Alex Jiao, L. Elliot Hong

et al.

PLoS Biology, Journal Year: 2020, Volume and Issue: 18(10), P. e3000883 - e3000883

Published: Oct. 22, 2020

Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. One possibility is that early cortical processing performs a spectrotemporal decomposition of the mixture, allowing the attended speech to be reconstructed via optimally weighted recombinations that discount regions where the sources heavily overlap. Using human magnetoencephalography (MEG) responses to a 2-talker mixture, we show evidence for an alternative possibility, in which early, active segregation occurs even in strongly spectrotemporally overlapping regions. Early (approximately 70-millisecond) responses to nonoverlapping features are seen for both talkers. When the competing talkers' features mask each other, the individual representations persist, but they occur with an approximately 20-millisecond delay. This suggests that auditory cortex recovers masked features, even if they occurred in the ignored speech. The existence of such noise-robust representations, present for both attended and ignored speech as part of the stream segregation process, could explain a range of behavioral effects of background speech.

Language: English

Citations: 107

Delta- and theta-band cortical tracking and phase-amplitude coupling to sung speech by infants
Adam Attaheri, Áine Ní Choisdealbha, Giovanni M. Di Liberto

et al.

NeuroImage, Journal Year: 2021, Volume and Issue: 247, P. 118698 - 118698

Published: Nov. 16, 2021

The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the envelope of adult-directed speech (ADS), particularly in the theta band. Acoustic analysis of infant-directed speech (IDS) has revealed significantly greater modulation energy than in ADS in an amplitude-modulation (AM) band centred on ∼2 Hz. Accordingly, cortical tracking of IDS by delta-band neural signals may be key to language acquisition. Speech also contains information within its higher-frequency bands (beta, gamma). Adult EEG and MEG studies reveal an oscillatory hierarchy, whereby low-frequency (delta, theta) phase dynamics temporally organize high-frequency amplitudes (phase-amplitude coupling, PAC). Whilst consensus is growing around the role of PAC in the matured adult brain, its development in speech processing is unexplored. Here, we examined the presence and maturation of low-frequency (<12 Hz) cortical tracking in infants by recording EEG longitudinally from 60 participants when aged 4, 7, and 11 months as they listened to nursery rhymes. After establishing stimulus-related responses in delta and theta, cortical tracking at each age was assessed in the delta, theta, and alpha [control] bands using a multivariate temporal response function (mTRF) method. Delta-beta, delta-gamma, theta-beta, and theta-gamma phase-amplitude coupling (PAC) was also assessed. Significant delta and theta, but not alpha, tracking was found. PAC was present at all ages, with both delta- and theta-driven coupling observed.

Language: English

Citations: 96
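Phase-amplitude coupling of the kind assessed here is often quantified with a mean-vector-length index: the magnitude of the amplitude-weighted average of the low-frequency phase vectors. A minimal sketch on synthetic signals (the frequencies and signals are illustrative; a real analysis would first extract phase and amplitude via band-pass filtering and the Hilbert transform):

```python
import numpy as np

def mean_vector_length(phase, amplitude):
    """PAC index: magnitude of the amplitude-weighted mean phase vector."""
    return np.abs(np.mean(amplitude * np.exp(1j * phase)))

fs, dur = 200, 10
t = np.arange(fs * dur) / fs
delta_phase = 2 * np.pi * 2 * t                 # phase of a 2 Hz (delta) rhythm
coupled_amp = 1 + np.cos(delta_phase)           # high-freq amplitude locked to delta phase
rng = np.random.default_rng(1)
uncoupled_amp = 1 + rng.uniform(-1, 1, t.size)  # amplitude unrelated to the phase

pac_coupled = mean_vector_length(delta_phase, coupled_amp)      # large index
pac_uncoupled = mean_vector_length(delta_phase, uncoupled_amp)  # near zero
```

Significance is then usually established against surrogate data (e.g., time-shifted amplitude series), since the raw index has no fixed null value.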

An Estimation Method of Continuous Non-Invasive Arterial Blood Pressure Waveform Using Photoplethysmography: A U-Net Architecture-Based Approach
Tasbiraha Athaya, Sunwoong Choi

Sensors, Journal Year: 2021, Volume and Issue: 21(5), P. 1867 - 1867

Published: March 7, 2021

Blood pressure (BP) monitoring has significant importance in the treatment of hypertension and different cardiovascular diseases. As photoplethysmogram (PPG) signals can be recorded non-invasively, much research has recently been conducted to measure BP using PPG. In this paper, we propose a U-Net deep learning architecture that uses the fingertip PPG signal as input to estimate the arterial blood pressure (ABP) waveform non-invasively. From the estimated waveform, we have also measured systolic BP (SBP), diastolic BP (DBP), and mean arterial pressure (MAP). The proposed method was evaluated on a subset of 100 subjects from two publicly available databases: MIMIC and MIMIC-III. The predicted ABP waveforms correlated with the reference waveforms with an average Pearson's correlation coefficient of 0.993. The mean absolute error is 3.68 ± 4.42 mmHg for SBP, 1.97 ± 2.92 mmHg for DBP, and 2.17 ± 3.06 mmHg for MAP, which satisfies the requirements of the Association for the Advancement of Medical Instrumentation (AAMI) standard and obtains grade A according to the British Hypertension Society (BHS) standard. The results show that the proposed method is an efficient way to estimate the ABP waveform directly from PPG.

Language: English

Citations: 92
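The AAMI standard referenced above requires, roughly, a mean estimation error within ±5 mmHg and an error standard deviation within 8 mmHg over the test population. A small sketch of that check on synthetic data (the thresholds follow AAMI; the data are invented):

```python
import numpy as np

def aami_pass(reference, estimate):
    """AAMI-style criterion: |mean error| <= 5 mmHg and error SD <= 8 mmHg."""
    err = np.asarray(estimate) - np.asarray(reference)
    return bool(abs(err.mean()) <= 5 and err.std(ddof=1) <= 8)

rng = np.random.default_rng(2)
ref = rng.uniform(90, 140, 200)        # hypothetical reference SBP readings (mmHg)
good = ref + rng.normal(0, 3, 200)     # small, unbiased estimation errors
bad = ref + rng.normal(10, 12, 200)    # biased, high-variance errors
```

The BHS grade is computed differently, from the cumulative percentage of absolute errors under 5, 10, and 15 mmHg.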

A model of listening engagement (MoLE)
Björn Herrmann, Ingrid S. Johnsrude

Hearing Research, Journal Year: 2020, Volume and Issue: 397, P. 108016 - 108016

Published: June 19, 2020

Language: English

Citations: 90

Neural tracking as a diagnostic tool to assess the auditory pathway
Marlies Gillis, Jana Van Canneyt, Tom Francart

et al.

Hearing Research, Journal Year: 2022, Volume and Issue: 426, P. 108607 - 108607

Published: Sept. 14, 2022

Language: English

Citations: 44