Sustained responses and neural synchronization to amplitude and frequency modulation in sound change with age DOI
Björn Herrmann, Burkhard Maeß, Ingrid S. Johnsrude

et al.

Hearing Research, Journal Year: 2022, Volume and Issue: 428, P. 108677 - 108677

Published: Dec. 17, 2022

Language: English

Masking of the mouth area impairs reconstruction of acoustic speech features and higher-level segmentational features in the presence of a distractor speaker DOI Creative Commons
Chandra Leon Haider, Nina Suess, Anne Hauswald

et al.

NeuroImage, Journal Year: 2022, Volume and Issue: 252, P. 119044 - 119044

Published: Feb. 28, 2022

Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, when confronted with a degraded acoustic signal, congruent visual input promotes comprehension. When this visual input is masked, speech comprehension consequently becomes more difficult. But it remains inconclusive which levels of speech processing are affected under which circumstances by occluding the mouth area. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask, while we measured the brain activity of normal-hearing participants via magnetoencephalography (MEG). We additionally added trials with a distractor speaker in order to create an ecologically difficult listening situation. A decoding model trained on clear AV speech was used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the speech envelope and spectral features (i.e. pitch and formant frequencies), while higher-level segmentation features (phoneme and word onsets) were especially impaired in difficult listening situations. As the surgical face masks in our study only mildly affected the speech acoustics, we interpret our findings as a result of the missing visual input. Our findings extend previous behavioural results, demonstrating that occluding the mouth area deprives listeners of contextually relevant visual information for speech processing.
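The decoding ("backward") model described in this abstract, which reconstructs stimulus features from brain recordings, is commonly implemented as ridge regression over time-lagged sensor data. The sketch below is an illustrative stand-in using synthetic data, not the authors' pipeline; the channel count, lag range, and regularization strength are arbitrary assumptions:

```python
import numpy as np

def lag_matrix(x, lags):
    """Stack time-lagged copies of multichannel data x (T, C) -> (T, C*len(lags))."""
    T, C = x.shape
    X = np.zeros((T, C * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * C:(i + 1) * C] = x[:T - lag]
    return X

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X'X + alpha*I)^(-1) X'y."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + alpha * np.eye(XtX.shape[0]), X.T @ y)

# Synthetic stand-in data: 10 "sensors" linearly driven by a common envelope
rng = np.random.default_rng(0)
T, C = 2000, 10
env = rng.standard_normal(T)                      # stand-in speech envelope
meg = np.outer(env, rng.standard_normal(C)) + 0.5 * rng.standard_normal((T, C))

X = lag_matrix(meg, list(range(10)))              # decoder spans 10 sample lags
w = ridge_fit(X, env, alpha=10.0)
recon = X @ w                                     # reconstructed envelope
r = np.corrcoef(recon, env)[0, 1]                 # reconstruction accuracy
```

In the study's design, such a decoder would be trained on the clear audiovisual condition and evaluated by correlating reconstructed features with the true ones in each masked condition.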

Language: English

Citations

24

More than words: Neurophysiological correlates of semantic dissimilarity depend on comprehension of the speech narrative DOI
Michael P. Broderick, Nathaniel J. Zuk, Andrew Anderson

et al.

European Journal of Neuroscience, Journal Year: 2022, Volume and Issue: 56(8), P. 5201 - 5214

Published: Aug. 22, 2022

Abstract Speech comprehension relies on the ability to understand words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word's semantic dissimilarity to preceding words. Although the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they may actually reflect word-to-word changes in semantic content without the need for narrative-level comprehension on the part of the listener. To test this, we recorded electroencephalography from subjects who listened to speech presented either in its original, narrative form, or after scrambling the word order by varying amounts. This manipulation affected the ability to comprehend the narrative but not to recognise individual words. Neural indices of semantic dissimilarity and low-level acoustic processing were derived for each condition using the temporal response function approach. Signatures of semantic dissimilarity were observed when speech was unscrambled or minimally scrambled and understood as coherent speech. The same markers were absent at higher levels of scrambling, as comprehension dropped. In contrast, word recognition remained high and neural measures related to envelope tracking did not vary significantly across conditions. This supports the previous claim that these indices reflect the understanding of words based on their context and the listener's narrative-level comprehension, rather than the words' meanings relative to one another alone. It also highlights the insensitivity of low-level acoustic tracking measures to comprehension.

Language: English

Citations

23

Neural envelope tracking predicts speech intelligibility and hearing aid benefit in children with hearing loss DOI Open Access
Tilde Van Hirtum, Ben Somers, Benjamin Dieudonné

et al.

Hearing Research, Journal Year: 2023, Volume and Issue: 439, P. 108893 - 108893

Published: Oct. 4, 2023

Language: English

Citations

16

Speech onsets and sustained speech contribute differentially to delta and theta speech tracking in auditory cortex DOI
Nikos Chalas, Christoph Daube, Daniel S. Kluger

et al.

Cerebral Cortex, Journal Year: 2023, Volume and Issue: 33(10), P. 6273 - 6281

Published: Jan. 10, 2023

Abstract When we attentively listen to an individual's speech, our brain activity dynamically aligns to the incoming acoustic input at multiple timescales. Although this systematic alignment between ongoing brain activity and speech in auditory areas is well established, the acoustic events that drive this phase-locking are not fully understood. Here, we use magnetoencephalographic recordings of 24 human participants (12 females) while they were listening to a 1 h story. We show that whereas speech–brain coupling is associated with sustained fluctuations of the speech envelope in the theta-frequency range (4–7 Hz), tracking in the low-frequency delta range (below 1 Hz) was strongest around speech onsets, like the beginning of a sentence. Crucially, delta tracking in bilateral auditory areas was not sustained after onsets, proposing that delta tracking during continuous speech perception is driven by onsets. We conclude that both onset and sustained components of speech contribute differentially to tracking in the delta- and theta-frequency bands, orchestrating the sampling of continuous speech. Thus, the results suggest a temporal dissociation of acoustically driven oscillatory tracking, providing valuable implications for the orchestration of speech sampling at multiple time scales.

Language: English

Citations

13

Neural tracking measures of speech intelligibility: Manipulating intelligibility while keeping acoustics unchanged DOI Creative Commons
I. M. Dushyanthi Karunathilake, Joshua P. Kulasingham, Jonathan Z. Simon

et al.

Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(49)

Published: Nov. 30, 2023

Neural speech tracking has advanced our understanding of how brains rapidly map an acoustic speech signal onto linguistic representations and ultimately meaning. It remains unclear, however, how speech intelligibility is related to the corresponding neural responses. Many studies addressing this question vary the level of intelligibility by manipulating the acoustic waveform, but this makes it difficult to cleanly disentangle the effects of intelligibility from underlying acoustical confounds. Here, using magnetoencephalography recordings, we study neural measures of speech intelligibility while keeping the acoustics strictly unchanged. Acoustically identical degraded speech stimuli (three-band noise-vocoded, ~20 s duration) are presented twice, with the second presentation preceded by the original (nondegraded) version of the speech. This intermediate priming, which generates a "pop-out" percept, substantially improves the intelligibility of the second degraded passage. We investigate how intelligibility and acoustic structure affect neural representations using multivariate temporal response functions (mTRFs). As expected, behavioral results confirm that perceived speech clarity is improved by priming. mTRF analysis reveals that auditory representations (speech envelope and envelope onset) are not affected by priming, only by the acoustics (bottom-up driven). Critically, our findings suggest that neural representations of the segmentation of sounds into words emerge with better speech intelligibility, most strongly at a later (~400 ms latency) word processing stage, in prefrontal cortex, in line with the engagement of top-down mechanisms associated with priming. Taken together, we show that word-level neural representations may provide some objective measures of speech comprehension.
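The multivariate temporal response function (mTRF) at the core of this study is a linear forward model mapping a stimulus feature (e.g., the speech envelope) to the neural response through a set of time lags. Below is a minimal single-channel sketch on synthetic data; the kernel shape, lag window, and ridge parameter are assumptions for illustration only:

```python
import numpy as np

def lagged(stim, n_lags):
    """Design matrix of time-lagged stimulus copies: (T,) -> (T, n_lags)."""
    T = len(stim)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:T - lag]
    return X

rng = np.random.default_rng(2)
T, n_lags = 3000, 15
env = rng.standard_normal(T)                       # stimulus feature (envelope)
true_trf = np.exp(-np.arange(n_lags) / 5.0)        # assumed impulse response
eeg = np.convolve(env, true_trf)[:T] + rng.standard_normal(T)  # one noisy channel

X = lagged(env, n_lags)
alpha = 1.0                                        # ridge regularization
trf = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)
pred = X @ trf                                     # predicted neural response
r = np.corrcoef(pred, eeg)[0, 1]                   # "neural tracking" accuracy
```

Prediction accuracy r is the usual neural-tracking measure; in a multivariate model, each additional feature (envelope onsets, word boundaries, etc.) contributes its own block of lag columns to X.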

Language: English

Citations

13

Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field DOI Creative Commons
Florine L. Bachmann, Joshua P. Kulasingham, Kasper Eskelund

et al.

Trends in Hearing, Journal Year: 2024, Volume and Issue: 28

Published: Jan. 1, 2024

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, subcortical responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses in the sound-field, and assess the amount of data needed to estimate subcortical TRFs. Electroencephalography (EEG) was recorded from 24 normal-hearing participants while they listened to clicks and stories presented via loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data were sufficient for all participants to show clear wave V peaks in both conditions; sound-field TRFs were highly consistent with earphone conditions and with click ABRs. However, the sound-field condition required slightly more data (16 minutes) to achieve clear peaks compared to earphones (12 minutes), possibly due to effects of room acoustics. By investigating sound-field responses, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions and may lead to improved hearing evaluations and smart hearing technologies.

Language: English

Citations

4

Exploring Age Differences in Absorption and Enjoyment during Story Listening DOI Creative Commons
Signe Lund Mathiesen, Stephen C. Van Hedger, Vanessa C. Irsik

et al.

Deleted Journal, Journal Year: 2024, Volume and Issue: 6(2), P. 667 - 684

Published: June 13, 2024

Using naturalistic spoken narratives to investigate speech processes and comprehension is becoming increasingly popular in experimental hearing research. Yet, little is known about how individuals engage with story materials and how listening experiences change with age. We investigated absorption in the context of listening to stories, explored predictive factors for engagement, and examined the utility of a scale developed for written materials to assess engagement with auditory materials. Adults aged 20–78 years (N = 216) participated in an online study. Participants listened to one of ten stories intended to be engaging to different degrees and rated them in terms of absorption and enjoyment. Participants of all ages rated the stories similarly absorbing and enjoyable. Further, higher mood scores predicted higher enjoyment ratings. Factor analysis showed that scale items approximately grouped according to the original dimensions, suggesting that absorption during listening may be similar to absorption during reading, although certain items discriminated less effectively between more and less engaging stories. The present study provides novel insights into story engagement in adults and supports using naturalistic spoken stimuli in hearing research.

Language: English

Citations

4

Neural speech tracking contribution of lip movements predicts behavioral deterioration when the speaker's mouth is occluded DOI Creative Commons
Patrick Reisinger, Marlies Gillis, Nina Suess

et al.

eNeuro, Journal Year: 2025, Volume and Issue: unknown, P. ENEURO.0368 - 24.2024

Published: Jan. 16, 2025

Observing the lip movements of a speaker facilitates speech understanding, especially in challenging listening situations. Converging evidence from neuroscientific studies shows stronger neural responses to audiovisual stimuli compared to audio-only stimuli. However, the interindividual variability of this contribution of lip movement information, and its consequences for behavior, are unknown. We analyzed source-localized magnetoencephalographic (MEG) responses of 29 normal-hearing participants (12 female) listening to audiovisual speech, both with and without the speaker wearing a surgical face mask, and in the presence or absence of a distractor speaker. Using temporal response functions (TRFs) to quantify neural speech tracking, we show that neural responses to lip movements are, in general, enhanced when listening is challenging. After controlling for speech acoustics, lip movements contribute to neural speech tracking particularly when a distractor speaker is present. However, the extent of this visual contribution to tracking varied greatly among participants. Probing its behavioral relevance, we demonstrate that individuals with a higher lip movement contribution in terms of neural speech tracking show a stronger drop in comprehension and an increase in perceived difficulty when the mouth is occluded by a surgical face mask. By contrast, no effect was found when the mouth was not occluded. We provide novel insights into how the lip movement contribution to neural speech tracking varies among individuals, revealing negative behavioral consequences when visual speech is absent. Our results also offer potential implications for objective assessments of audiovisual speech perception. Significance Statement: In complex auditory environments, simultaneous conversations pose a challenge to speech comprehension. We investigated, at the neural level, how observing lip movements aids comprehension in such situations, and what the behavioral consequences are when this visual information is unavailable. Observing lip movements enhances neural speech tracking, and individuals who rely more on lip movements show a stronger behavioral deterioration when the speaker wears a face mask. Remarkably, this was not the case when no mask was worn. Our findings reveal interindividual differences in audiovisual speech processing with potential clinical applications.

Language: English

Citations

0

EEG reveals brain network alterations in chronic aphasia during natural speech listening DOI Creative Commons
Ramtin Mehraram, Jill Kries, Pieter De Clercq

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Jan. 19, 2025

Aphasia is a common consequence of stroke which affects language processing. In search of an objective biomarker for aphasia, we used EEG to investigate how functional network patterns in the cortex are affected in persons with post-stroke chronic aphasia (PWA) compared to healthy controls (HC) while they are listening to a story. EEG was recorded from 22 HC and 27 PWA while they listened to a 25-min-long story. Functional connectivity between scalp regions was measured with the weighted phase lag index. The Network-Based Statistics toolbox was used to detect altered network patterns and correlations with behavioural tests within the PWA group. Differences in network geometry were assessed by means of graph theory and a targeted node-attack approach. Group-classification accuracy was obtained with a support vector machine classifier. PWA showed stronger inter-hemispheric connectivity in the theta-band (4.5–7 Hz), whilst a weaker subnetwork emerged in the low-gamma band (30.5–49 Hz). Two subnetworks correlated with semantic fluency in the delta- (1–4 Hz) and low-gamma-bands, respectively. In the theta-band network, alterations emerged at both the local and global level, whilst only local changes were found in the low-gamma-band network. Network metrics discriminated PWA from HC with AUC = 83%. Overall, we demonstrate the potential of EEG-network analysis for the development of informative biomarkers to assess natural speech processing in chronic aphasia. We hypothesize that the detected alterations reflect compensatory mechanisms associated with recovery.
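The weighted phase lag index (wPLI) used here to measure functional connectivity downweights near-zero-lag interactions, which often reflect volume conduction rather than true coupling, by normalizing the imaginary part of the cross-spectrum. A minimal sketch on synthetic signals follows; the segmentation and windowing choices are illustrative, not the authors' exact estimator:

```python
import numpy as np

def wpli(x, y, n_seg=20):
    """Weighted phase lag index per frequency, from segmented FFT cross-spectra."""
    seg_len = len(x) // n_seg
    win = np.hanning(seg_len)
    im_csd = []
    for k in range(n_seg):
        sl = slice(k * seg_len, (k + 1) * seg_len)
        X = np.fft.rfft(x[sl] * win)
        Y = np.fft.rfft(y[sl] * win)
        im_csd.append(np.imag(X * np.conj(Y)))     # imaginary cross-spectrum
    im_csd = np.array(im_csd)                      # (segments, frequencies)
    num = np.abs(im_csd.mean(axis=0))              # consistent-sign interactions
    den = np.abs(im_csd).mean(axis=0) + 1e-12      # total imaginary magnitude
    return num / den

# Two noisy signals sharing a 6 Hz (theta-band) component at a 90-degree lag
fs = 200
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 6 * t) + rng.standard_normal(len(t))
y = np.sin(2 * np.pi * 6 * t - np.pi / 2) + rng.standard_normal(len(t))

w = wpli(x, y)
freqs = np.fft.rfftfreq(len(t) // 20, 1 / fs)      # per-segment frequency axis
peak = w[np.argmin(np.abs(freqs - 6))]             # wPLI near 6 Hz
```

Band-specific networks (e.g., theta, 4.5–7 Hz) are then obtained by averaging wPLI over the corresponding frequency bins for every electrode pair.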

Language: English

Citations

0

Resilience and vulnerability of neural speech tracking after hearing restoration DOI Creative Commons
Alessandra Federici, Marta Fantoni, Francesco Pavani

et al.

Communications Biology, Journal Year: 2025, Volume and Issue: 8(1)

Published: March 1, 2025

The role of early auditory experience in the development of neural speech tracking remains an open question. To address this issue, we measured neural speech tracking in children who had, or lacked, functional hearing during their first year of life, after their hearing was restored with cochlear implants (CIs), as well as in hearing controls (HC). Neural speech tracking in children with CIs is unaffected by the absence of perinatal auditory experience. CI users and HC exhibit a similar magnitude of tracking at short timescales of brain activity. However, tracking is delayed in CI users, and its timing depends on the age of hearing restoration. Conversely, at longer timescales, tracking is dampened in participants using CIs, thereby accounting for their speech comprehension deficits. These findings highlight the resilience of low-level sensory processing while also demonstrating the vulnerability of higher-level processing to a lack of early auditory experience. A phase of early deafness thus affects neural speech tracking at different timescales differently: tracking at short timescales is preserved but delayed, whereas tracking at longer timescales is weaker, impacting comprehension.

Language: English

Citations

0