Visual speech enhances auditory onset timing and envelope tracking through distinct mechanisms
Cody Zhewei Cao, William C. Stacey, Vibhangini S. Wasade, et al.

bioRxiv (Cold Spring Harbor Laboratory), 2024.

Published: Nov. 24, 2024

Seeing the face of a speaker facilitates speech recognition in challenging listening environments. Prior work has shown that visual speech contains timing information that can aid auditory processing, yet how these signals are integrated within the auditory system during audiovisual perception remains poorly understood. Observation of preparatory mouth movements may initiate a phase reset of intrinsic oscillations, potentially sensitizing the auditory system to upcoming input, while observation of mouth movements after speech onset may facilitate entrainment to the speech envelope. Yet little work has been done to test whether visual speech enhances the encoding of speech onsets, envelope tracking, or both, and whether it acts through independent or overlapping mechanisms. To investigate this, we examined the ways in which visual speech alters theta-band power using human intracranial electroencephalography (iEEG) recordings from a large group of patients with epilepsy (n = 21). Before speech onset, visual speech elicited phase reset (increased inter-trial phase coherence, ITPC) throughout the superior temporal gyrus (STG), which is thought to enhance onset encoding. Following speech onset, visual speech modulated ITPC only at anterior STG electrodes, whereas the pre-onset effect was also present at posterior electrodes. Pre- and post-speech-onset effects were thus spatially and temporally dissociated, consistent with the hypothesis that onset-timing and envelope-tracking mechanisms are partially distinct. Crucially, congruent and incongruent speech, designed here to carry identical information about speech onset time but to differ in their temporal evolution, produced only a small difference in oscillatory activity across the STG, highlighting a more restricted role for ongoing entrainment. These results support the view that visual speech improves the temporal precision of auditory processing through two separate mechanisms, with speech onsets encoded across the entire STG.
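The key measure here, inter-trial phase coherence (ITPC), quantifies how consistently oscillatory phase aligns across trials at each time point; a phase reset shows up as an ITPC increase time-locked to an event. As a minimal sketch of that computation (not the authors' pipeline; the function name, filter settings, and synthetic data are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_itpc(epochs, fs, band=(4.0, 8.0)):
    """Inter-trial phase coherence (ITPC) in the theta band.

    epochs : array of shape (n_trials, n_samples); epoched single-channel
        iEEG time-locked to speech onset (hypothetical input layout).
    fs : sampling rate in Hz.

    Returns ITPC per time point: 0 = random phase across trials,
    1 = perfect phase alignment (i.e., a strong phase reset).
    """
    # Zero-phase band-pass filter each trial to the theta range.
    b, a = butter(4, band, btype="bandpass", fs=fs)
    theta = filtfilt(b, a, epochs, axis=-1)

    # Instantaneous phase from the analytic (Hilbert) signal.
    phase = np.angle(hilbert(theta, axis=-1))

    # ITPC: magnitude of the trial-averaged unit phase vector.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Toy usage: 21 trials of 2 s at 512 Hz of noise (ITPC should stay near 0).
rng = np.random.default_rng(0)
itpc = theta_itpc(rng.standard_normal((21, 1024)), fs=512.0)
```

Comparing such ITPC traces before versus after speech onset, electrode by electrode, is the kind of contrast that can dissociate a pre-onset phase reset from post-onset envelope tracking.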

Language: English

Imaging the dancing brain: Decoding sensory, motor and social processes during dyadic dance
Félix Bigand, Roberta Bianco, Sara F. Abalde, et al.

bioRxiv (Cold Spring Harbor Laboratory), 2024.

Published: Dec. 17, 2024

Abstract: Real-world social cognition requires processing and adapting to multiple dynamic information streams. Interpreting neural activity in such ecological conditions remains a key challenge for neuroscience. This study leverages advancements in de-noising techniques and multivariate modeling to extract interpretable EEG signals from pairs of participants engaged in spontaneous dyadic dance. Using multivariate temporal response functions (mTRFs), we investigated how music acoustics, self-generated kinematics, other-generated kinematics, and social coordination each uniquely contributed to neural activity. Electromyogram recordings from ocular, face, and neck muscles were also modelled to control for muscle artifacts. The mTRFs effectively disentangled neural activity associated with four processes: (I) auditory tracking of the music, (II) control of self-generated movements, (III) visual monitoring of the partner, and (IV) coordination accuracy. We show that the first three processes are driven by event-related potentials: the P50-N100-P200 complex triggered by acoustic events, the central lateralized readiness potential triggered by movement initiation, and the occipital N170 triggered by movement observation. Notably, a previously unknown neural marker encodes the spatiotemporal alignment between dancers, surpassing the encoding of self- or partner-related kinematics taken alone. This marker emerges when partners make visual contact, relies on visual cortical areas, and is specifically driven by movement observation rather than initiation. Using data-driven kinematic decomposition, we further show that vertical movements best drive observers' neural responses. These findings highlight how real-world neuroimaging, combined with multivariate modelling, can uncover the neural mechanisms underlying complex yet natural behaviors.

Significance statement: Real-world brain function involves integrating multiple information streams simultaneously. However, due to a shortfall of computational methods, laboratory-based neuroscience often examines such processes in isolation. Using multivariate modelling of EEG data from freely dancing dyads, we demonstrate that it is possible to tease apart physiologically established neural processes associated with music perception, motor production, and the observation of a dance partner. Crucially, we identify a previously unknown neural marker of coordination accuracy that goes beyond the contributions of self- and partner-generated biological behaviors, advancing our understanding of how the brain supports interactive social activities.
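For readers unfamiliar with the method, a multivariate temporal response function is a regularized linear model that maps time-lagged stimulus features (acoustics, self- and partner-generated kinematics, EMG regressors) onto each EEG channel, so that each feature's unique contribution can be read off the fitted weights. Below is a minimal ridge-regression sketch of that idea (an illustration under stated assumptions, not the authors' code; function names and defaults are hypothetical, and dedicated mTRF toolboxes exist):

```python
import numpy as np

def lagged_design(stim, lags):
    """Time-lagged design matrix from stim of shape (n_samples, n_features);
    every feature is replicated once per lag (zero-padded, not wrapped)."""
    n, f = stim.shape
    X = np.zeros((n, f * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, j * f:(j + 1) * f] = shifted
    return X

def fit_mtrf(stim, eeg, fs, tmin=-0.1, tmax=0.4, alpha=1.0):
    """Ridge-regularized mTRF from stimulus features to EEG channels.
    Returns weights of shape (n_lags, n_features, n_channels)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_design(stim, lags)
    # Closed-form ridge solution: w = (X'X + alpha*I)^(-1) X'y.
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ eeg)
    return w.reshape(len(lags), stim.shape[1], eeg.shape[1])

# Toy usage: 60 s at 64 Hz, 3 stimulus features, 32 EEG channels.
rng = np.random.default_rng(0)
w = fit_mtrf(rng.standard_normal((3840, 3)),
             rng.standard_normal((3840, 32)), fs=64.0)
```

Fitting all feature streams jointly in one design matrix is what lets the model assign variance uniquely to each stream, which is how a study like this can separate music tracking, self-motion, partner observation, and coordination accuracy.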

Language: English

Citations: 3
