Complexity in speech and music listening via neural manifold flows

Claudio Runfola, M. Neri, Daniele Schön

et al.

Network Neuroscience, Journal Year: 2024, Volume and Issue: 9(1), P. 146 - 158

Published: Nov. 21, 2024

Understanding the complex neural mechanisms underlying speech and music perception remains a multifaceted challenge. In this study, we investigated these dynamics using human intracranial recordings. Employing a novel approach based on low-dimensional reduction techniques, the Manifold Density Flow (MDF), we quantified the complexity of brain dynamics during naturalistic speech and music listening and during the resting state. Our results reveal higher complexity in the patterns of interdependence between different brain regions during listening compared with rest, suggesting that cognitive demands drive brain dynamics toward states not observed at rest. Moreover, speech listening has more complexity than music listening, highlighting nuanced differences between these two auditory domains. Additionally, we validated the efficacy of the MDF method through experimentation with a toy model and compared its effectiveness in capturing the complexity induced by cognitive tasks against another technique established in the literature. Overall, our findings provide a new way to quantify the complexity of brain activity by studying its temporal evolution on a low-dimensional manifold, offering insights that are invisible to traditional methodologies in the context of speech and music perception.
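To make the idea concrete, here is a minimal Python sketch of a density-flow style complexity measure, assuming PCA as a stand-in for the paper's low-dimensional reduction and surrogate random data in place of intracranial recordings; it illustrates the general idea only, not the authors' actual MDF implementation.

# Hypothetical sketch in the spirit of Manifold Density Flow (MDF);
# the actual method of Runfola et al. may differ in every detail.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 64))   # surrogate recording: samples x channels

# Step 1: project high-dimensional activity onto a low-dimensional manifold.
Z = PCA(n_components=2).fit_transform(X)

# Step 2: estimate the density of visited states in each time window.
def window_density(z, bins, ranges):
    h, _, _ = np.histogram2d(z[:, 0], z[:, 1], bins=bins, range=ranges)
    return h / h.sum()

ranges = [[Z[:, 0].min(), Z[:, 0].max()], [Z[:, 1].min(), Z[:, 1].max()]]
win = 500
densities = [window_density(Z[i:i + win], 20, ranges)
             for i in range(0, len(Z) - win + 1, win)]

# Step 3: quantify "flow" as the change of density between successive windows
# (total variation distance); richer dynamics yield a larger average flow.
flow = [0.5 * np.abs(d2 - d1).sum() for d1, d2 in zip(densities, densities[1:])]
print(f"mean density flow: {np.mean(flow):.3f}")

On random data the flow sits near its floor; the abstract's claim is that listening conditions shift such a measure upward relative to rest.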

Language: English

The human language system, including its inferior frontal component in “Broca’s area,” does not support music perception
Xuanyi Chen, Josef Affourtit, Rachel Ryskin

et al.

Cerebral Cortex, Journal Year: 2023, Volume and Issue: 33(12), P. 7904 - 7929

Published: April 1, 2023

Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find such overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed the responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses both to intact and to structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia who cannot judge sentence grammaticality perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.

Language: English

Citations: 45

Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music
Alice Vivien Barchet, Molly J. Henry, Claire Pelofi

et al.

Communications Psychology, Journal Year: 2024, Volume and Issue: 2(1)

Published: Jan. 3, 2024

Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in their dominant temporal structure. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception task and a synchronization task involving syllable and piano tone sequences, using motor effectors typically associated with speech (whispering) and music (finger-tapping), were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although synchronization performance was generally better at slow rates, the two domains exhibited distinct rate preferences. Finger-tapping was advantaged compared with whispering at slow but not at faster rates, with synchronization being effector-dependent at slow rates but highly correlated across effectors at faster rates. Perception was predicted by a general finger-tapping synchronization component. Our data suggest partially independent rhythmic timing mechanisms for speech and music, possibly reflecting differential recruitment of cortical motor circuitry.
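As a hedged illustration of how synchronization performance of this kind is commonly scored (not necessarily the authors' pipeline), the Python snippet below computes the mean resultant vector length of tap-to-beat relative phases for a simulated ~2 Hz tapping task; the timing jitter and stimulus are invented for the example.

# Circular measure of auditory-motor synchronization: values near 1 indicate
# tight phase locking of taps to the beat, values near 0 indicate no locking.
import numpy as np

rate_hz = 2.0                                     # slow condition (~2 Hz)
period = 1.0 / rate_hz
rng = np.random.default_rng(1)
beats = np.arange(0.0, 30.0, period)              # 30 s of isochronous beats
taps = beats + rng.normal(0.0, 0.03, beats.size)  # simulated taps with jitter

# Relative phase of each tap within the beat cycle, mapped to radians.
phases = 2 * np.pi * ((taps - beats) % period) / period
sync_strength = np.abs(np.mean(np.exp(1j * phases)))
print(f"synchronization strength at {rate_hz} Hz: {sync_strength:.3f}")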

Language: English

Citations: 1

The human auditory cortex concurrently tracks syllabic and phonemic timescales via acoustic spectral flux
J.P. Giroud, Agnès Trébuchon, Manuel Mercier

et al.

Science Advances, Journal Year: 2024, Volume and Issue: 10(51)

Published: Dec. 20, 2024

Dynamical theories of speech processing propose that the auditory cortex parses acoustic information in parallel at the syllabic and phonemic timescales. We developed a paradigm to independently manipulate both linguistic timescales and acquired intracranial recordings from 11 epileptic patients listening to French sentences. Our results indicate that (i) both timescales are reflected in the acoustic spectral flux; (ii) during comprehension, the auditory cortex tracks the syllabic timescale in the theta range, while neural activity in the alpha-beta range phase locks to the phonemic timescale; (iii) these neural dynamics occur simultaneously and share a joint spatial location; (iv) the spectral flux embeds the two timescales, in the theta and low-beta ranges, across 17 natural languages. These findings help us understand how the human brain extracts acoustic information from the continuous speech signal at multiple timescales simultaneously, a prerequisite for subsequent linguistic processing.
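Spectral flux itself is a standard acoustic feature: the frame-to-frame positive change in the magnitude spectrogram, summed over frequencies. The Python sketch below computes it for a synthetic tone with a ~4 Hz (roughly syllabic-rate) amplitude envelope; the window and hop sizes are arbitrary example choices, not the study's parameters.

# Spectral flux = sum of positive frame-to-frame spectrogram differences.
import numpy as np
from scipy.signal import stft

fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
envelope = (1 + np.sign(np.sin(2 * np.pi * 4 * t))) / 2   # 4 Hz on/off bursts
audio = np.sin(2 * np.pi * 220 * t) * envelope

f, frames, S = stft(audio, fs=fs, nperseg=512, noverlap=384)
mag = np.abs(S)
flux = np.sum(np.maximum(mag[:, 1:] - mag[:, :-1], 0.0), axis=0)
print(f"{flux.size} flux values at {fs / (512 - 384):.1f} Hz frame rate")

The resulting flux time series peaks at sound onsets, consistent with the abstract's claim that this single feature reflects both slow (syllabic) and faster (phonemic) modulation timescales.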

Language: English

Citations: 1

Spectrotemporal cues and attention jointly modulate fMRI network topology for sentence and melody perception
Felix Haiduk, Robert J. Zatorre, Lucas Benjamin

et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: March 6, 2024

Speech and music are two fundamental modes of human communication. Lateralisation of key processes underlying their perception has been related both to the distinct sensitivity to low-level spectrotemporal acoustic features and to top-down attention. However, the interplay between bottom-up and top-down processing needs to be clarified. In the present study, we investigated the contribution of acoustics and of attention to melodies or sentences to lateralisation in fMRI functional network topology. We used sung speech stimuli selectively filtered in the temporal and spectral modulation domains, with crossed and balanced verbal and melodic content. Perception of speech decreased with degradation of temporal information, whereas perception of melodies decreased with spectral degradation. Applying graph theoretical metrics to functional connectivity matrices, we found that local clustering, reflecting functional specialisation, linearly increased when the acoustic cues crucial for the task goal were incrementally degraded. These effects occurred in a bilateral fronto-temporo-parietal network for processing temporally degraded sentences and in right auditory regions for processing spectrally degraded melodies. In contrast, global network topology remained stable across conditions. These findings suggest that lateralisation of auditory processing partially depends on an interplay of acoustic cues and task goals under attentional demands.
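For readers unfamiliar with the graph metrics involved, the Python sketch below computes local clustering (the per-node measure the study links to functional specialisation) alongside one global metric on a thresholded connectivity matrix; the random data and the 0.1 threshold are illustrative assumptions, not the study's parameters.

# Local clustering on a binarised functional connectivity graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
ts = rng.standard_normal((200, 30))      # surrogate BOLD series: time x regions
fc = np.corrcoef(ts, rowvar=False)       # region-by-region correlation matrix
np.fill_diagonal(fc, 0.0)

adj = (np.abs(fc) > 0.1).astype(int)     # binarise at an arbitrary threshold
G = nx.from_numpy_array(adj)

local_clustering = nx.clustering(G)      # per-node clustering coefficient
print(f"mean local clustering: {np.mean(list(local_clustering.values())):.3f}")
print(f"global efficiency:     {nx.global_efficiency(G):.3f}")

Comparing the per-node clustering values across conditions, while checking that a global measure stays flat, mirrors the kind of contrast the abstract describes.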

Language: English

Citations: 1
