CoVox: A dataset of contrasting vocalizations
Camila Bruder, Pauline Larrouy-Maestri

Behavior Research Methods, Journal Year: 2025, Volume and Issue: 57(5)

Published: April 11, 2025

Abstract: The human voice is remarkably versatile and can vary greatly in sound depending on how it is used. An increasing number of studies have addressed the differences and similarities between the singing and speaking voice. However, finding adequate stimulus material that is at the same time controlled and ecologically valid is challenging, and most datasets lack variability in terms of vocal styles performed by the same person. Here, we describe a curated stimulus set of vocalizations in which 22 female singers sang melody excerpts in three contrasting styles (as a lullaby, as a pop song, as an opera aria) and spoke the text aloud in two styles (as if to an adult or to an infant). All productions were made with the songs’ original lyrics, in Brazilian Portuguese, and with a /lu/ sound. This dataset of 1320 recordings was validated through a forced-choice lab experiment (N = 25 for each stimulus) in which lay listeners could recognize the intended vocalization style with high accuracy (proportion of correct recognition above 69% for all styles). We also provide an acoustic characterization of the stimuli, depicting clear acoustic profiles for each style of vocalization. The recordings are freely available under a Creative Commons license and can be downloaded at https://osf.io/cgexn/ .

Language: English

The language network as a natural kind within the broader landscape of the human brain
Evelina Fedorenko, Anna A. Ivanova, Tamar I. Regev

et al.

Nature Reviews Neuroscience, Journal Year: 2024, Volume and Issue: 25(5), P. 289 - 312

Published: April 12, 2024

Language: English

Citations: 71

The human language system, including its inferior frontal component in “Broca’s area,” does not support music perception
Xuanyi Chen, Josef Affourtit, Rachel Ryskin

et al.

Cerebral Cortex, Journal Year: 2023, Volume and Issue: 33(12), P. 7904 - 7929

Published: April 1, 2023

Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses both to intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.

Language: English

Citations: 45

Language is primarily a tool for communication rather than thought
Evelina Fedorenko, Steven T. Piantadosi, Edward Gibson

et al.

Nature, Journal Year: 2024, Volume and Issue: 630(8017), P. 575 - 586

Published: June 19, 2024

Language: English

Citations: 37

Spectro-temporal acoustical markers differentiate speech from song across cultures
Philippe Albouy, Samuel A. Mehr, Roxane S. Hoyer

et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: June 6, 2024

Abstract: Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, with vocalizations being classified solely from their modulation patterns in all societies. Listeners unfamiliar with the cultures classify these vocalizations using similar cues as the machine learning algorithm. Finally, spectro-temporal modulations are better able to discriminate song from speech than a broad range of other variables, suggesting that modulation, a key feature of auditory neuronal tuning, accounts for a fundamental difference between these categories.
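The feature named in the abstract above, the spectro-temporal modulation spectrum, is the 2D Fourier transform of a spectrogram. As a rough, self-contained illustration of why that representation separates fast (speech-like) from slow (song-like) amplitude modulation, the toy sketch below computes a temporal-modulation centroid; the signals, modulation rates, and cutoffs here are invented for illustration and this is not the authors' actual pipeline.

```python
import numpy as np

def log_spectrogram(signal, n_fft=256, hop=64):
    """Magnitude STFT on a Hann window, log-compressed. Returns freq x time."""
    win = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(signal[i:i + n_fft] * win))
              for i in range(0, len(signal) - n_fft, hop)]
    return np.log1p(np.array(frames).T)

def temporal_modulation_centroid(signal, sr, n_fft=256, hop=64):
    """Centroid (Hz) of power along the temporal-modulation axis of the
    modulation spectrum (2D FFT of the log-spectrogram)."""
    spec = log_spectrogram(signal, n_fft, hop)
    mps = np.abs(np.fft.fft2(spec))                   # spectral x temporal modulations
    rates = np.fft.fftfreq(mps.shape[1], d=hop / sr)  # temporal-modulation axis, Hz
    keep = (rates > 0) & (rates < 30)                 # positive rates in the AM range
    power = mps[:, keep].sum(axis=0)
    return float((rates[keep] * power).sum() / power.sum())

# Toy stimuli: the same 220 Hz carrier, amplitude-modulated at a
# syllable-like rate (6 Hz, "speech-like") vs. a slow phrase-like
# rate (1.5 Hz, "song-like").
sr = 8000
t = np.arange(0, 2.0, 1.0 / sr)
carrier = np.sin(2 * np.pi * 220 * t)
speech_like = carrier * (1.0 + np.sin(2 * np.pi * 6.0 * t))
song_like = carrier * (1.0 + np.sin(2 * np.pi * 1.5 * t))

assert temporal_modulation_centroid(speech_like, sr) > temporal_modulation_centroid(song_like, sr)
```

A classifier trained on such modulation features (rather than on raw spectra) is what makes the cross-cultural result in the abstract possible: the axis being thresholded is rate of change, not pitch or timbre.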

Language: English

Citations: 23

A highly selective response to food in human visual cortex revealed by hypothesis-free voxel decomposition
Meenakshi Khosla, N. Apurva Ratan Murty, Nancy Kanwisher

et al.

Current Biology, Journal Year: 2022, Volume and Issue: 32(19), P. 4159 - 4171.e9

Published: Aug. 25, 2022

Language: English

Citations: 50

Music can be reconstructed from human auditory cortex activity using nonlinear decoding models
Ludovic Bellier, Anaïs Llorens, Déborah Marciano

et al.

PLoS Biology, Journal Year: 2023, Volume and Issue: 21(8), P. e3002176 - e3002176

Published: Aug. 15, 2023

Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling to short datasets acquired in single patients, paving the way for adding musical elements to brain-computer interface (BCI) applications.

Language: English

Citations: 29

Does the visual word form area split in bilingual readers? A millimeter-scale 7-T fMRI study
Minye Zhan, Christophe Pallier, Aakash Agrawal

et al.

Science Advances, Journal Year: 2023, Volume and Issue: 9(14)

Published: April 5, 2023

In expert readers, a brain region known as the visual word form area (VWFA) is highly sensitive to written words, exhibiting a posterior-to-anterior gradient of increasing sensitivity to orthographic stimuli whose statistics match those of real words. Using high-resolution 7-tesla functional magnetic resonance imaging (fMRI), we ask whether, in bilingual readers, distinct cortical patches specialize for different languages. In 21 English-French bilinguals, unsmoothed 1.2-millimeter fMRI revealed that the VWFA is actually composed of several small cortical patches highly selective for reading, with a word-similarity gradient, but near-complete overlap between the two languages. In 10 English-Chinese bilinguals, however, while most word-specific patches exhibited similar reading specificity and word-similarity gradients for Chinese and English, additional patches responded specifically to Chinese writing and, unexpectedly, to faces. Our results show that the acquisition of multiple writing systems can indeed tune the visual cortex differently, sometimes leading to the emergence of cortical patches specialized for a single language.

Language: English

Citations: 27

Universality, domain-specificity and development of psychological responses to music
Manvir Singh, Samuel A. Mehr

Nature Reviews Psychology, Journal Year: 2023, Volume and Issue: 2(6), P. 333 - 346

Published: May 17, 2023

Language: English

Citations: 25

Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions
Greta Tuckute, Jenelle Feather, Dana Boebinger

et al.

PLoS Biology, Journal Year: 2023, Volume and Issue: 21(12), P. e3002366 - e3002366

Published: Dec. 13, 2023

Models that predict brain responses to stimuli provide one measure of understanding a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system, but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific tuning properties, with overall best predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory brain responses in their entirety.
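Model-brain comparisons of the kind described above are typically scored by fitting a regularized linear map from a model stage's activations to measured voxel responses and correlating held-out predictions with the data. The sketch below is a minimal, self-contained version on synthetic data, not the paper's pipeline; the array sizes, noise level, and `alpha` are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: activations of one model stage for 200 "sounds"
# (50 features) and responses of 10 "voxels" that are noisy linear
# functions of those activations.
X = rng.normal(size=(200, 50))
W = rng.normal(size=(50, 10))
Y = X @ W + 0.5 * rng.normal(size=(200, 10))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Fit on half of the "sounds", evaluate voxelwise correlation on the rest.
B = ridge_fit(X[:100], Y[:100])
pred = X[100:] @ B
r = [np.corrcoef(pred[:, v], Y[100:, v])[0, 1] for v in range(Y.shape[1])]
assert min(r) > 0.5
```

Repeating this fit for every model stage and brain region, then asking which stage predicts which region best, is what yields the stage-to-region correspondence the abstract reports.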

Language: English

Citations: 25

A unifying framework for functional organization in early and higher ventral visual cortex
Eshed Margalit, Hyodong Lee, Dawn Finzi

et al.

Neuron, Journal Year: 2024, Volume and Issue: 112(14), P. 2435 - 2451.e7

Published: May 10, 2024

Language: English

Citations: 14