Pleasure and Beyond
Robert J. Zatorre

Oxford University Press eBooks, Journal Year: 2023, Volume and Issue: unknown, P. 260 - 288

Published: Nov. 23, 2023

Abstract: Most people report that music reliably generates emotions. Emotional arousal can be traced to the interaction of mechanisms involved in perception, memory, and other cognitive functions with the striatum, amygdala, and other limbic structures. Several factors are associated with portraying and conveying emotion, including social aspects, movement cues, vocal features, roughness or dissonance, and memory. Preference for specific musical pieces and styles is strongly influenced by emotions experienced during adolescence, which may be linked to a dopaminergic surge in the striatum at that time of life. Music is used for self-regulation via psychological mechanisms, such as reappraisal, that seem to involve top-down modulation from the frontal cortex onto the amygdala.

Language: English

Beyond speech: Exploring diversity in the human voice
Andrey Anikin, Valentina Cartei, Katarzyna Pisanski, et al.

iScience, Journal Year: 2023, Volume and Issue: 26(11), P. 108204 - 108204

Published: Oct. 14, 2023

Humans have evolved voluntary control over vocal production for speaking and singing, while preserving the phylogenetically older system of spontaneous nonverbal vocalizations such as laughs and screams. To test for systematic acoustic differences between these domains, we analyzed a broad, cross-cultural corpus representing 2 h of speech, singing, and nonverbal vocalizations. We show that, whereas speech is relatively low-pitched and tonal with mostly regular phonation, singing and especially nonverbal vocalizations vary enormously in pitch and often display harsh-sounding, irregular phonation owing to nonlinear phenomena. The evolution of complex supralaryngeal articulation, with its spectro-temporal modulation, has been critical for speech, yet has not significantly constrained laryngeal source modulation. In contrast, articulation is very limited in nonverbal vocalizations, which predominantly contain minimally articulated open vowels and rapid temporal modulation in the roughness range. We infer that source modulation works best for conveying affect, whereas filter modulation mainly facilitates semantic communication.
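
The acoustic comparison described here rests on standard pitch and phonation measures. As a minimal, hypothetical sketch (not the authors' pipeline), the fundamental frequency and pitch variability of a single recording could be estimated with librosa's probabilistic YIN tracker; the filename and the descriptor choices below are placeholders.

```python
import numpy as np
import librosa

# Load a mono recording (hypothetical file) and track the fundamental
# frequency with probabilistic YIN; the wide fmin/fmax range is meant
# to span low-pitched speech through high-pitched screams.
y, sr = librosa.load("vocalization.wav", sr=16000, mono=True)
f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=2000, sr=sr)
f0 = f0[voiced_flag]  # keep voiced frames only (unvoiced frames are NaN)

# Simple descriptors in the spirit of the comparison: median pitch and
# pitch spread in semitones (a log scale robust to register differences).
semitones = 12 * np.log2(f0 / np.median(f0))
iqr = np.percentile(semitones, 75) - np.percentile(semitones, 25)
print(f"median F0 = {np.median(f0):.0f} Hz, pitch spread (IQR) = {iqr:.1f} st")
```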

Language: English

Citations: 20

Biological principles for music and mental health
Daniel L. Bowling

Translational Psychiatry, Journal Year: 2023, Volume and Issue: 13(1)

Published: Dec. 4, 2023

Abstract: Efforts to integrate music into healthcare systems and wellness practices are accelerating, but the biological foundations supporting these initiatives remain underappreciated. As a result, music-based interventions are often sidelined in medicine. Here, I bring together advances in research from neuroscience, psychology, and psychiatry to bridge music's specific foundations in human biology with its therapeutic applications. The framework I propose organizes the neurophysiological effects of music around four core elements of human musicality: tonality, rhythm, reward, and sociality. For each, I review key concepts, biological bases, and evidence of clinical benefits. Within this framework, I outline a strategy to increase music's impact on health, based on standardizing treatments and aligning them with individual differences in responsivity to these musical elements. I argue that an integrated understanding of musicality—describing each element's functional origins, development, phylogeny, and neural bases—is critical to advancing rational applications of music in mental health and wellness.

Language: English

Citations: 16

From Perception to Pleasure
Robert J. Zatorre

Oxford University Press eBooks, Journal Year: 2023, Volume and Issue: unknown

Published: Nov. 23, 2023

Abstract: How does the perception of abstract tonal patterns—music—lead to the pleasure we experience from these sounds? The answer presented in this book is that musical pleasure arises from interactions between cortical loops that enable the processing of sound patterns and subcortical circuits responsible for reward valuation. The auditory cortex and its ventral-stream connections encode acoustical features and their relationships, maintain them in working memory, and form internal representations of statistical regularities, from which predictions are made about how sounds evolve over time. Disruption of this pathway leads to amusia. The dorsal stream allows sensory-motor transformations, production, and metrical representation, leading to predictions of when events will occur. These predictive processes play a central role in creating musical expectancies, which are transmitted to the dopaminergic reward system, where hedonic responses are generated according to how well an event fits with predictions. Pleasure is linked to a balance between predictability and surprise in musical patterns. Disconnection between perceptual and reward systems is associated with specific musical anhedonia. Engagement of the reward system is also related to movement and vocal cues, social factors, preference, and emotion regulation.

Language: English

Citations: 11

Audio‐visual concert performances synchronize audience's heart rates
Anna Czepiel, Lauren K. Fink, Mathias Scharinger, et al.

Annals of the New York Academy of Sciences, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 3, 2025

Abstract: People enjoy engaging with music. Live music concerts provide an excellent option to investigate real-world experiences, and at the same time, neurophysiological synchrony can be used to assess dynamic engagement. In the current study, we assessed engagement in a live concert setting using synchrony of cardiorespiratory measures, comparing inter-subject correlation, stimulus-response correlation, and phase coherence. As engagement might be enhanced by seeing musicians perform, we presented audiences with audio-only (AO) and audio-visual (AV) piano performances. Only the correlation measures were above chance level. When time-averaged across conditions, AV performances evoked higher inter-subject correlation of heart rate (ISC-HR). However, synchrony averaged across pieces did not correspond to self-reported engagement. On the other hand, time-resolved analyses show that synchronized heart rate (HR) deceleration-acceleration patterns, typical of an "orienting response" (an index of directed attention), occurred at salient events and section boundaries. That is, seeing the musicians perform heightened audience synchrony at structurally important moments in Western classical music. Overall, we show that multisensory information shapes dynamic audience engagement. By comparing different synchrony measures, we further highlight the advantages of time-series analysis, specifically ISC-HR, as a robust measure of holistic musical listening experiences in naturalistic settings.
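
The abstract does not spell out the ISC computation; one common leave-one-out formulation is sketched below, under that assumption and with toy data rather than real cardiac recordings.

```python
import numpy as np

def isc(signals):
    """Leave-one-out inter-subject correlation: correlate each
    subject's series with the mean of all other subjects' series,
    then average the resulting Pearson r values.
    signals: (n_subjects, n_timepoints), e.g. heart-rate series
    resampled to a common time grid."""
    n = signals.shape[0]
    rs = []
    for i in range(n):
        others = np.delete(signals, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(signals[i], others)[0, 1])
    return float(np.mean(rs))

# Toy usage: 10 "audience members" sharing a slow common component
# (e.g. piece-driven arousal) plus individual noise.
rng = np.random.default_rng(0)
common = np.sin(np.linspace(0, 8 * np.pi, 300))
hr = 70 + 5 * common + rng.normal(0, 2, size=(10, 300))
print(f"ISC-HR = {isc(hr):.2f}")
```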

Language: English

Citations: 0

CoVox: A dataset of contrasting vocalizations
Camila Bruder, Pauline Larrouy-Maestri

Behavior Research Methods, Journal Year: 2025, Volume and Issue: 57(5)

Published: April 11, 2025

Abstract: The human voice is remarkably versatile and can vary greatly in sound depending on how it is used. An increasing number of studies have addressed the differences and similarities between the singing and the speaking voice. However, finding stimulus material that is at the same time controlled and ecologically valid is challenging, and most datasets lack variability in terms of the vocal styles performed by the same voices. Here, we describe a curated stimulus set of vocalizations in which 22 female singers performed melody excerpts in three contrasting singing styles (as a lullaby, as a pop song, and as an opera aria) and spoke the text aloud in two speaking styles (as if to an adult or to an infant). All productions were made with the songs' original lyrics, in Brazilian Portuguese, and with a /lu/ sound. This dataset of 1320 recordings was validated through a forced-choice lab experiment (N = 25 for each stimulus) in which lay listeners could recognize the intended vocalization style with high accuracy (proportion of correct recognition above 69% for all styles). We also provide an acoustic characterization of the stimuli, depicting clear acoustic profiles for each style of vocalization. The recordings are freely available under a Creative Commons license and can be downloaded at https://osf.io/cgexn/.

Language: English

Citations: 0

Virtual Universals and Creativity: A New Approach to Music Cognition
Gavin Steingo, Asif A. Ghazanfar

Music & Science, Journal Year: 2025, Volume and Issue: 8

Published: May 1, 2025

If music is so varied, how do we understand it? Is there anything universal about music? And if not, can there be a cognitive science of music? Radically limiting examples so that they fit certain frameworks, and then calling everything else an exception, is not helpful. We propose a redefinition of music that is based not on specific features but rather on creative experimentation with what we term "virtual universals." These are universals that exert force even when not actualized or sounded. Our argument has applicability beyond the domain of music; in principle, the ideas in this paper could be applied to any human behavior.

Language: English

Citations: 0

Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music
Alice Vivien Barchet, Molly J. Henry, Claire Pelofi, et al.

Communications Psychology, Journal Year: 2024, Volume and Issue: 2(1)

Published: Jan. 3, 2024

Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in their dominant rhythmic structure. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception task and a synchronization task involving syllable and piano tone sequences, performed with motor effectors typically associated with speech (whispering) and music (finger-tapping), were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although performance was generally better at slow rates, the two domains exhibited distinct rate preferences. Finger-tapping was advantaged compared to whispering at slow but not at faster rates, with synchronization being effector-dependent at slow rates but highly correlated across effectors at faster rates. Perception was predicted by a general finger-tapping component. Our data suggest partially independent timing mechanisms for speech and music, possibly reflecting differential recruitment of cortical circuitry.
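
The abstract does not specify the synchronization metric; a standard option in sensorimotor synchronization work is the circular consistency of produced event times relative to the stimulus period. The sketch below assumes that formulation and uses simulated tap times.

```python
import numpy as np

def sync_consistency(event_times, period):
    """Map each produced event (tap or whispered syllable onset) to a
    phase within the stimulus period; return the mean resultant vector
    length R in [0, 1] (1 = perfectly locked, 0 = random timing)."""
    phases = 2 * np.pi * (np.asarray(event_times) % period) / period
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Toy usage at the two rates from the study: ~2 Hz (0.5 s period)
# and ~4.5 Hz (~0.22 s period), with the same absolute timing jitter.
rng = np.random.default_rng(1)
for rate, period in [(2.0, 0.5), (4.5, 1 / 4.5)]:
    taps = np.arange(40) * period + rng.normal(0, 0.02, 40)
    print(f"{rate:.1f} Hz: R = {sync_consistency(taps, period):.2f}")
```

With equal absolute jitter, R drops at the faster rate, which is one simple way such tasks expose rate-specific timing limits.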

Language: English

Citations: 1

Diagnostic accuracy of deep learning using speech samples in depression: a systematic review and meta-analysis
Lidan Liu, Lu Liu, Hatem A. Wafa, et al.

Journal of the American Medical Informatics Association, Journal Year: 2024, Volume and Issue: 31(10), P. 2394 - 2404

Published: July 16, 2024

Abstract: Objective: This study aims to conduct a systematic review and meta-analysis of the diagnostic accuracy of deep learning (DL) using speech samples in depression. Materials and Methods: We included studies reporting diagnostic results of DL algorithms for depression using speech data, published from inception to January 31, 2024, in the PubMed, Medline, Embase, PsycINFO, Scopus, IEEE, and Web of Science databases. Pooled accuracy, sensitivity, and specificity were obtained by random-effects models. The Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) was used to assess risk of bias. Results: A total of 25 studies met the inclusion criteria, and 8 of them were included in the meta-analysis. The pooled estimates of accuracy, specificity, and sensitivity for depression detection models were 0.87 (95% CI, 0.81-0.93), 0.85 (95% CI, 0.78-0.91), and 0.82 (95% CI, 0.71-0.94), respectively. When stratified by model structure, the highest pooled accuracy was 0.89 (95% CI, 0.81-0.97), in the handcrafted-feature group. Discussion: To our knowledge, this is the first meta-analysis of the performance of DL in depression detection using speech samples. All included studies used convolutional neural network (CNN) models, posing problems for deciphering the performance of other DL algorithms. Handcrafted-feature models performed better than end-to-end models in depression detection. Conclusions: The application of DL with speech samples provides a useful tool for depression detection; CNN models with handcrafted acoustic features could help improve performance. Protocol registration: the protocol was registered with PROSPERO (CRD42023423603).
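
As an illustration of the pooling step, here is a sketch of generic DerSimonian-Laird random-effects pooling of proportions on the logit scale; this is not the authors' exact code, and the per-study values below are hypothetical.

```python
import numpy as np

def pool_random_effects(props, ns):
    """DerSimonian-Laird random-effects pooling of proportions
    (e.g., per-study sensitivities), computed on the logit scale.
    props: per-study proportions; ns: per-study sample sizes.
    Returns the pooled proportion and its 95% CI."""
    p = np.asarray(props, dtype=float)
    n = np.asarray(ns, dtype=float)
    y = np.log(p / (1 - p))                # logit of each study estimate
    v = 1 / (n * p * (1 - p))              # approximate variance of the logit
    w = 1 / v                              # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)     # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
    w_star = 1 / (v + tau2)                # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    inv = lambda x: 1 / (1 + np.exp(-x))   # back-transform to proportion
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se))

# Toy usage with hypothetical per-study sensitivities and sample sizes.
est, (lo, hi) = pool_random_effects([0.80, 0.85, 0.78, 0.90], [50, 80, 60, 120])
print(f"pooled = {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```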

Language: English

Citations: 1

Appreciation of singing and speaking voices is highly idiosyncratic
Camila Bruder, Klaus Frieler, Pauline Larrouy-Maestri, et al.

Royal Society Open Science, Journal Year: 2024, Volume and Issue: 11(11)

Published: Nov. 1, 2024

Voice preferences are an integral part of interpersonal interactions and shape how people connect with each other. While a large number of studies have investigated the mechanisms behind (speaking) voice attractiveness, very little research has been dedicated to other types of vocalizations. In this Registered Report, we proposed to investigate voice preferences with an integrative approach. To this end, we used a newly recorded and validated stimulus set of contrasting vocalizations by 22 highly trained female singers speaking and singing the same material (in Brazilian Portuguese) in five styles (sung as a lullaby, as a pop song, or as an opera aria; spoken aloud as if directed to an adult audience or to an infant). We asked 62 participants to rate these vocalizations in terms of how much they liked them, and compared the amount of shared taste (that is, how much participants agreed in their preferences) across styles. We found highly idiosyncratic preferences in all styles. Our predictions concerning shared taste were not confirmed: although it was higher for lullaby than for pop singing, it was unexpectedly high for operatic singing and for infant-directed relative to adult-directed speech. Conversely, our prediction of limited consistency of average preferences across styles was to some extent confirmed, contradicting sexual selection-based ideas of voices as 'backup' signals of individual fitness. Our findings draw attention to the role of individual differences and highlight the need for a broader approach to understanding the mechanisms underlying voice preferences. Stage 1 recommendation and review history: https://rr.peercommunityin.org/articles/rec?id=357. Stage 2: https://rr.peercommunityin.org/articles/rec?id=802
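
"Shared taste" can be operationalized in several ways; one simple, hypothetical version (not necessarily the authors' exact measure) is the mean pairwise Spearman correlation between raters' preference profiles, sketched here with simulated ratings.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def shared_taste(ratings):
    """Mean pairwise Spearman correlation between raters' preference
    profiles; higher values indicate more agreement (shared taste).
    ratings: (n_raters, n_stimuli) liking ratings."""
    pairs = combinations(range(ratings.shape[0]), 2)
    rhos = [spearmanr(ratings[i], ratings[j]).correlation for i, j in pairs]
    return float(np.mean(rhos))

# Toy usage: 62 raters x 22 voices with a weak common preference
# signal buried in strong individual noise (idiosyncratic taste).
rng = np.random.default_rng(3)
common = rng.normal(size=22)
ratings = 0.3 * common + rng.normal(size=(62, 22))
print(f"shared taste = {shared_taste(ratings):.2f}")
```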

Language: English

Citations: 1

Spectrotemporal cues and attention jointly modulate fMRI network topology for sentence and melody perception
Felix Haiduk, Robert J. Zatorre, Lucas Benjamin, et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: March 6, 2024

Abstract: Speech and music are two fundamental modes of human communication. Lateralisation of key processes underlying their perception has been related both to distinct sensitivity to low-level spectrotemporal acoustic features and to top-down attention. However, the interplay between bottom-up and top-down processes needs to be clarified. In the present study, we investigated the contribution of acoustics and of attention to melodies or sentences to lateralisation in fMRI functional network topology. We used sung speech stimuli selectively filtered in the temporal and spectral modulation domains, with crossed and balanced verbal and melodic content. Perception of sentences decreased with degradation of temporal information, whereas perception of melodies decreased with spectral degradation. Applying graph theoretical metrics to connectivity matrices, we found that local clustering, reflecting functional specialisation, linearly increased when acoustic cues crucial for the task goal were incrementally degraded. These effects occurred in a bilateral fronto-temporo-parietal network for processing temporally degraded sentences, and in right auditory regions for processing spectrally degraded melodies. In contrast, global topology remained stable across conditions. These findings suggest that lateralisation partially depends on the interaction of acoustic cues with task goals under attentional demands.
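
The "local clustering" referred to here is a standard graph metric. Below is a minimal sketch using networkx, with an assumed proportional threshold and binarization of the connectivity matrix (details the abstract does not specify), applied to toy data.

```python
import numpy as np
import networkx as nx

def local_clustering(conn, density=0.2):
    """Binarize a symmetric connectivity matrix at a target edge
    density (keeping the strongest correlations), then compute each
    node's local clustering coefficient."""
    n = conn.shape[0]
    c = conn.copy()
    np.fill_diagonal(c, -np.inf)               # ignore self-connections
    upper = c[np.triu_indices(n, k=1)]
    thresh = np.quantile(upper, 1 - density)   # cut-off keeping top `density` edges
    adj = (c >= thresh).astype(int)
    G = nx.from_numpy_array(np.maximum(adj, adj.T))  # enforce symmetry
    return nx.clustering(G)                    # dict: node -> coefficient

# Toy usage: a random symmetric "connectivity" matrix for 20 regions.
rng = np.random.default_rng(2)
m = rng.uniform(-1, 1, (20, 20))
m = (m + m.T) / 2
coeffs = local_clustering(m)
print(f"mean local clustering = {np.mean(list(coeffs.values())):.2f}")
```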

Language: English

Citations: 1