Robustness of a pacemaker to control chaotic oscillations in a two-mass model of the vocal folds under turbulence and muscle twitch fluctuations and vocal tremor
Oriol Guasch

Communications in Nonlinear Science and Numerical Simulation, Journal Year: 2024, Volume and Issue: 140, P. 108361 - 108361

Published: Sept. 24, 2024

Language: English

Spectro-temporal acoustical markers differentiate speech from song across cultures
Philippe Albouy, Samuel A. Mehr, Roxane S. Hoyer et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: June 6, 2024

Abstract Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, with vocalizations classified solely from their spectro-temporal features in all societies. Listeners unfamiliar with these cultures classify the vocalizations using similar cues as the machine-learning algorithm. Finally, spectro-temporal modulations are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation—a key feature of auditory neuronal tuning—accounts for a fundamental difference between the two categories.
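
The study's central representation, a spectro-temporal modulation spectrum, can be approximated in a few lines. Below is a minimal Python sketch, not the authors' pipeline: the input file name, STFT window settings and the use of numpy/scipy are placeholder assumptions, and the spectral-modulation axis is in cycles/Hz rather than the cycles/octave used in the paper.

```python
# Minimal sketch: spectro-temporal modulation spectrum of one recording.
# Assumptions: a WAV file "vocalization.wav"; numpy/scipy available.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

rate, x = wavfile.read("vocalization.wav")       # hypothetical input file
x = x.astype(float)
if x.ndim > 1:                                   # mix to mono if stereo
    x = x.mean(axis=1)

# Log-magnitude spectrogram (window/hop chosen for illustration only)
f, t, Z = stft(x, fs=rate, nperseg=512, noverlap=384)
log_spec = np.log1p(np.abs(Z))

# Modulation power spectrum: 2D FFT of the (mean-removed) spectrogram.
# Rows index spectral modulation (cycles/Hz), columns temporal modulation (Hz).
mps = np.abs(np.fft.fftshift(np.fft.fft2(log_spec - log_spec.mean())))

spectral_mod = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[0], d=f[1] - f[0]))
temporal_mod = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[1], d=t[1] - t[0]))
print(mps.shape, temporal_mod.min(), temporal_mod.max())
```

Song and speech occupy distinguishable regions of this modulation plane, which is the contrast the machine-learning classifier in the study exploits.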

Language: English

Citations: 23

Globally, songs and instrumental melodies are slower and higher and use more stable pitches than speech: A Registered Report
Yuto Ozaki, Adam Tierney, Peter Q. Pfordresher et al.

Science Advances, Journal Year: 2024, Volume and Issue: 10(20)

Published: May 15, 2024

Both music and language are found in all known human societies, yet no studies have compared the similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of six preregistered predictions, five were strongly supported: relative to speech, songs use (i) higher pitch, (ii) a slower temporal rate, and (iii) more stable pitches, while both used similar (iv) pitch interval sizes and (v) timbral brightness. Exploratory analyses suggest that these features vary along a "musi-linguistic" continuum when recited lyrics are included. Our study provides strong empirical evidence of cross-cultural regularities in song and speech.
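
For readers who want to compute comparable features on their own recordings, here is a hedged sketch of three of the supported measures (pitch height, a crude temporal rate, and pitch stability). It is not the Registered Report's analysis pipeline; the input file, librosa functions and pitch-range limits are assumptions.

```python
# Minimal sketch: pitch height, event rate, and pitch stability for one recording.
# Assumptions: librosa installed, a file "sample.wav"; not the paper's pipeline.
import numpy as np
import librosa

y, sr = librosa.load("sample.wav", sr=None)        # hypothetical input file

# Fundamental frequency track (probabilistic YIN)
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=60.0, fmax=600.0, sr=sr)
f0 = f0[~np.isnan(f0)]

pitch_height = np.median(f0)                       # Hz; songs tend to be higher
pitch_instability = np.std(np.diff(np.log2(f0)))   # small = more stable pitches

# Crude temporal rate: detected onsets per second (syllable/note proxy)
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
temporal_rate = len(onsets) / (len(y) / sr)        # songs tend to be slower

print(f"median f0 = {pitch_height:.1f} Hz, "
      f"pitch instability = {pitch_instability:.3f}, "
      f"rate = {temporal_rate:.2f} events/s")
```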

Language: English

Citations: 21

Exploring nonlinear phenomena in animal vocalizations through oscillator theory
Marta del Olmo, Christoph Schmal, Hanspeter Herzel et al.

Philosophical Transactions of the Royal Society B Biological Sciences, Journal Year: 2025, Volume and Issue: 380(1923)

Published: April 3, 2025

Animal vocalizations comprise a rich array of complex sounds that exhibit nonlinear phenomena (NLP), which have fascinated researchers for decades. From the melodic songs of birds to the clicks and whistles of dolphins, many species have been found to produce such vocalizations, offering a valuable perspective on the mechanisms underlying sound production and on potential adaptive functions. By leveraging principles of oscillator theory and nonlinear dynamics, animal vocalizations, which are based on coupled oscillators, can be conveniently described and classified. We review the basic ingredients of self-sustained oscillations and how different NLP emerge. We discuss important terms in the context of oscillator theory: attractor types, phase space, bifurcations and Arnold tongue diagrams. Through a comparative analysis of observed NLP and bifurcation diagrams, our study reviews how the tools of nonlinear dynamics can provide insights into the intricate complexity of animal vocalizations, as well as into the evolutionary pressures and strategies that have shaped the diverse communication systems of the animal kingdom. This article is part of the theme issue ‘Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions’.
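
As a concrete illustration of the coupled-oscillator framework, the sketch below integrates two coupled self-sustained (van der Pol) oscillators, a toy stand-in for two interacting sound sources, and checks whether they lock to a common frequency. The model form, parameters and detuning are illustrative assumptions rather than values from the review: strong coupling yields 1:1 entrainment (inside an Arnold tongue), while weak coupling with large detuning leaves two incommensurate frequencies, a biphonation-like regime.

```python
# Minimal sketch: two coupled van der Pol oscillators as a toy "two coupled
# sound sources" model. Coupling `k` and frequency mismatch `detuning` are
# the knobs; all parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def coupled_vdp(t, state, mu=1.0, w1=1.0, detuning=1.3, k=0.05):
    x1, v1, x2, v2 = state
    w2 = w1 * detuning
    dv1 = mu * (1 - x1**2) * v1 - w1**2 * x1 + k * (x2 - x1)
    dv2 = mu * (1 - x2**2) * v2 - w2**2 * x2 + k * (x1 - x2)
    return [v1, dv1, v2, dv2]

t_eval = np.linspace(0, 400, 40000)
sol = solve_ivp(coupled_vdp, (0, 400), [1.0, 0.0, 0.5, 0.0],
                t_eval=t_eval, max_step=0.05)

# Compare the dominant frequencies of the two oscillators after transients
x1, x2 = sol.y[0, 20000:], sol.y[2, 20000:]
freqs = np.fft.rfftfreq(x1.size, d=t_eval[1] - t_eval[0])
f1 = freqs[np.argmax(np.abs(np.fft.rfft(x1 - x1.mean())))]
f2 = freqs[np.argmax(np.abs(np.fft.rfft(x2 - x2.mean())))]
print(f"dominant frequencies: {f1:.3f} vs {f2:.3f} (equal => 1:1 locking)")
```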

Language: English

Citations: 7

Nonlinear acoustic phenomena affect the perception of pain in human baby cries
Siloé Corvin, Mathilde Massenet, Angélique Hardy et al.

Philosophical Transactions of the Royal Society B Biological Sciences, Journal Year: 2025, Volume and Issue: 380(1923)

Published: April 3, 2025

What makes the painful cries of human babies so difficult to ignore? Vocal traits known as ‘nonlinear phenomena’ are prime candidates. These acoustic irregularities are common in babies’ cries and are typically associated with high levels of distress or pain. Despite their vital importance for a baby’s survival, how these nonlinear phenomena drive pain perception in adult listeners has not previously been systematically investigated. Here, by combining acoustic analyses of cries recorded in different contexts with playback experiments using natural and synthetic cries, we show that baby cries expressing acute pain are characterized by a pronounced presence of nonlinear phenomena, and that these phenomena drive pain evaluation by adult listeners. While listeners rated all cries presenting any nonlinear phenomena as expressing more pain, they were particularly sensitive to chaos. Our results thus show that nonlinear phenomena, and especially chaos, encode information about pain that may be critically helpful for the development of vocal-based tools for monitoring babies' needs in the context of paediatric care. This article is part of the theme issue ‘Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions’.

Language: English

Citations: 7

‘Monkey yodels’—frequency jumps in New World monkey vocalizations greatly surpass human vocal register transitions
Christian T. Herbst, Isao T. Tokuda, Takeshi Nishimura et al.

Philosophical Transactions of the Royal Society B Biological Sciences, Journal Year: 2025, Volume and Issue: 380(1923)

Published: April 3, 2025

We investigated the causal basis of abrupt frequency jumps in a unique database of New World monkey vocalizations. We used a combination of acoustic and electroglottographic recordings in vivo, excised larynx investigations of vocal fold dynamics, and computational modelling. We particularly attended to the contribution of vocal membranes: thin upward extensions of the vocal folds found in most primates but absent in humans. In three of six species, we observed two distinct modes of vibration. The first, involving vocal fold vibration alone, produced low-frequency oscillations and is analogous to that underlying human phonation. The second, incorporating the vocal membranes, resulted in much higher-frequency oscillation. Abrupt fundamental frequency shifts between these modes were observed in all datasets. While these data are reminiscent of the rapid register transitions of certain human singing styles (e.g. yodelling), the jumps are considerably larger in the nonhuman primates studied. Our data suggest that peripheral modifications of laryngeal anatomy provide an important source of variability and complexity in the vocal repertoires of primates. We further propose that call repertoire is crucially related to a species’ ability to vocalize with different laryngeal mechanisms, or registers. This article is part of the theme issue ‘Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions’.

Language: English

Citations: 6

How to analyse and manipulate nonlinear phenomena in voice recordings
Andrey Anikin, Christian T. Herbst

Philosophical Transactions of the Royal Society B Biological Sciences, Journal Year: 2025, Volume and Issue: 380(1923)

Published: April 3, 2025

We address two research applications in this methodological review: starting from an audio recording, the goal may be to characterize nonlinear phenomena (NLP) at the level of voice production or to test their perceptual effects on listeners. A crucial prerequisite for this work is the ability to detect NLP in acoustic signals, which can then be correlated with biologically relevant information about the caller and with listeners’ reactions. NLP are often annotated manually, but this is labour-intensive and not very reliable, although we describe potentially helpful advanced visualization aids such as reassigned spectrograms and phasegrams. Objective acoustic features are also useful, including general descriptives (harmonics-to-noise ratio, cepstral peak prominence, vocal roughness), statistics derived from nonlinear dynamics (correlation dimension) and NLP-specific measures (depth of amplitude modulation in subharmonics). On the perception side, playback studies greatly benefit from tools for directly manipulating NLP in recordings. Adding frequency jumps, amplitude modulation and subharmonics is relatively straightforward. Creating biphonation, imitating chaos or removing NLP from a recording is more challenging, but feasible with parametric synthesis. We review the most promising algorithms for analysing and manipulating NLP and provide detailed examples with audio files and R code in the supplementary material. This article is part of the theme issue ‘Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions’.
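
The review's worked examples are provided as R code in its supplementary material; the hedged Python sketch below only illustrates how simple the 'straightforward' manipulations are. The input file, fundamental frequency, modulation rates and depths are placeholder assumptions, and this is not the authors' implementation.

```python
# Minimal sketch: add amplitude modulation and f0/2 "subharmonic" sidebands
# to a voiced recording. Input file, f0, rates and depths are placeholders.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("voiced.wav")            # hypothetical mono input file
x = x.astype(float)
x /= np.max(np.abs(x))
t = np.arange(x.size) / rate

# 1) Slow amplitude modulation (rough, tremor-like quality around 40 Hz)
am_rate, am_depth = 40.0, 0.4
x_am = x * (1.0 + am_depth * np.sin(2 * np.pi * am_rate * t))

# 2) Subharmonics: modulate at f0/2 so sidebands appear halfway between
#    harmonics (period doubling). f0 is assumed known and roughly constant.
f0, sub_depth = 220.0, 0.3
x_sub = x * (1.0 + sub_depth * np.sin(2 * np.pi * (f0 / 2) * t))

for name, y in [("am.wav", x_am), ("subharmonics.wav", x_sub)]:
    y = y / np.max(np.abs(y))                   # renormalize to avoid clipping
    wavfile.write(name, rate, (y * 32767).astype(np.int16))
```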

Language: English

Citations: 5

Applying nonlinear dynamics to the voice: a historical perspective
W. Tecumseh Fitch

Philosophical Transactions of the Royal Society B Biological Sciences, Journal Year: 2025, Volume and Issue: 380(1923)

Published: April 3, 2025

The recognition that nonlinear phenomena, including subharmonics, bifurcations and deterministic chaos, are present in human and animal vocalizations is a relatively recent one. I give a brief history of this revolution in our understanding of the voice, based on interviews with some of the key players and on personal experience. Most of the concepts and mathematical principles of nonlinear dynamics were already well worked out by the early 1980s. In the 1990s, the physicist Hanspeter Herzel and colleagues in Berlin recognized that these were applicable to human vocalizations, initially baby cries. The physics and physiology underlying many vocal phenomena had remained mysterious up until then. This insight was later generalized to animal vocalizations. Nonlinear phenomena play a peripheral role in most human vocal communication but are a common feature of a broad range of vocalizations, and the quantitative study of their production and perception has now fuelled important and exciting advances in our understanding of vocal communication. I concentrate on how the core ideas came into focus, their initial application to an ever-wider circle of call types and species, and end with a prospectus for the future. This article is part of the theme issue ‘Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions’.

Language: English

Citations: 4

Nonlinear vocal phenomena and speech intelligibility.
Andrey Anikin, David Reby, Katarzyna Pisanski et al.

PubMed, Journal Year: 2025, Volume and Issue: 380(1923), P. 20240254 - 20240254

Published: April 3, 2025

At some point in our evolutionary history, humans lost vocal membranes and air sacs, representing an unexpected simplification of the vocal apparatus relative to other great apes. One hypothesis is that these simplifications represent anatomical adaptations for speech because a simpler larynx provides a suitably stable and tonal source with fewer nonlinear phenomena (NLP). The key assumption that NLP reduce intelligibility is indirectly supported by studies of dysphonia, but it has not been experimentally tested. Here, we manipulate NLP in stimuli ranging from single vowels to sentences, showing that the voice source needs to be stable, but not necessarily tonal, to be readily understood. When the task is to discriminate synthesized monophthong and diphthong vowels, continuous NLP (subharmonics, amplitude modulation and even deterministic chaos) can actually improve vowel perception in high-pitched voices, likely because the resulting dense spectrum reveals formant transitions. Rough-sounding voices also remain highly intelligible when NLP are added to recorded words and sentences. In contrast, voicing interruptions and pitch jumps dramatically reduce intelligibility, interfering with phonemic contrasts and normal intonation. We argue that NLP were not eliminated from the human vocal repertoire as we evolved speech, but only brought under better control. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
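
As an illustration of the kind of disruptive manipulation the study found most damaging, the sketch below inserts brief voicing interruptions by silencing short segments at regular intervals. This is not the authors' stimulus-generation code; the input file and gap timing are placeholder assumptions.

```python
# Minimal sketch: insert brief voicing interruptions (silent gaps) into a
# recording. Gap duration/spacing and the input file are placeholders.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("sentence.wav")          # hypothetical input file

gap_dur, period = 0.05, 0.25                    # 50 ms silent gap every 250 ms
gap_len, period_len = int(gap_dur * rate), int(period * rate)

y = x.copy()
for start in range(0, len(y), period_len):
    y[start:start + gap_len] = 0                # silence a short segment

wavfile.write("sentence_interrupted.wav", rate, y)
```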

Language: English

Citations: 4

Spectro-temporal acoustical markers differentiate speech from song across cultures
Philippe Albouy, Samuel A. Mehr, Roxane S. Hoyer et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: Jan. 29, 2023

Abstract Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, with vocalizations classified solely from their spectro-temporal features in all societies. Listeners unfamiliar with these cultures classify the vocalizations using similar cues as the machine-learning algorithm. Finally, spectro-temporal modulations are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation—a key feature of auditory neuronal tuning—accounts for a fundamental difference between the two categories. Two-Sentence Summary: What distinguishes singing from speaking? The authors show that consistent acoustical features are sufficient to differentiate the two throughout the world.

Language: English

Citations: 19

Formant analysis of vertebrate vocalizations: achievements, pitfalls, and promises
W. Tecumseh Fitch, Andrey Anikin, Katarzyna Pisanski et al.

BMC Biology, Journal Year: 2025, Volume and Issue: 23(1)

Published: April 6, 2025

Abstract When applied to vertebrate vocalizations, source-filter theory, initially developed for human speech, has revolutionized our understanding of animal communication, resulting in major insights into the form and function of animal sounds. However, animal calls and nonverbal vocalizations can differ qualitatively from human speech, often having more chaotic and higher-frequency sources, making formant measurement challenging. We review the considerable achievements of this "formant revolution" in vocal communication research, and then highlight several important methodological problems in formant analysis. We offer concrete recommendations for effectively applying source-filter theory to non-speech vocalizations and discuss promising avenues for future research in this area. In brief: Formants (vocal tract resonances) play key roles in vertebrate vocal communication, offering researchers exciting promise but also potential pitfalls.
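
For orientation, here is a hedged sketch of the standard LPC root-finding approach to formant measurement that the review discusses. The input file, frame selection, LPC order and bandwidth cut-offs are conventional illustrative choices, not recommendations from the paper, and the authors' caveats about chaotic or high-pitched sources still apply.

```python
# Minimal sketch: estimate formant candidates from one voiced frame via LPC
# root-finding. Input file, frame position and thresholds are placeholders.
import numpy as np
import librosa

y, sr = librosa.load("call.wav", sr=None)       # hypothetical input file
frame = y[int(0.10 * sr):int(0.13 * sr)]        # a 30 ms frame, assumed voiced
frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])   # pre-emphasis
frame = frame * np.hamming(frame.size)

order = int(2 + sr / 1000)                      # rule of thumb: 2 + fs in kHz
a = librosa.lpc(frame, order=order)             # LPC polynomial coefficients

roots = np.roots(a)
roots = roots[np.imag(roots) > 0]               # keep one of each conjugate pair
freqs = np.angle(roots) * sr / (2 * np.pi)      # pole angles -> frequencies (Hz)
bws = -np.log(np.abs(roots)) * sr / np.pi       # pole radii -> bandwidths (Hz)

# Keep well-defined resonances above the source's low-frequency region
formants = sorted(f for f, b in zip(freqs, bws) if f > 90 and b < 400)
print("formant candidates (Hz):", [round(f) for f in formants[:4]])
```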

Language: English

Citations: 0