Can a human sing with an unseen artificial partner? Coordination dynamics when singing with an unseen human or artificial partner

Rina Nishiyama,

Tetsushi Nonaka

Frontiers in Robotics and AI, Year: 2024, Issue: 11

Published: Dec. 9, 2024

This study investigated whether a singer's coordination patterns differ when singing with an unseen human partner versus an unseen artificial partner (VOCALOID 6 voice synthesis software). We used cross-correlation analysis to compare the amplitude envelope time series of the partner's and the participant's voices. We also conducted a Granger causality test to determine whether the partner's past behavior helps predict the participants' future behavior, or if the reverse is true. We found more pronounced characteristics of anticipatory synchronization and increased similarity in the unfolding dynamics of amplitude envelopes in the human-partner condition compared to the artificial-partner condition, despite the tempo fluctuations present in the human-partner condition. The results suggest that subtle qualities of the voice, possibly stemming from dynamics intrinsic to the body, may contain information that enables agents to align their behavior with that of a partner.
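The cross-correlation step described in the abstract can be sketched in a few lines (a minimal illustration on synthetic standardized envelopes, not the authors' actual pipeline; the function name, lag convention, and synthetic data are assumptions):

```python
import numpy as np

def peak_xcorr_lag(x, y, max_lag):
    """Normalized cross-correlation between two envelope time series,
    scanned over lags; returns (lag, r) at the peak.
    A negative lag means x leads y (anticipatory behavior of x)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            r = np.mean(x[:lag] * y[-lag:])
        elif lag > 0:
            r = np.mean(x[lag:] * y[:-lag])
        else:
            r = np.mean(x * y)
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Synthetic check: y reproduces x five samples later, so x "anticipates"
# y and the correlation peaks at lag -5 under the convention above.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 5)
lag, r = peak_xcorr_lag(x, y, max_lag=20)
```

An anticipatory-synchronization analysis looks for such a leader-follower asymmetry in the lag of the peak; a Granger causality test asks the related question of whether adding the partner's past improves prediction of the participant's future beyond the participant's own past.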

Language: English

Cortical tracking of speech is reduced in adults who stutter when listening for speaking
Simone Gastaldon, Pierpaolo Busan, Nicola Molinaro

et al.

bioRxiv (Cold Spring Harbor Laboratory), Year: 2024, Issue: unknown

Published: Feb. 25, 2024

Abstract. Purpose: To investigate cortical tracking of speech (CTS) in adults who stutter (AWS) compared to typically fluent adults (TFA) and to test the involvement of the speech-motor network in tracking auditory information. Method: Participants' EEG was recorded while they either had to simply listen to sentences (listening only) or complete them by naming a picture (listening-for-speaking), thus manipulating the upcoming involvement of speech production. We analyzed speech-brain coherence and brain connectivity during listening. Results: During the listening-for-speaking task, AWS exhibited reduced CTS in the 3-5 Hz range (theta), corresponding to the syllabic rhythm. The effect was localized in left inferior parietal and right pre-/supplementary motor regions. Connectivity analyses revealed that TFA had stronger information transfer in the theta range in both tasks in fronto-temporo-parietal regions. When considering the whole sample of participants, increased information transfer from superior temporal cortex to sensorimotor regions correlated with faster naming times in the listening-for-speaking task. Conclusions: Atypical functioning of the speech-motor network in stuttering also impacts speech perception, especially in situations requiring articulatory alertness. The involvement of frontal and (pre-)motor regions in speech perception is highlighted. Speech perception in individuals with speech-motor deficits should be further investigated, especially when smooth transitioning between listening and speaking is required, such as in real-life conversational settings.
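Speech-brain coherence of the kind reported here is typically a Welch-style magnitude-squared coherence averaged within a frequency band. Below is a minimal numpy-only sketch (the segmenting scheme, the synthetic 4 Hz envelope, and the sampling rate are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def band_coherence(x, y, fs, seg_len, f_lo, f_hi):
    """Magnitude-squared coherence between signals x and y, averaged
    over the [f_lo, f_hi] band, estimated from non-overlapping
    Hann-windowed segments (a bare-bones Welch estimator)."""
    n_seg = len(x) // seg_len
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    Sxx = np.zeros(len(freqs))
    Syy = np.zeros(len(freqs))
    Sxy = np.zeros(len(freqs), dtype=complex)
    win = np.hanning(seg_len)
    for k in range(n_seg):
        xs = x[k * seg_len:(k + 1) * seg_len] * win
        ys = y[k * seg_len:(k + 1) * seg_len] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Sxx += np.abs(X) ** 2
        Syy += np.abs(Y) ** 2
        Sxy += X * np.conj(Y)
    coh = np.abs(Sxy) ** 2 / (Sxx * Syy)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return coh[band].mean()

# Synthetic check: an "EEG" that contains the 4 Hz speech envelope shows
# higher theta-band (3-5 Hz) coherence with it than pure noise does.
fs = 100
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
envelope = np.sin(2 * np.pi * 4 * t)
eeg_tracking = envelope + rng.standard_normal(len(t))
eeg_noise = rng.standard_normal(len(t))
c_track = band_coherence(envelope, eeg_tracking, fs, seg_len=200, f_lo=3, f_hi=5)
c_noise = band_coherence(envelope, eeg_noise, fs, seg_len=200, f_lo=3, f_hi=5)
```

In the study itself the two signals would be the presented speech envelope and the recorded EEG, with the 3-5 Hz theta band corresponding to the syllabic rhythm.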

Language: English

Cited by

0

Cortical Representation of the Glottal Events during Speech Production
John P. Veillette,

Jacob Rosen,

Daniel Margoliash

et al.

bioRxiv (Cold Spring Harbor Laboratory), Year: 2024, Issue: unknown

Published: Aug. 7, 2024

Abstract. To produce complex motor behaviors such as speech, the nervous system must accommodate the fact that sensory feedback is delayed relative to actual movements; by most accounts, this is accomplished using an internal model that predicts the current state of the body's periphery from recent motor output. Here we show that onsets of events in the human glottal waveform, measured via electroglottography, are encoded in the electroencephalogram (EEG) during speech production, maximally reflected at zero time lag. Conversely, glottal event times can be decoded from the EEG. Furthermore, after prolonged exposure to delayed auditory feedback, subjects show a robust recalibration of their behaviorally observed threshold for detecting auditory-motor mismatches, and the decoding models that perform best under normal speaking conditions also show a shift in their predicted event times. This suggests that decoding performance is driven by plastic representations of peripheral timing (while ruling out movement artifact concerns). Our results provide a missing component of a mechanism that associates specific movements with the neurons that gave rise to those movements, mirroring the observation of synchronous bursting in songbird HVC along trajectories of biophysical parameters that control bird voicing.
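The claim that glottal events are "maximally reflected at zero time lag" can be illustrated with a toy lag-scan: regress the neural signal on lagged copies of the event train and see where the fit peaks. This is a simplified stand-in for the paper's encoding/decoding models; the function, data, and noise level are invented for illustration:

```python
import numpy as np

def best_lag_r2(stim, neural, lags):
    """Least-squares fit neural[n] ~ w * stim[n - lag] for each lag;
    returns (best lag, {lag: R^2}). A peak at lag 0 indicates the
    neural signal reflects the events synchronously."""
    scores = {}
    for lag in lags:
        s = np.roll(stim, lag)  # positive lag delays the stimulus
        w = (s @ neural) / (s @ s)
        resid = neural - w * s
        scores[lag] = 1.0 - resid.var() / neural.var()
    return max(scores, key=scores.get), scores

# Toy data: a sparse "glottal event" impulse train, and a noisy neural
# trace that encodes the events with no delay.
rng = np.random.default_rng(2)
events = (rng.random(2000) < 0.05).astype(float)
neural = 2.0 * events + 0.3 * rng.standard_normal(2000)
lag0, scores = best_lag_r2(events, neural, range(-10, 11))
```

Scanning the lag axis like this is the same logic by which zero-lag encoding is distinguished from delayed sensory feedback, which would instead peak at a positive lag.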

Language: English

Cited by

0

Cortical Tracking of Speech Is Reduced in Adults Who Stutter When Listening for Speaking
Simone Gastaldon, Pierpaolo Busan, Nicola Molinaro

et al.

Journal of Speech Language and Hearing Research, Year: 2024, Issue: 67(11), pp. 4339-4357

Published: Oct. 22, 2024

Purpose: The purpose of this study was to investigate cortical tracking of speech (CTS) in adults who stutter (AWS) compared to typically fluent adults (TFAs) and to test the involvement of the speech-motor network in tracking rhythmic information. Method: Participants' electroencephalogram was recorded while they either simply listened to sentences (listening only) or completed them by naming a picture (listening for speaking), thus manipulating the upcoming involvement of speech production. We analyzed speech-brain coherence and brain connectivity during listening. Results: During the listening-for-speaking task, AWS exhibited reduced CTS in the 3- to 5-Hz range (theta), corresponding to the syllabic rhythm. The effect was localized in left inferior parietal and right pre/supplementary motor regions. Connectivity analyses revealed that TFAs had stronger information transfer in the theta range in both tasks in fronto-temporo-parietal regions. When considering the whole sample of participants, increased information transfer from superior temporal cortex to sensorimotor regions correlated with faster naming times in the listening-for-speaking task. Conclusions: Atypical functioning of the speech-motor network in stuttering also impacts speech perception, especially in situations requiring articulatory alertness. The involvement of frontal and (pre)motor regions in speech perception is highlighted. Further investigation is needed into speech perception in individuals with speech-motor deficits, especially when smooth transitioning between listening and speaking is required, such as in real-life conversational settings. Supplemental Material: https://doi.org/10.23641/asha.27234885

Language: English

Cited by

0
