Extending Networked Mapping Research Middleware into the Browser Sandbox DOI
Matthew Peachey, Kyle S. Smith, Joseph Malloch

et al.

Published: Sept. 30, 2024

Web-based technologies have seen rapid technological improvements over the past several decades. This is especially true for audio applications, with the specification of the Web Audio API enabling users to deploy highly performant projects on the web. Existing literature describes how mappings are a critical component of end-to-end projects, including Digital Musical Instruments, Internet of Sounds devices, and more. Because of this, years of research effort have produced mapping middleware that facilitates establishing connections between signal sources and destinations. This paper discusses how the libmapper [?] ecosystem can be extended to support use from a sandboxed browser environment. Connectivity on the web is established through websockets and a backend daemon. In this paper, we discuss the implementation details of the binding as well as potential use-cases via user-story-driven scenarios.
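
As a concrete illustration of the connectivity scheme described above (browser clients speaking to a backend daemon over websockets), the following minimal Python sketch relays JSON messages between connected browsers. It is an assumption-based illustration, not the paper's actual libmapper binding: the message shape, the port, and the handle_client relay logic are all hypothetical.

# Assumes the third-party "websockets" package (version 10 or later).
import asyncio
import json

import websockets

CLIENTS = set()  # currently connected browser sessions

async def handle_client(websocket):
    """Relay JSON messages from one browser client to all other clients."""
    CLIENTS.add(websocket)
    try:
        async for raw in websocket:
            msg = json.loads(raw)
            # Hypothetical message shape: {"signal": "/synth/freq", "value": 440.0}.
            # A real bridge would translate this into a native mapping-session update.
            for other in CLIENTS - {websocket}:
                await other.send(json.dumps(msg))
    finally:
        CLIENTS.discard(websocket)

async def main():
    # A browser would connect with: new WebSocket("ws://localhost:8765")
    async with websockets.serve(handle_client, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())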

Language: English

SoundSignature: What Type of Music do you Like? DOI

Brandon James Carone, Pablo Ripollés

Published: Sept. 30, 2024

SoundSignature is a music application that integrates a custom OpenAI Assistant to analyze users' favorite songs. The system incorporates state-of-the-art Music Information Retrieval (MIR) Python packages to combine extracted acoustic and musical features with the assistant's extensive knowledge of artists and bands. Capitalizing on this combined knowledge, the application leverages semantic audio principles from the emerging Internet of Sounds (IoS) ecosystem, integrating MIR with AI to provide users with personalized insights into the acoustic properties of their music, akin to a musical preference personality report. Users can then interact with the chatbot to explore deeper inquiries about the analyses performed and how they relate to their taste. This interactivity transforms the application into not only an informative resource about familiar and/or favorite songs, but also an educational platform that enables users to deepen their understanding of the features, music theory, and signal processing concepts behind their music. Beyond general usability, the application also incorporates several well-established open-source, musician-specific tools, such as a chord recognition algorithm (CREMA), source separation (DEMUCS), and an audio-to-MIDI converter (basic-pitch). These allow users without coding skills to access advanced music processing algorithms simply by interacting with the chatbot (e.g., "can you give me the stems of this song?"). In this paper, we highlight the application's innovative potential and present findings from a pilot user study that evaluates its efficacy and usability.
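
To make the MIR side of such a pipeline concrete, the following minimal Python sketch extracts a handful of acoustic/musical features with librosa and summarises them for a language-model assistant. The chosen feature set and the extract_features helper are illustrative assumptions, not SoundSignature's actual implementation.

import librosa
import numpy as np

def extract_features(path: str) -> dict:
    """Return a small summary of acoustic/musical features for one track."""
    y, sr = librosa.load(path, mono=True)

    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # estimated BPM
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)            # pitch-class energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # timbre summary
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # spectral "brightness"

    return {
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
        "chroma_mean": chroma.mean(axis=1).round(3).tolist(),
        "mfcc_mean": mfcc.mean(axis=1).round(3).tolist(),
        "spectral_centroid_hz": float(centroid.mean()),
    }

# The resulting dictionary could be serialised and handed to the assistant as
# context for a natural-language report on the track's acoustic properties.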

Language: English

Citations

1

TinyVocos: Neural Vocoders on MCUs DOI

Stefano Ciapponi, Francesco Paissan, Alberto Ancilotto

et al.

Published: Sept. 30, 2024

Neural vocoders convert time-frequency representations, such as mel-spectrograms, into the corresponding time-domain representations. They are essential for generative applications in audio (e.g. text-to-speech and text-to-audio). This paper presents a scalable vocoder architecture for small-footprint edge devices, inspired by Vocos and adapted with XiNets and PhiNets. We test the developed model's capabilities qualitatively and quantitatively on single-speaker and multi-speaker datasets and benchmark inference speed and memory consumption on four microcontrollers. Additionally, we study power consumption on an ARM Cortex-M7-powered board. Our results demonstrate the feasibility of deploying neural vocoders on resource-constrained devices, potentially enabling new Internet of Sounds (IoS) and Embedded Audio scenarios. The best-performing model achieves a MOS score of 3.95/5 while utilizing 1.5 MiB of FLASH and 517 KiB of RAM and consuming 252 mW during inference on a 1 s clip.
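
The Vocos-style idea of predicting a complex spectrogram and inverting it with an inverse STFT can be sketched in PyTorch as follows. This is an assumption-based illustration of how such a head might look, not the TinyVocos architecture; the backbone, the layer sizes, and the TinyISTFTVocoder name are hypothetical.

import torch
import torch.nn as nn

class TinyISTFTVocoder(nn.Module):
    def __init__(self, n_mels=80, n_fft=1024, hop=256, hidden=256):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        freq_bins = n_fft // 2 + 1
        self.backbone = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3),
            nn.GELU(),
        )
        # Predict log-magnitude and phase for every STFT bin of every frame.
        self.to_mag = nn.Conv1d(hidden, freq_bins, kernel_size=1)
        self.to_phase = nn.Conv1d(hidden, freq_bins, kernel_size=1)

    def forward(self, mel):                  # mel: (batch, n_mels, frames)
        h = self.backbone(mel)
        mag = torch.exp(self.to_mag(h))      # positive magnitudes
        phase = self.to_phase(h)
        spec = torch.polar(mag, phase)       # complex spectrogram
        return torch.istft(spec, n_fft=self.n_fft, hop_length=self.hop,
                           window=torch.hann_window(self.n_fft))

# mel = torch.randn(1, 80, 100)              # ~0.6 s of frames at hop 256
# audio = TinyISTFTVocoder()(mel)            # (1, samples)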

Language: English

Citations

1

Metadata-Assisted Spatial Audio (MASA) – An Overview DOI
Jouni Paulus, Lasse Laaksonen, Tapani Pihlajakuja

et al.

Published: Sept. 30, 2024

This paper provides an overview of the novel metadata-assisted spatial audio (MASA) format, which is one of the supported input formats in the 3GPP IVAS codec. MASA consists of a transport signal with one or two channels and parametric metadata describing the dominant directional sound, diffuseness, and coherence properties of the scene. While mainly intended for acquisition with mobile devices, this paper describes a way to determine the parameters from Ambisonics as an example of using a widely-available source format. Additionally, the file format ingested by the codec is described. These descriptions are accompanied by software tools, consisting of a C-language implementation of the described analysis and a Python-language library for reading and writing the files.
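
A rough sense of how directional and diffuseness parameters can be derived from first-order Ambisonics is given by the standard intensity-vector analysis sketched below in Python. This is a simplified, assumption-based illustration rather than the IVAS/MASA reference software: acoustic constants, band grouping, temporal smoothing, and the exact MASA parameter definitions are omitted, and sign conventions depend on the Ambisonics normalisation used.

import numpy as np

def foa_to_parameters(W, X, Y, Z):
    """W, X, Y, Z: complex STFT bins (1-D arrays) of one time-frequency tile."""
    # Active intensity vector, proportional only (acoustic constants dropped).
    I = np.stack([np.real(np.conj(W) * X),
                  np.real(np.conj(W) * Y),
                  np.real(np.conj(W) * Z)])
    energy = 0.5 * (np.abs(W) ** 2
                    + (np.abs(X) ** 2 + np.abs(Y) ** 2 + np.abs(Z) ** 2) / 3.0)

    I_mean = I.mean(axis=-1)               # temporal averaging within the tile
    E_mean = energy.mean() + 1e-12

    azimuth = np.degrees(np.arctan2(I_mean[1], I_mean[0]))
    elevation = np.degrees(np.arctan2(I_mean[2], np.hypot(I_mean[0], I_mean[1])))
    diffuseness = 1.0 - np.linalg.norm(I_mean) / E_mean
    return float(azimuth), float(elevation), float(np.clip(diffuseness, 0.0, 1.0))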

Language: English

Citations

1

INTELLIGENT MUSIC APPLICATIONS: INNOVATIVE SOLUTIONS FOR MUSICIANS AND LISTENERS DOI Open Access
Cihan TABAK

Uluslararası Anadolu Sosyal Bilimler Dergisi, Journal Year: 2023, Volume and Issue: 7(3), P. 752 - 773

Published: Sept. 19, 2023

The incorporation of artificial intelligence and machine learning into intelligent music applications presents fresh avenues for musical expression. These applications allow the production of emotionally responsive pieces by analysing and interpreting the emotions conveyed within music. Furthermore, they aid collaborative music-making by connecting musicians in diverse locations and enabling real-time collaboration via cloud-based platforms. The objective of this research is to present information regarding the production, distribution, and consumption of music, which has a close association with technology. Through document analysis, the prospective advantages of incorporating these technologies into the industry are assessed from various vantage points, along with potential models and areas of application. The study also proposes further enhancing the algorithms, guaranteeing their responsible and ethical use, and unlocking new avenues for innovation.

Language: English

Citations

3

Towards a System Supporting Music Feedback Exercise in Physical Tele-Rehabilitation DOI
Alexander Carôt, Thomas Hans Fritz, Katja Englert

et al.

Published: Oct. 26, 2023

For decades music has been used successfully in sports and especially in physical rehabilitation in order to motivate people and increase satisfaction with the actual workout. The novel principle of "Music Feedback Exercise (MFE)" allows music to be individually influenced according to the dynamics a person generates: more movement leads to more music, and vice versa. Research by the Max Planck Institute for Human Cognitive and Brain Sciences gives evidence of positive arousal effects on exercising persons. In cooperation with Anhalt University of Applied Sciences, a technical framework was developed that is able to receive and analyze the respective sensor data and create musical output based on the generated intensity. This paper documents under which assumptions and how MFE was implemented into the Soundjack (also known as fast-music) core technology in order to take advantage of its versatile GUI options, low-latency audio streaming, and networking features.
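
The core mapping from movement intensity to musical output can be sketched in Python as follows; the MovementToGain class, its smoothing constant, and the gain mapping are hypothetical illustrations rather than Soundjack's actual implementation.

import math

class MovementToGain:
    """Map accelerometer samples to a music playback gain in [0, 1]."""

    def __init__(self, smoothing: float = 0.9):
        self.smoothing = smoothing   # exponential smoothing of the intensity
        self.level = 0.0

    def update(self, ax: float, ay: float, az: float) -> float:
        """Feed one accelerometer sample (in g); return the current gain."""
        # Deviation of the acceleration magnitude from gravity serves as a
        # crude measure of movement intensity.
        intensity = abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)
        self.level = self.smoothing * self.level + (1.0 - self.smoothing) * intensity
        return max(0.0, min(1.0, 2.0 * self.level))  # more movement, more music

# mapper = MovementToGain()
# gain = mapper.update(0.1, 0.9, 0.3)   # gain would scale the musical output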

Language: English

Citations

1

Light-Weight Deep Learning Models for Acoustic Scene Classification Using Teacher-Student Scheme and Multiple Spectrograms DOI
Lam Pham, Tin Nguyen, P. C. B. Lam

et al.

Published: Oct. 26, 2023

In this paper, we present a light-weight deep learning based system for acoustic scene classification (ASC), which is aimed to be integrated into an Internet of Sounds (IoS) system with limited hardware resources. To achieve the ASC model, we develop a teacher-student scheme with a two-phase training strategy. In the first phase (Phase I), a Teacher network architecture, which shows a large model footprint, is proposed. After training the Teacher, embeddings, which are the feature map of the Teacher, are extracted. In the second phase (Phase II), we propose Student networks which present low-complexity architectures. We train the Students by leveraging the embeddings extracted from the Teacher. To further improve accuracy performance, we apply an ensemble of multiple spectrograms on both the Teacher and the Students. Our experiments, conducted on the DCASE 2023 Task 1 dataset with ten target classes ('Airport', 'Bus', 'Metro', 'Metro station', 'Park', 'Public square', 'Shopping mall', 'Street pedestrian', 'Street traffic', 'Tram'), show that the best Student achieves 57.4% on the Development set and 55.6% on the blind Evaluation set, improving over the baseline by 14.5% and 10.8%, respectively. The best Student also achieves 82.3% on three general classes ('Indoor', 'Outdoor', 'Transportation') with 88.7 KB memory occupation and 29.27 M MACs, showing potential for a wide range of edge devices.
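
The two-phase teacher-student training described above can be sketched in PyTorch as follows. The architectures and loss weighting are illustrative assumptions, not the paper's actual Teacher and Student networks; the key idea shown is that the Student is trained both to classify and to match the frozen Teacher's embeddings.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES, EMB_DIM = 10, 128

class Teacher(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, EMB_DIM),
        )
        self.head = nn.Linear(EMB_DIM, N_CLASSES)

    def forward(self, spec):                 # spec: (batch, 1, mels, frames)
        emb = self.features(spec)
        return self.head(emb), emb

class Student(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, EMB_DIM),
        )
        self.head = nn.Linear(EMB_DIM, N_CLASSES)

    def forward(self, spec):
        emb = self.features(spec)
        return self.head(emb), emb

def student_loss(student, teacher, spec, labels, alpha=0.5):
    """Phase II loss: classification plus matching the frozen Teacher's embedding."""
    with torch.no_grad():
        _, t_emb = teacher(spec)
    logits, s_emb = student(spec)
    return (1 - alpha) * F.cross_entropy(logits, labels) + alpha * F.mse_loss(s_emb, t_emb)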

Language: English

Citations

1

Pianist Identification Using Convolutional Neural Networks DOI
Jingjing Tang, Geraint A. Wiggins, György Fazekas

et al.

Published: Oct. 26, 2023

This paper presents a comprehensive study of automatic performer identification in expressive piano performances using convolutional neural networks (CNNs) and expressive features. Our work addresses the challenging multi-class classification task of identifying virtuoso pianists, which has substantial implications for building dynamic musical instruments with intelligence and smart musical systems. Incorporating recent advancements, we leveraged large-scale expressive performance datasets and deep learning techniques. We refined the scores by expanding repetitions and ornaments for more accurate feature extraction. We demonstrated the capability of one-dimensional CNNs for identifying pianists based on expressive features and analyzed the impact of input sequence lengths and different features. The proposed model outperforms the baseline, achieving 85.3% accuracy in a 6-way identification task. The dataset proved apt for training a robust pianist identifier, making a contribution to the field of automatic performer identification. Our codes have been released at https://github.com/BetsyTang/PID-CNN.
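
A minimal PyTorch sketch of a one-dimensional CNN over sequences of expressive feature vectors is shown below. It is an assumption-based illustration rather than the released PID-CNN model; the number of input features, the layer sizes, and the pooling choice are hypothetical.

import torch
import torch.nn as nn

class PianistCNN1D(nn.Module):
    def __init__(self, n_features: int = 8, n_pianists: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),        # pool over the note sequence
            nn.Flatten(),
            nn.Linear(128, n_pianists),
        )

    def forward(self, x):                   # x: (batch, n_features, sequence_len)
        return self.net(x)

# logits = PianistCNN1D()(torch.randn(4, 8, 400))   # 4 excerpts, 400 notes each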

Language: English

Citations

1

Rapid Prototyping of Distributed Musical Things using Web Technologies DOI

Benjamin Matuszewski, Aliénor Golvet

Published: Oct. 26, 2023

This short paper reports on recent advances in the development of an open-source prototyping platform for the creation of distributed and embedded musical systems based on Web technologies. The proposed architecture relies on user-grade hardware and software and aims at fostering rapid prototyping and experimentation in the context of colocated research and performance. After a review of related works, the paper describes the general design and the different building blocks of the system. Then, it exposes a first characterization of the design and concludes with the description of a prototype that highlights the features of the platform.

Language: English

Citations

1

Exploring Artificial Intelligence for Advancing Performance Processes and Events in Io3MT DOI

Rômulo Vieira, Débora C. Muchaluat-Saade, Pablo César

et al.

Lecture notes in computer science, Journal Year: 2024, Volume and Issue: unknown, P. 234 - 248

Published: Jan. 1, 2024

Language: English

Citations

0

MusicEmo: transformer-based intelligent approach towards music emotion generation and recognition DOI

Ying Xin

Journal of Ambient Intelligence and Humanized Computing, Journal Year: 2024, Volume and Issue: 15(8), P. 3107 - 3117

Published: May 17, 2024

Language: English

Citations

0