
Sensors, Journal year: 2025, Issue: 25(5), Pages: 1471 - 1471
Published: Feb. 27, 2025
Advancements in music emotion prediction are driving AI-driven algorithmic composition, enabling the generation of complex melodies. However, bridging the neural and auditory domains remains challenging due to the semantic gap between brain-derived low-level features and high-level musical concepts, making alignment computationally demanding. This study proposes a deep learning framework for generating MIDI sequences aligned with labeled emotion predictions through supervised feature extraction from both domains. EEGNet is employed to process the EEG data, while an autoencoder-based piano-roll algorithm handles the MIDI data. To address modality heterogeneity, Centered Kernel Alignment is incorporated to enhance the separation of emotional states. Furthermore, regression is applied to reduce intra-subject variability in the extracted Electroencephalography (EEG) patterns, followed by clustering of the latent representations into denser partitions to improve reconstruction quality. Evaluation on real-world data using multiple metrics shows that the proposed approach improves emotion classification (namely, arousal and valence) and the system's ability to produce MIDI sequences that better preserve temporal alignment, tonal consistency, and structural integrity. Subject-specific analysis reveals that subjects with stronger responses to the imagery paradigms produced higher-quality outputs, as their patterns more closely matched the training data. In contrast, subjects with weaker performance exhibited patterns that were less consistent.
Language: English
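The abstract names Centered Kernel Alignment (CKA) as the mechanism for relating EEG-derived and MIDI-derived representations. As a point of reference, the sketch below shows the standard linear CKA similarity (Kornblith et al., 2019) between two feature matrices with matched rows; it is a minimal illustration under assumed shapes and variable names (`eeg_feats`, `midi_feats`), not the authors' implementation, and the abstract does not specify how the score enters their training objective.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two feature matrices whose rows index the same n trials."""
    # Center each feature column so the measure is invariant to feature means.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 normalized by ||X^T X||_F * ||Y^T Y||_F.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Stand-in random features: 128 trials, 64 EEG-branch features, 32 MIDI-branch features.
rng = np.random.default_rng(0)
eeg_feats = rng.normal(size=(128, 64))
midi_feats = rng.normal(size=(128, 32))
print(linear_cka(eeg_feats, midi_feats))  # value in [0, 1]; higher means better-aligned representations
```

In practice, a term such as 1 − CKA can be added to a multimodal loss to pull the two branches' latent spaces together, which is one plausible reading of how alignment is "incorporated" here, though the paper itself should be consulted for the exact formulation.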