A Data Augmentation Method for Motor Imagery EEG Signals Based on DCGAN-GP Network
Xiuli Du, Xiaohui Ding, Meiling Xi et al.

Brain Sciences, 2024, 14(4), pp. 375–375

Published: April 12, 2024

Motor imagery electroencephalography (EEG) signals have garnered attention in brain–computer interface (BCI) research due to their potential for promoting motor rehabilitation and control. However, the limited availability of labeled data poses challenges for training robust classifiers. In this study, we propose a novel data augmentation method utilizing an improved Deep Convolutional Generative Adversarial Network with Gradient Penalty (DCGAN-GP) to address this issue. We transformed raw EEG signals into two-dimensional time–frequency maps and employed the DCGAN-GP network to generate synthetic time–frequency representations resembling real data. Validation experiments were conducted on the BCI Competition IV 2b dataset, comparing the performance of classifiers trained on augmented and unaugmented data. The results demonstrated that classifiers trained on augmented data exhibit enhanced robustness across multiple subjects and achieve higher classification accuracy. Our findings highlight the effectiveness of DCGAN-GP-generated data in improving a classifier's ability to distinguish different motor imagery tasks. Thus, the proposed DCGAN-GP-based method offers a promising avenue for enhancing BCI system performance, overcoming data scarcity challenges, and bolstering classifier robustness, thereby providing substantial support for the broader adoption of BCI technology in real-world applications.
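The gradient penalty that distinguishes DCGAN-GP from a plain DCGAN pushes the critic's input-gradient norm toward 1 on samples interpolated between real and generated data. The following is a minimal numpy sketch of that penalty term only, not the paper's network; the linear stand-in critic `grad_fn` and the coefficient `lam=10.0` are illustrative assumptions.

```python
import numpy as np

def gradient_penalty(critic_grad_fn, real, fake, lam=10.0, rng=None):
    """WGAN-style gradient penalty: lam * E[(||grad D(x_hat)|| - 1)^2],
    where x_hat lies on straight lines between real and fake samples."""
    rng = rng or np.random.default_rng(0)
    eps = rng.uniform(size=(real.shape[0], 1))   # per-sample mixing weight
    x_hat = eps * real + (1.0 - eps) * fake      # interpolated samples
    grads = critic_grad_fn(x_hat)                # dD/dx_hat, same shape as x_hat
    norms = np.linalg.norm(grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

# Toy linear critic D(x) = w . x, whose input gradient is simply w.
w = np.array([0.6, 0.8])                         # ||w|| == 1, so penalty ~ 0
grad_fn = lambda x: np.broadcast_to(w, x.shape)

real = np.random.default_rng(1).normal(size=(4, 2))
fake = np.random.default_rng(2).normal(size=(4, 2))
print(gradient_penalty(grad_fn, real, fake))
```

A critic whose gradient norm deviates from 1 is penalized quadratically, which is what stabilizes training relative to weight clipping.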

Language: English

Decoding speech perception from non-invasive brain recordings
Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin et al.

Nature Machine Intelligence, 2023, 5(10), pp. 1097–1107

Published: October 5, 2023

Language: English

Cited by: 78

The speech neuroprosthesis
Alexander B. Silva, Kaylo T. Littlejohn, Jessie R. Liu et al.

Nature Reviews Neuroscience, 2024, 25(7), pp. 473–492

Published: May 14, 2024

Language: English

Cited by: 24

Subject-independent meta-learning framework towards optimal training of EEG-based classifiers
H.W. Ng, Cuntai Guan

Neural Networks, 2024, 172, pp. 106108–106108

Published: January 6, 2024

Language: English

Cited by: 9

On the role of generative artificial intelligence in the development of brain-computer interfaces
Seif Eldawlatly

BMC Biomedical Engineering, 2024, 6(1)

Published: May 2, 2024

Abstract: Since their inception more than 50 years ago, Brain-Computer Interfaces (BCIs) have held promise to compensate for functions lost by people with disabilities by allowing direct communication between the brain and external devices. While research throughout the past decades has demonstrated the feasibility of BCIs to act as a successful assistive technology, their widespread use outside the lab is still beyond reach. This can be attributed to a number of challenges that need to be addressed for BCIs to be practical, including limited data availability, the limited temporal and spatial resolutions of signals recorded non-invasively, and inter-subject variability. In addition, for a very long time, BCI development has been mainly confined to specific, simple brain activity patterns, while developing other BCI applications relying on complex patterns has proven infeasible. Generative Artificial Intelligence (GAI) has recently emerged as an artificial intelligence domain in which trained models are used to generate new data with properties resembling the available data. Given the enhancements observed in other domains that possess similar challenges, GAI has been employed in a multitude of BCI-related applications to generate synthetic brain activity, thereby augmenting the recorded brain activity. Here, a brief review of the recent adoption of GAI techniques to overcome the aforementioned challenges is provided, demonstrating the enhancements achieved using GAI-generated EEG data, including enhancing the spatiotemporal resolution and cross-subject performance of BCI systems and implementing end-to-end BCI applications. GAI could represent the means by which BCIs are transformed into a prevalent assistive technology, thereby improving the quality of life of people with disabilities and helping in the adoption of BCIs as an emerging human-computer interaction technology for general use.

Language: English

Cited by: 7

CCSUMSP: A cross-subject Chinese speech decoding framework with unified topology and multi-modal semantic pre-training
Shuai Huang, Yongxiong Wang, Huan Luo et al.

Information Fusion, 2025, pp. 103022–103022

Published: February 1, 2025

Language: English

Cited by: 1

Open Vocabulary Electroencephalography-to-Text Decoding and Zero-Shot Sentiment Classification
Zhenhailong Wang, Heng Ji

Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(5), pp. 5350–5358

Published: June 28, 2022

State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks. However, current approaches are limited to small closed vocabularies which are far from enough for natural communication. In addition, most of the high-performing approaches require data from invasive devices (e.g., ECoG). In this paper, we extend the problem to open vocabulary Electroencephalography (EEG)-to-Text sequence-to-sequence decoding and zero-shot sentence sentiment classification on natural reading tasks. We hypothesize that the human brain functions as a special text encoder and propose a novel framework leveraging pre-trained language models (e.g., BART). Our model achieves a 40.1% BLEU-1 score on EEG-to-Text decoding and a 55.6% F1 score on EEG-based ternary sentiment classification, which significantly outperforms supervised baselines. Furthermore, we show that our proposed model can handle data from various subjects and sources, showing great potential for a high-performance open vocabulary brain-to-text system once sufficient data is available. The code is made publicly available for research purposes at https://github.com/MikeWangWZHL/EEG-To-Text.
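The core idea, treating the brain as a text encoder whose output is aligned with a pretrained language model's embedding space, can be caricatured in a few lines: project word-level EEG features into a token-embedding space, then pick the nearest token. This toy numpy sketch is not the BART-based architecture from the paper; `vocab`, `emb`, and `proj` are all made-up stand-ins.

```python
import numpy as np

# Stand-in vocabulary and token embeddings; in the paper these would come
# from a pretrained language model such as BART, not reproduced here.
vocab = ["the", "cat", "sat"]
emb = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])

def decode_eeg(eeg_feats, proj):
    """Map each word-level EEG feature vector into the embedding space via a
    learned projection, then pick the nearest token by cosine similarity."""
    z = eeg_feats @ proj                               # (n_words, emb_dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return [vocab[i] for i in (z @ e.T).argmax(axis=1)]

proj = np.eye(4, 2)                                    # hypothetical trained projection
eeg = np.array([[1.0, 0.1, 0.0, 0.0],                  # two word-level EEG features
                [0.1, 1.0, 0.0, 0.0]])
print(decode_eeg(eeg, proj))
```

The real system decodes with a sequence-to-sequence model rather than per-word nearest neighbours; the sketch only shows where EEG features enter the language model's space.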

Language: English

Cited by: 24

Automated Recognition of Imagined Commands From EEG Signals Using Multivariate Fast and Adaptive Empirical Mode Decomposition Based Method
Shaswati Dash, Rajesh Kumar Tripathy, Ganapati Panda et al.

IEEE Sensors Letters, 2022, 6(2), pp. 1–4

Published: January 13, 2022

In this letter, a novel automated approach for recognizing imagined commands using multichannel electroencephalogram (MEEG) signals is presented. The multivariate fast and adaptive empirical mode decomposition method decomposes the MEEG signals into various modes. The slope domain entropy and L1-norm features are obtained from the modes of the MEEG signals. Machine learning models such as the k-nearest neighbor classifier, sparse representation classifier, and dictionary learning (DL) techniques are used for the imagined command classification tasks. The efficacy of the proposed approach is evaluated using a public database as input. The approach has achieved average accuracy values of 60.72%, 59.73%, and 58.78% using the DL model with selected features for left versus right, up versus down, and forward versus backward based categorization of imagined commands, respectively.
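The pipeline described, decompose each channel into modes, take per-mode features such as the L1-norm, then classify, can be sketched as follows. MFAEMD itself is non-trivial, so moving-average residuals stand in for the modes here; the scale widths and the toy k-NN classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mode_l1_features(signal, widths=(3, 7, 15)):
    """Crude stand-in for EMD-style modes: each 'mode' is the residual left
    after a moving average at one scale. Returns one L1-norm per mode."""
    feats, residual = [], np.asarray(signal, dtype=float)
    for w in widths:
        smooth = np.convolve(residual, np.ones(w) / w, mode="same")
        feats.append(np.abs(residual - smooth).sum())  # L1-norm of this mode
        residual = smooth
    return np.array(feats)

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote over feature vectors."""
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    votes = [train_y[i] for i in idx]
    return max(set(votes), key=votes.count)

feats = mode_l1_features(np.sin(0.2 * np.arange(64)))
print(feats.shape)  # (3,): one L1-norm feature per scale
```

With multichannel data, the same features would be computed per channel and concatenated before classification.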

Language: English

Cited by: 22

Speech imagery decoding using EEG signals and deep learning: A survey
Liying Zhang, Yueying Zhou, Peiliang Gong et al.

IEEE Transactions on Cognitive and Developmental Systems, 2024, 17(1), pp. 22–39

Published: July 19, 2024

Language: English

Cited by: 5

Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
Foteini Liwicki, Vibha Gupta, Rajkumar Saini et al.

Scientific Data, 2023, 10(1)

Published: June 13, 2023

Abstract: The recognition of inner speech, which could give a 'voice' to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the 8-word stimuli was assessed in 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.

Language: English

Cited by: 10

Generative Listener EEG for Speech Emotion Recognition Using Generative Adversarial Networks With Compressed Sensing
Chang Jiang, Zhixin Zhang, Zelin Wang et al.

IEEE Journal of Biomedical and Health Informatics, 2024, 28(4), pp. 2025–2036

Published: January 30, 2024

Currently, emotional features in speech emotion recognition are typically extracted from the speech itself. However, the recognition accuracy can be influenced by factors such as semantics, language, and cross-speech datasets. Achieving judgment consistent with human listeners is a key challenge for AI to address. Electroencephalography (EEG) signals prove to be an effective means of capturing authentic and meaningful emotional information from humans. This positions EEG as a promising tool for detecting emotional cues conveyed by speech. In this study, we proposed a novel approach named CS-GAN that generates listener EEGs in response to a speaker's speech, specifically aimed at enhancing cross-subject speech emotion recognition. We utilized generative adversarial networks (GANs) to establish a mapping relationship between speech and EEG and to generate stimulus-induced EEGs. Furthermore, we integrated compressive sensing (CS) theory into the GAN-based generation method, thereby improving the fidelity and diversity of the generated EEGs. The generated EEGs were then processed using a CNN-LSTM model to identify emotion categories. By averaging these EEGs, we obtained event-related potentials (ERPs) to improve the generalization capability of the method. The experimental results demonstrate that the proposed method can outperform real EEGs by 9.31% in cross-subject recognition tasks. The ERPs show an improvement of 43.59%, providing evidence of the effectiveness of the proposed method.
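The ERP step in this abstract relies on a standard property: averaging time-locked trials preserves the evoked response while uncorrelated noise shrinks roughly as 1/sqrt(n). A minimal numpy sketch, with a synthetic sine-shaped response standing in for a real ERP:

```python
import numpy as np

def erp_average(trials):
    """Average time-locked trials; the evoked response survives while
    uncorrelated noise shrinks roughly as 1/sqrt(n_trials)."""
    return np.asarray(trials).mean(axis=0)

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, np.pi, 128))            # synthetic evoked response
trials = template + rng.normal(scale=1.0, size=(40, 128))  # 40 noisy repeats

single_err = np.abs(trials[0] - template).mean()         # error of one raw trial
erp_err = np.abs(erp_average(trials) - template).mean()  # error of the 40-trial ERP
print(erp_err < single_err)
```

The same averaging logic applies whether the trials are recorded or, as in this paper, generated EEGs.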

Language: English

Cited by: 4