A Dual Adaptation Approach for EEG-Based Biometric Authentication Using the Ensemble of Riemannian Geometry and NSGA-II

Aashish Khilnani, Jyoti Singh Kirar, Ganga Ram Gautam

et al.

Lecture notes in computer science, Journal Year: 2024, Volume and Issue: unknown, P. 91 - 109

Published: Dec. 1, 2024

Language: English

Electroencephalogram (EEG) Based Fuzzy Logic and Spiking Neural Networks (FLSNN) for Advanced Multiple Neurological Disorder Diagnosis
Shraddha Jain, Ruchi Srivastava

Brain Topography, Journal Year: 2025, Volume and Issue: 38(3)

Published: Feb. 24, 2025

Language: English

Citations: 2

Deep Learning in EEG-Based BCIs: A Comprehensive Review of Transformer Models, Advantages, Challenges, and Applications
Berdakh Abibullaev, Aigerim Keutayeva, Amin Zollanvari

et al.

IEEE Access, Journal Year: 2023, Volume and Issue: 11, P. 127271 - 127301

Published: Jan. 1, 2023

Brain-computer interfaces (BCIs) have undergone significant advancements in recent years. The integration of deep learning techniques, specifically transformers, has shown promising development across research and application domains. Transformers, originally designed for natural language processing, have now made notable inroads into BCIs, offering a unique self-attention mechanism that adeptly handles the temporal dynamics of brain signals. This comprehensive survey delves into transformers for BCIs, providing readers with a lucid understanding of their foundational principles, inherent advantages, potential challenges, and diverse applications. In addition to discussing the benefits, we also address the limitations of these models, such as computational overhead, interpretability concerns, and their data-intensive nature, to offer a well-rounded analysis. Furthermore, the paper sheds light on the myriad BCI applications that have benefited from the incorporation of transformers. These applications span motor imagery decoding, emotion recognition, and sleep stage analysis, as well as novel ventures such as speech reconstruction. The review serves as a holistic guide for researchers and practitioners, offering a panoramic view of the transformative landscape. With the inclusion of examples and references, readers will gain a deeper understanding of the topic and its significance for the field.

Language: English

Citations: 31
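The self-attention mechanism that the survey above credits with handling the temporal dynamics of EEG signals is easy to state concretely. Below is a minimal NumPy sketch of single-head scaled dot-product self-attention applied to one EEG window; the window length, channel count, and random projection matrices are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over the time axis of one EEG window.

    x: (T, C) array of T time samples x C channels (or learned features).
    w_q, w_k, w_v: (C, D) projection matrices (randomly initialised here).
    Returns a (T, D) array where each output sample is a weighted mix of
    all time steps, the weights coming from query-key similarity.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over time steps
    return weights @ v

rng = np.random.default_rng(0)
eeg_window = rng.standard_normal((256, 22))             # hypothetical: 1 s at 256 Hz, 22 channels
dim = 32
out = self_attention(eeg_window,
                     rng.standard_normal((22, dim)),
                     rng.standard_normal((22, dim)),
                     rng.standard_normal((22, dim)))
print(out.shape)                                        # (256, 32)
```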

Transformers in EEG Analysis: A Review of Architectures and Applications in Motor Imagery, Seizure, and Emotion Classification
Elnaz Vafaei, Mohammad Hosseini

Sensors, Journal Year: 2025, Volume and Issue: 25(5), P. 1293 - 1293

Published: Feb. 20, 2025

Transformers have rapidly influenced research across various domains. With their superior capability to encode long sequences, they have demonstrated exceptional performance, outperforming existing machine learning methods. There has been a rapid increase in the development of transformer-based models for EEG analysis. The high volume of recently published papers highlights the need for further studies exploring transformer architectures, their key components, and how they are employed, particularly in EEG studies. This paper aims to explore four major architectures: the Time Series Transformer, the Vision Transformer, Graph Attention Transformers, and hybrid models, along with their variants in recent work. We categorize the models according to their most frequent applications: motor imagery classification, emotion recognition, and seizure detection. The paper also highlights the challenges of applying transformers to EEG datasets and reviews data augmentation and transfer learning as potential solutions explored in recent years. Finally, we provide a summarized comparison of the reported results. We hope this review serves as a roadmap for researchers interested in employing transformer architectures for EEG analysis.

Language: English

Citations: 1
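Transfer learning, which the review above lists as one remedy for small EEG datasets, typically amounts to reusing an encoder trained on a larger source corpus and retraining only a small task head on the target data. The sketch below illustrates that pattern in PyTorch with a toy encoder; the architecture, class count, and tensor shapes are illustrative assumptions, not a model from the paper.

```python
import torch
import torch.nn as nn

# A toy "pre-trained" EEG encoder standing in for a model trained on a large
# source dataset; in practice this would be loaded from a checkpoint.
backbone = nn.Sequential(
    nn.Conv1d(22, 32, kernel_size=25, padding=12), nn.ELU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)

for p in backbone.parameters():          # freeze the source-domain feature extractor
    p.requires_grad = False

head = nn.Linear(32, 4)                  # new head for the 4 MI classes of the target set
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, 22, 1000)            # 16 hypothetical target-subject trials
y = torch.randint(0, 4, (16,))
loss = criterion(model(x), y)            # only the head receives gradients
loss.backward()
optimizer.step()
print(float(loss))
```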

Multi-scale spatiotemporal attention network for neuron based motor imagery EEG classification
Venkata Chunduri, Yassine Aoudni, Samiullah Khan

et al.

Journal of Neuroscience Methods, Journal Year: 2024, Volume and Issue: 406, P. 110128 - 110128

Published: March 28, 2024

Language: English

Citations: 7

Data Constraints and Performance Optimization for Transformer-Based Models in EEG-Based Brain-Computer Interfaces: A Survey
Aigerim Keutayeva, Berdakh Abibullaev

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 62628 - 62647

Published: Jan. 1, 2024

This work reviews the critical challenge of data scarcity in developing Transformer-based models for Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs), specifically focusing on Motor Imagery (MI) decoding. While EEG-BCIs hold immense promise for applications in communication, rehabilitation, and human-computer interaction, limited data availability hinders the use of advanced deep-learning models such as Transformers. In particular, this paper comprehensively analyzes three key strategies to address data scarcity: data augmentation, transfer learning, and inherent attention mechanisms. Data augmentation techniques artificially expand datasets, enhancing model generalizability by exposing models to a wider range of signal patterns. Transfer learning utilizes pre-trained models from related domains, leveraging their learned knowledge to overcome the limitations of small EEG datasets. By thoroughly reviewing current research methodologies, the paper underscores the importance of these strategies for overcoming data scarcity. It critically examines the constraints imposed by limited datasets and showcases the potential solutions being developed to address these challenges. This comprehensive survey, situated at the intersection of data strategies and technological advancements, aims to provide an analysis of state-of-the-art EEG-BCI development. By identifying gaps and suggesting future directions, it encourages further exploration and innovation in the field. Ultimately, it aims to contribute to the advancement of more accessible, efficient, and precise BCI systems by addressing this fundamental constraint.

Language: English

Citations: 6
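Of the three strategies the survey above analyzes, data augmentation is the simplest to demonstrate. Below is a minimal sketch of two common label-preserving EEG augmentations (additive Gaussian noise and a small time shift); the trial shape and parameter values are illustrative assumptions rather than settings from the paper.

```python
import numpy as np

def augment_eeg(trial, rng, noise_std=0.05, max_shift=16):
    """Two simple label-preserving augmentations for one EEG trial.

    trial: (channels, samples) array.
    Adds scaled Gaussian noise and applies a small circular time shift;
    both are common, low-risk ways to enlarge small MI datasets.
    """
    noisy = trial + rng.normal(0.0, noise_std * trial.std(), size=trial.shape)
    shift = rng.integers(-max_shift, max_shift + 1)
    return np.roll(noisy, shift, axis=-1)

rng = np.random.default_rng(42)
trial = rng.standard_normal((22, 1000))                 # hypothetical: 22 channels, 4 s at 250 Hz
augmented = [augment_eeg(trial, rng) for _ in range(5)]  # five synthetic variants of one trial
print(len(augmented), augmented[0].shape)                # 5 (22, 1000)
```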

EISATC-Fusion: Inception Self-Attention Temporal Convolutional Network Fusion for Motor Imagery EEG Decoding
Guangjin Liang, Dianguo Cao, Jinqiang Wang

et al.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, Journal Year: 2024, Volume and Issue: 32, P. 1535 - 1545

Published: Jan. 1, 2024

The motor imagery brain-computer interface (MI-BCI) based on electroencephalography (EEG) is a widely used human-machine interaction paradigm. However, due to the non-stationarity of EEG signals and individual differences among subjects, decoding accuracy is limited, affecting the application of MI-BCI. In this paper, we propose the EISATC-Fusion model for MI decoding, consisting of an inception block, multi-head self-attention (MSA), a temporal convolutional network (TCN), and layer fusion. Specifically, we design a DS Inception block to extract multi-scale frequency band information, and a new cnnCosMSA module based on CNN and cos attention to solve attention collapse and improve the interpretability of the model. The TCN is improved with depthwise separable convolution, which reduces parameters. The fusion consists of feature fusion and decision fusion, fully utilizing the features output by the model and enhancing its robustness. We adopt a two-stage training strategy, with early stopping used to prevent overfitting and validation-set metrics, including the loss, serving as the stopping indicators. The proposed model achieves within-subject classification accuracies of 84.57% and 87.58% on BCI Competition IV Datasets 2a and 2b, respectively, and cross-subject accuracies of 67.42% and 71.23% (by transfer learning) when trained with two sessions and one session of Dataset 2a, respectively. The interpretability of the model is demonstrated through a weight visualization method.

Language: English

Citations: 4
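The abstract above names depthwise separable convolution as the ingredient that trims the TCN's parameter count. The block below is a minimal PyTorch sketch of a depthwise separable 1-D (temporal) convolution in isolation; it is not the authors' EISATC-Fusion model, and the channel counts and kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise separable 1-D convolution: a per-channel temporal filter
    followed by a 1x1 pointwise mix, which cuts parameters relative to a
    full convolution of the same kernel size."""

    def __init__(self, channels, out_channels, kernel_size, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=pad, dilation=dilation, groups=channels)
        self.pointwise = nn.Conv1d(channels, out_channels, kernel_size=1)

    def forward(self, x):                    # x: (batch, channels, time)
        return self.pointwise(self.depthwise(x))

x = torch.randn(8, 22, 1000)                  # 8 hypothetical trials, 22 channels, 1000 samples
block = DepthwiseSeparableConv1d(22, 64, kernel_size=7, dilation=2)
print(block(x).shape)                         # torch.Size([8, 64, 1000])
```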

Scalogram sets based motor imagery EEG classification using modified vision transformer: A comparative study on scalogram sets

Balendra Kumar, Pranshu C. B. S. Negi, Neeraj Sharma

et al.

Biomedical Signal Processing and Control, Journal Year: 2025, Volume and Issue: 104, P. 107640 - 107640

Published: Jan. 28, 2025

Language: English

Citations: 0

Classification Analysis of Motor Imagery EEG Signals Using Bidirectional LSTM Model

R. Helen, T. Thenmozhi, S. Mythili

et al.

Lecture notes in electrical engineering, Journal Year: 2025, Volume and Issue: unknown, P. 159 - 173

Published: Jan. 1, 2025

Language: English

Citations: 0

EEG Signal Prediction for Motor Imagery Classification in Brain–Computer Interfaces
Óscar Wladimir Gómez Morales, Diego Collazos-Huertas, Andrés Marino Álvarez-Meza

et al.

Sensors, Journal Year: 2025, Volume and Issue: 25(7), P. 2259 - 2259

Published: April 3, 2025

Brain-computer interfaces (BCIs) based on motor imagery (MI) generally require EEG signals recorded from a large number of electrodes distributed across the cranial surface to achieve accurate MI classification. Not only does this entail long preparation times and high costs, but it also carries the risk of losing valuable information when an electrode is damaged, further limiting its practical applicability. In this study, a signal prediction-based method is proposed to improve accuracy in MI classification using a small number of electrodes. The prediction model was constructed with the elastic net regression technique, allowing for the estimation of the complete set of 22 channels from just 8 centrally located channels. The predicted signals were used for feature extraction and classification. The results obtained indicate the notable efficacy of the proposed method, showing an average performance of 78.16% accuracy. The proposed method demonstrated superior performance compared to the traditional approach, with the few-channel prediction even achieving better results than full-channel EEG. Although performance varies among subjects, from 62.30% to an impressive 95.24%, these data demonstrate the capability of the model to provide useful estimates from a reduced electrode set. This highlights its potential to be implemented in MI-based BCI applications, thereby mitigating the time and cost constraints associated with high-density electrode systems.

Language: English

Citations: 0
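The channel-prediction idea above, estimating a full 22-channel montage from 8 measured channels with elastic net regression, can be sketched with scikit-learn's MultiTaskElasticNet. The synthetic data, channel split, and regularization settings below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import MultiTaskElasticNet

# Hypothetical setup: predict 14 "missing" channels from 8 measured ones,
# sample by sample, so that a 22-channel montage can be reconstructed.
rng = np.random.default_rng(0)
n_samples = 5000
measured = rng.standard_normal((n_samples, 8))              # 8 central channels
mixing = rng.standard_normal((8, 14))
missing = measured @ mixing + 0.1 * rng.standard_normal((n_samples, 14))

model = MultiTaskElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=5000)
model.fit(measured, missing)                                 # learn channel-to-channel mapping
reconstructed = np.hstack([measured, model.predict(measured)])
print(reconstructed.shape)                                   # (5000, 22) full-montage estimate
```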

AMFTCNet: A multi-level attention-based multi-scale fusion temporal convolutional network for decoding MI-EEG signals
Qiang Huang, Yuan Yang, Jun Li

et al.

Biomedical Signal Processing and Control, Journal Year: 2025, Volume and Issue: 108, P. 107916 - 107916

Published: May 1, 2025

Citations: 0