Lecture notes in computer science, Journal year: 2024, Number: unknown, pp. 91 - 109
Published: Dec. 1, 2024
Language: English
IEEE Access, Journal year: 2023, Number: 11, pp. 127271 - 127301
Published: Jan. 1, 2023
Brain-computer interfaces (BCIs) have undergone significant advancements in recent years. The integration of deep learning techniques, specifically transformers, has shown promising development in both research and application domains. Transformers, which were originally designed for natural language processing, have now made notable inroads into BCIs, offering a unique self-attention mechanism that adeptly handles the temporal dynamics of brain signals. This comprehensive survey delves into transformers, providing readers with a lucid understanding of their foundational principles, inherent advantages, potential challenges, and diverse applications. In addition to discussing the benefits, we also address limitations such as computational overhead, interpretability concerns, and the data-intensive nature of these models, for a well-rounded analysis. Furthermore, the paper sheds light on the myriad BCI applications that have benefited from the incorporation of transformers. These span motor imagery decoding, emotion recognition, sleep stage analysis, and novel ventures such as speech reconstruction. The review serves as a holistic guide for researchers and practitioners, offering a panoramic view of this transformative landscape. With the inclusion of examples and references, readers will gain a deeper understanding of the topic and its significance to the field.
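To make the self-attention idea referenced in this abstract concrete, the following minimal sketch (not taken from the cited survey) applies multi-head scaled dot-product attention along the time axis of a batch of EEG epochs; the channel count, sequence length, and embedding size are arbitrary placeholders.

```python
# Minimal sketch: self-attention over the time axis of EEG epochs, illustrating
# how transformers can model temporal dynamics of brain signals.
# Shapes and layer sizes are assumptions, not values from the cited paper.
import torch
import torch.nn as nn

class EEGSelfAttention(nn.Module):
    def __init__(self, n_channels=22, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)   # per-time-step channel mixing
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, time, channels) raw EEG epoch
        h = self.embed(x)                             # (batch, time, d_model)
        out, weights = self.attn(h, h, h)             # attend across time steps
        return out, weights                           # weights: (batch, time, time)

if __name__ == "__main__":
    eeg = torch.randn(8, 250, 22)                     # 8 epochs, 250 samples, 22 channels
    out, w = EEGSelfAttention()(eeg)
    print(out.shape, w.shape)                         # (8, 250, 64), (8, 250, 250)
```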
Language: English
Cited: 32
Brain Topography, Journal year: 2025, Number: 38(3)
Published: Feb. 24, 2025
Language: English
Cited: 2
Sensors, Journal year: 2025, Number: 25(5), pp. 1293 - 1293
Published: Feb. 20, 2025
Transformers have rapidly influenced research across various domains. With their superior capability to encode long sequences, they have demonstrated exceptional performance, outperforming existing machine learning methods. There has been a rapid increase in the development of transformer-based models for EEG analysis. The high volume of recently published papers highlights the need for further studies exploring transformer architectures, their key components, and the variants employed, particularly in EEG studies. This paper explores four major architectures: the Time Series Transformer, the Vision Transformer, the Graph Attention Transformer, and hybrid models, along with their variants in recent work. We categorize the models according to their most frequent applications: motor imagery classification, emotion recognition, and seizure detection. The paper also highlights the challenges of applying transformers to EEG datasets and reviews data augmentation and transfer learning as potential solutions explored in recent years. Finally, we provide a summarized comparison of reported results. We hope this review serves as a roadmap for researchers interested in employing transformer architectures.
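As a rough illustration of the Vision-Transformer-style approach mentioned among the surveyed architectures, the toy sketch below (an assumption, not code from the survey) slices an EEG epoch into non-overlapping temporal patches, embeds each patch as a token, and runs a standard transformer encoder; all sizes are placeholders.

```python
# Toy sketch: ViT-style patch tokenization of an EEG epoch followed by a
# standard transformer encoder. Sizes are arbitrary placeholders.
import torch
import torch.nn as nn

class EEGPatchTransformer(nn.Module):
    def __init__(self, n_channels=22, patch_len=25, d_model=64, n_layers=2, n_classes=4):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(n_channels * patch_len, d_model)  # flatten each patch into a token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        b, c, t = x.shape
        n = t // self.patch_len                  # number of non-overlapping patches
        patches = x[..., : n * self.patch_len].reshape(b, c, n, self.patch_len)
        tokens = patches.permute(0, 2, 1, 3).reshape(b, n, c * self.patch_len)
        h = self.encoder(self.proj(tokens))      # (batch, n_patches, d_model)
        return self.head(h.mean(dim=1))          # pooled class logits

if __name__ == "__main__":
    print(EEGPatchTransformer()(torch.randn(2, 22, 1000)).shape)  # torch.Size([2, 4])
```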
Language: English
Cited: 1
Journal of Neuroscience Methods, Journal year: 2024, Number: 406, pp. 110128 - 110128
Published: March 28, 2024
Language: English
Cited: 7
IEEE Access, Journal year: 2024, Number: 12, pp. 62628 - 62647
Published: Jan. 1, 2024
This work reviews the critical challenge of data scarcity in developing Transformer-based models for Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs), specifically focusing on Motor Imagery (MI) decoding. While EEG-BCIs hold immense promise for applications in communication, rehabilitation, and human-computer interaction, limited data availability hinders the use of advanced deep-learning models such as Transformers. In particular, this paper comprehensively analyzes three key strategies to address data scarcity: data augmentation, transfer learning, and inherent attention mechanisms. Data augmentation techniques artificially expand datasets, enhancing model generalizability by exposing models to a wider range of signal patterns. Transfer learning utilizes pre-trained models from related domains, leveraging their learned knowledge to overcome the limitations of small EEG datasets. By thoroughly reviewing current research methodologies, the paper underscores the importance of these strategies in overcoming data scarcity. It critically examines the constraints imposed by limited datasets and showcases the potential solutions being developed to address these challenges. This comprehensive survey, at the intersection of technological advancements, aims to provide an analysis of state-of-the-art EEG-BCI development. By identifying gaps and suggesting future directions, it encourages further exploration and innovation in the field. Ultimately, it aims to contribute to the advancement of more accessible, efficient, and precise BCI systems by addressing this fundamental challenge.
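To ground the data augmentation strategy this abstract describes, here is a short sketch (illustrative only, not the reviewed paper's implementation) of two common EEG augmentation transforms, additive Gaussian noise and circular time shifting, used to enlarge small MI datasets; the parameter values are placeholders.

```python
# Illustrative EEG data augmentation: Gaussian noise injection and circular
# time shifting to artificially expand a small motor-imagery dataset.
# Parameter values are assumptions, not the paper's settings.
import numpy as np

def add_gaussian_noise(epoch, noise_std=0.05, rng=None):
    """epoch: (channels, time). Returns a copy with noise scaled to the signal's std."""
    rng = rng or np.random.default_rng()
    return epoch + rng.normal(0.0, noise_std * np.std(epoch), size=epoch.shape)

def random_time_shift(epoch, max_shift=50, rng=None):
    """Circularly shift the epoch along the time axis by up to max_shift samples."""
    rng = rng or np.random.default_rng()
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(epoch, shift, axis=-1)

def augment_dataset(X, y, n_copies=2, rng=None):
    """X: (n_epochs, channels, time). Returns the dataset enlarged with transformed copies."""
    rng = rng or np.random.default_rng(0)
    X_aug, y_aug = [X], [y]
    for _ in range(n_copies):
        X_aug.append(np.stack([random_time_shift(add_gaussian_noise(e, rng=rng), rng=rng) for e in X]))
        y_aug.append(y)
    return np.concatenate(X_aug), np.concatenate(y_aug)
```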
Language: English
Cited: 6
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Journal year: 2024, Number: 32, pp. 1535 - 1545
Published: Jan. 1, 2024
The motor imagery brain-computer interface (MI-BCI) based on electroencephalography (EEG) is a widely used human-machine interaction paradigm. However, due to the non-stationarity of EEG signals and individual differences among subjects, decoding accuracy is limited, which affects the application of MI-BCI. In this paper, we propose the EISATC-Fusion model for MI decoding, consisting of an inception block, multi-head self-attention (MSA), a temporal convolutional network (TCN), and layer fusion. Specifically, we design a DS Inception block to extract multi-scale frequency band information, and a new cnnCosMSA module based on CNN and cos attention to solve attention collapse and improve the interpretability of the model. The TCN is improved with depthwise separable convolution to reduce the number of parameters. Layer fusion consists of feature fusion and decision fusion, fully utilizing the features and outputs of the model and enhancing its robustness. We adopt a two-stage training strategy, with early stopping to prevent overfitting; the loss and accuracy on the validation set are used as early-stopping indicators. The proposed model achieves within-subject classification accuracies of 84.57% and 87.58% on BCI Competition IV Datasets 2a and 2b, respectively, and cross-subject accuracies of 67.42% and 71.23% (by transfer learning) when trained with two sessions and one session of Dataset 2a, respectively. The interpretability of the model is demonstrated through a weight visualization method. Index Terms: Brain-computer interface (BCI)
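The sketch below is a rough structural illustration, not the authors' EISATC-Fusion code: it only shows how the building blocks named in the abstract (a multi-scale inception-style convolution, multi-head self-attention, and a dilated temporal convolution stack) can be chained for MI-EEG classification; all layer sizes and the class head are assumptions.

```python
# Structural sketch of an inception + self-attention + temporal-convolution pipeline
# for MI-EEG decoding. This is NOT the published EISATC-Fusion implementation;
# every size and hyperparameter here is a placeholder.
import torch
import torch.nn as nn

class InceptionTemporal(nn.Module):
    """Parallel temporal convolutions with different kernel sizes (multi-scale bands)."""
    def __init__(self, n_channels=22, n_filters=8, kernels=(16, 32, 64)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(n_channels, n_filters, k, padding=k // 2) for k in kernels
        )

    def forward(self, x):                        # x: (batch, channels, time)
        outs = [b(x) for b in self.branches]
        t = min(o.shape[-1] for o in outs)       # align lengths across branches
        return torch.cat([o[..., :t] for o in outs], dim=1)

class SketchMIDecoder(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, d_model=24, n_heads=4):
        super().__init__()
        self.inception = InceptionTemporal(n_channels, n_filters=d_model // 3)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.tcn = nn.Sequential(                # small dilated temporal conv stack
            nn.Conv1d(d_model, d_model, 3, dilation=1, padding=1), nn.ELU(),
            nn.Conv1d(d_model, d_model, 3, dilation=2, padding=2), nn.ELU(),
        )
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        h = self.inception(x)                    # (batch, d_model, time)
        a, _ = self.attn(h.transpose(1, 2), h.transpose(1, 2), h.transpose(1, 2))
        h = self.tcn(a.transpose(1, 2))          # (batch, d_model, time)
        return self.head(h.mean(dim=-1))         # global average pooling, then logits

if __name__ == "__main__":
    logits = SketchMIDecoder()(torch.randn(4, 22, 1000))
    print(logits.shape)                          # torch.Size([4, 4])
```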
Language: English
Cited: 4
Frontiers in Computer Science, Journal year: 2024, Number: 6
Published: June 10, 2024
The advancement of communication and internet technology has brought risks to network security. Thus, Intrusion Detection Systems (IDS) were developed to combat malicious attacks. However, IDSs still struggle with accuracy, false alarms, and detecting new intrusions. Therefore, organizations are using Machine Learning (ML) and Deep Learning (DL) algorithms in IDS for more accurate attack detection. This paper provides an overview of IDS, including its classes and methods, the detected attacks, as well as the datasets, metrics, and performance indicators used. A thorough examination of recent publications on IDS-based solutions is conducted, evaluating their strengths and weaknesses, with a discussion of potential implications, research challenges, and trends. We believe that this comprehensive review covers the most recent advances and developments in ML- and DL-based IDS and also facilitates future research into emerging Artificial Intelligence (AI) to address the growing complexity of cybersecurity challenges.
Language: English
Cited: 3
Biomedical Signal Processing and Control, Journal year: 2024, Number: 97, pp. 106717 - 106717
Published: Aug. 14, 2024
Language: English
Cited: 3
Biomedical Signal Processing and Control, Journal year: 2025, Number: 104, pp. 107640 - 107640
Published: Jan. 28, 2025
Language: English
Cited: 0
Lecture notes in electrical engineering, Journal year: 2025, Number: unknown, pp. 159 - 173
Published: Jan. 1, 2025
Language: English
Cited: 0