Biomedical Signal Processing and Control, Journal year: 2025, Issue 107, pp. 107838 - 107838
Published: March 26, 2025
Language: English
Deleted Journal, Journal year: 2024, Issue unknown
Published: June 5, 2024
Abstract Skin cancer is one of the most frequently occurring cancers worldwide, and early detection is crucial for effective treatment. Dermatologists often face challenges such as heavy data demands, potential human errors, and strict time limits, which can negatively affect diagnostic outcomes. Deep learning–based systems offer quick, accurate testing and enhanced research capabilities, providing significant support to dermatologists. In this study, we modify the Swin Transformer architecture by implementing hybrid shifted window-based multi-head self-attention (HSW-MSA) in place of the conventional shifted window-based multi-head self-attention (SW-MSA). This adjustment enables the model to more efficiently process overlapping skin regions, capture finer details, and manage long-range dependencies, while maintaining memory usage and computational efficiency during training. Additionally, the study replaces the standard multi-layer perceptron (MLP) with a SwiGLU-based MLP, an upgraded version of the gated linear unit (GLU) module, to achieve higher accuracy, faster training speeds, and better parameter efficiency. The modified Swin-Base model was evaluated using the publicly accessible ISIC 2019 dataset with eight classes and compared against popular convolutional neural networks (CNNs) and cutting-edge vision transformer (ViT) models. In an exhaustive assessment on an unseen test dataset, the proposed Swin-Base model demonstrated exceptional performance, achieving an accuracy of 89.36%, a recall of 85.13%, a precision of 88.22%, and an F1-score of 86.65%, surpassing all previously reported deep learning models documented in the literature.
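The SwiGLU-based MLP mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general SwiGLU formulation (a Swish-gated linear unit), not the paper's exact implementation; the layer sizes and weight names are hypothetical.

```python
import numpy as np

def swish(x, beta=1.0):
    # Swish (SiLU) activation: x * sigmoid(beta * x)
    return x / (1.0 + np.exp(-beta * x))

def swiglu_mlp(x, W_gate, W_up, W_down):
    # SwiGLU MLP: a Swish-activated "gate" branch multiplies a linear
    # "up" branch elementwise, then a final projection maps back down:
    #   SwiGLU(x) = (Swish(x @ W_gate) * (x @ W_up)) @ W_down
    return (swish(x @ W_gate) * (x @ W_up)) @ W_down

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 16                 # hypothetical layer sizes
x = rng.standard_normal((2, d_model))     # two example tokens
W_gate = rng.standard_normal((d_model, d_hidden))
W_up = rng.standard_normal((d_model, d_hidden))
W_down = rng.standard_normal((d_hidden, d_model))
y = swiglu_mlp(x, W_gate, W_up, W_down)
print(y.shape)  # (2, 8): output keeps the model dimension
```

Compared with a standard two-layer MLP, the extra gating branch lets the network learn which hidden features to pass through, which is the source of the parameter-efficiency claim.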
Language: English
Cited by: 22
Cluster Computing, Journal year: 2024, Issue 27(8), pp. 11187 - 11212
Published: May 20, 2024
Abstract The early and accurate diagnosis of brain tumors is critical for effective treatment planning, with Magnetic Resonance Imaging (MRI) serving as a key tool in the non-invasive examination of such conditions. Despite advancements in Computer-Aided Diagnosis (CADx) systems powered by deep learning, the challenge of accurately classifying brain tumors from MRI scans persists due to the high variability of tumor appearances and the subtlety of early-stage manifestations. This work introduces a novel adaptation of the EfficientNetv2 architecture, enhanced with a Global Attention Mechanism (GAM) and Efficient Channel Attention (ECA), aimed at overcoming these hurdles. This enhancement not only amplifies the model's ability to focus on salient features within complex images but also significantly improves the classification accuracy of brain tumors. Our approach distinguishes itself by meticulously integrating attention mechanisms that systematically enhance feature extraction, thereby achieving superior performance in detecting a broad spectrum of tumors. Demonstrated through extensive experiments on a large public dataset, our model achieves an exceptionally high test accuracy of 99.76%, setting a new benchmark for MRI-based classification. Moreover, the incorporation of Grad-CAM visualization techniques sheds light on the decision-making process, offering transparent and interpretable insights that are invaluable for clinical assessment. By addressing limitations inherent in previous models, this study advances the field of medical imaging analysis and highlights the pivotal role of enhancing the interpretability of deep learning models in diagnosis. This research sets the stage for advanced CADx systems and improved patient care outcomes.
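The channel-attention idea behind ECA can be sketched in a few lines of NumPy: squeeze each channel to a scalar, let a small 1D convolution mix nearby channel descriptors, then gate the feature map with a sigmoid. This is a schematic, assuming uniform (untrained) kernel weights as a placeholder for the learned ones; it is not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eca_attention(feat, kernel_size=3):
    # feat: (C, H, W) feature map from the backbone.
    # 1) Squeeze: global average pooling gives one descriptor per channel.
    desc = feat.mean(axis=(1, 2))                       # shape (C,)
    # 2) Local cross-channel interaction: a 1D conv over the channel
    #    descriptors (circular padding keeps the length at C).
    pad = kernel_size // 2
    padded = np.concatenate([desc[-pad:], desc, desc[:pad]])
    kernel = np.full(kernel_size, 1.0 / kernel_size)    # placeholder weights
    scores = np.convolve(padded, kernel, mode="valid")  # shape (C,)
    # 3) Excite: sigmoid gates rescale each channel of the feature map.
    weights = sigmoid(scores)
    return feat * weights[:, None, None]

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 5, 5))  # hypothetical 4-channel map
out = eca_attention(feat)
print(out.shape)  # (4, 5, 5): same shape, channels reweighted
```

The appeal of ECA over heavier attention modules is exactly this locality: the only learned parameters are the few 1D-kernel weights, so the accuracy gain comes at negligible cost.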
Language: English
Cited by: 20
Neuroscience, Journal year: 2025, Issue unknown
Published: Jan. 1, 2025
Language: English
Cited by: 16
Biomedical Signal Processing and Control, Journal year: 2025, Issue 104, pp. 107627 - 107627
Published: Jan. 28, 2025
Language: English
Cited by: 12
Scientific Reports, Journal year: 2025, Issue 15(1)
Published: Feb. 10, 2025
Skin cancer represents a significant global health concern, where early and precise diagnosis plays a pivotal role in improving treatment efficacy and patient survival rates. Nonetheless, the inherent visual similarities between benign and malignant lesions pose substantial challenges to accurate classification. To overcome these obstacles, this study proposes an innovative hybrid deep learning model that combines ConvNeXtV2 blocks with separable self-attention mechanisms, tailored to enhance feature extraction and optimize classification performance. The inclusion of ConvNeXtV2 blocks in the initial two stages is driven by their ability to effectively capture fine-grained local features and subtle patterns, which are critical for distinguishing visually similar lesion types. Meanwhile, the adoption of separable self-attention in the later stages allows the model to selectively prioritize diagnostically relevant regions while minimizing computational complexity, addressing inefficiencies often associated with traditional self-attention mechanisms. The model was comprehensively trained and validated on the ISIC 2019 dataset, which includes eight distinct skin lesion categories. Advanced methodologies such as data augmentation and transfer learning were employed to further enhance robustness and reliability. The proposed architecture achieved exceptional performance metrics, with 93.48% accuracy, 93.24% precision, 90.70% recall, and a 91.82% F1-score, outperforming over ten Convolutional Neural Network (CNN) based and Vision Transformer (ViT) based models tested under comparable conditions. Despite its robust performance, the model maintains a compact design with only 21.92 million parameters, making it highly efficient and suitable for deployment. The proposed model demonstrates strong accuracy and generalizability across diverse lesion classes, establishing a reliable framework for clinical practice.
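Separable self-attention replaces the quadratic token-by-token score matrix of standard attention with a single vector of context scores, so cost grows linearly with the number of tokens. The sketch below follows the general MobileViT v2 formulation of this idea; it is an assumption about the variant used here, and all weight names and sizes are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def separable_self_attention(x, w_i, W_k, W_q, W_o):
    # x: (n, d) token sequence. Standard attention builds an (n, n)
    # score matrix; here an (n,) score vector from one latent
    # projection summarizes the sequence into a single context vector.
    scores = softmax(x @ w_i)                            # (n,) context scores
    context = (scores[:, None] * (x @ W_k)).sum(axis=0)  # (d,) context vector
    gated = np.maximum(x @ W_q, 0.0) * context           # broadcast over tokens
    return gated @ W_o                                   # (n, d) output

rng = np.random.default_rng(2)
n, d = 6, 4                                              # hypothetical sizes
x = rng.standard_normal((n, d))
w_i = rng.standard_normal(d)
W_k, W_q, W_o = (rng.standard_normal((d, d)) for _ in range(3))
y = separable_self_attention(x, w_i, W_k, W_q, W_o)
print(y.shape)  # (6, 4)
```

Because no (n, n) matrix is ever materialized, memory and compute stay O(n), which is what makes pairing this mechanism with convolutional stages attractive in a compact 21.92M-parameter model.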
Language: English
Cited by: 9
Journal Of Big Data, Journal year: 2025, Issue 12(1)
Published: Feb. 6, 2025
Language: English
Cited by: 7
Bioengineering, Journal year: 2025, Issue 12(1), pp. 62 - 62
Published: Jan. 13, 2025
The timely and accurate detection of brain tumors is crucial for effective medical intervention, especially in resource-constrained settings. This study proposes a lightweight and efficient RetinaNet variant tailored for edge device deployment. The model reduces computational overhead while maintaining high accuracy by replacing the computationally intensive ResNet backbone with MobileNet, leveraging depthwise separable convolutions. The modified model achieves an average precision (AP) of 32.1, surpassing state-of-the-art models in small tumor detection (APS: 14.3) and large tumor localization (APL: 49.7). Furthermore, it significantly reduces computational costs, making real-time analysis feasible on low-power hardware. Clinical relevance is a key focus of this work. The proposed model addresses the diagnostic challenges of small, variable-sized tumors often overlooked by existing methods. Its lightweight architecture enables deployment on portable devices, bridging the accessibility gap in underserved regions. Extensive experiments on the BRATS dataset demonstrate robustness across tumor sizes and configurations, with confidence scores consistently exceeding 81%. This advancement holds potential for improving early detection, particularly in remote areas lacking advanced infrastructure, thereby contributing to better patient outcomes and broader adoption of AI-driven diagnostic tools.
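The parameter savings from depthwise separable convolutions, which motivate the MobileNet backbone swap above, can be checked with simple arithmetic. The filter and channel sizes below are illustrative, not taken from the paper:

```python
# Parameter counts for one convolutional layer, ignoring biases.
def standard_conv_params(k, c_in, c_out):
    # A standard conv mixes space and channels in one
    # k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel;
    # pointwise step: a 1 x 1 conv to mix channels.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 128      # illustrative sizes
std = standard_conv_params(k, c_in, c_out)        # 147456
sep = depthwise_separable_params(k, c_in, c_out)  # 17536
print(std, sep, round(std / sep, 1))  # roughly an 8.4x reduction
```

The same factoring also cuts multiply-accumulate operations by a similar ratio, which is what makes real-time inference on low-power edge hardware plausible.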
Language: English
Cited by: 3
Computers in Biology and Medicine, Journal year: 2025, Issue 188, pp. 109790 - 109790
Published: Feb. 13, 2025
Language: English
Cited by: 3
Results in Engineering, Journal year: 2024, Issue unknown, pp. 103692 - 103692
Published: Dec. 1, 2024
Language: English
Cited by: 13
Scientific Reports, Journal year: 2025, Issue 15(1)
Published: Feb. 21, 2025
Oral cavity cancer exhibits high morbidity and mortality rates. Therefore, it is essential to diagnose the disease at an early stage. Machine learning and convolutional neural networks (CNN) are powerful tools for diagnosing mouth and oral cancer. In this study, we design a lightweight explainable network (LWENet) with label-guided attention (LGA) to provide a second opinion to the expert. The LWENet contains depth-wise separable convolution layers to reduce computation costs. Moreover, the LGA module provides label consistency with neighbor pixels and improves spatial features. Furthermore, an AMSA (axial multi-head self-attention) based ViT encoder is incorporated in the model for global attention. Our ViT (vision transformer) encoder is computationally efficient compared to the classical ViT encoder. We tested LWENet performance on the MOD (mouth disease) and OCI (oral cancer image) datasets and compared the results with other CNN and ViT based methods. LWENet achieved precision and F1-scores of 96.97% and 98.90% on the MOD dataset, and 99.48% and 98.23% on the OCI dataset, respectively. By incorporating Grad-CAM, we visualize the decision-making process, enhancing interpretability. This work demonstrates the potential of LWENet in facilitating early detection.
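The efficiency claim for the axial-attention encoder above can be illustrated by comparing attention score-matrix sizes. The arithmetic below is a back-of-the-envelope sketch with an illustrative feature-map size, ignoring heads and projection costs:

```python
# Attention score counts for an H x W feature map, per head.
def full_attention_cost(h, w):
    # Full 2D self-attention compares every token pair: (H*W)^2 scores.
    return (h * w) ** 2

def axial_attention_cost(h, w):
    # Axial attention attends along rows, then along columns:
    # each of the H*W tokens scores only the H + W tokens
    # sharing its row or column.
    return h * w * (h + w)

h = w = 32                          # illustrative feature-map size
full = full_attention_cost(h, w)    # 1048576
axial = axial_attention_cost(h, w)  # 65536
print(full // axial)  # 16: axial attention needs 16x fewer scores here
```

The gap widens with resolution, which is why axial factorizations are a common way to keep global attention affordable in lightweight encoders.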
Language: English
Cited by: 1