Expert Systems with Applications, Journal Year: 2024, Volume and Issue: 258, P. 125094 - 125094
Published: Aug. 22, 2024
Language: English
Biomedical Signal Processing and Control, Journal Year: 2024, Volume and Issue: 101, P. 107221 - 107221
Published: Nov. 20, 2024
Language: English
Citations: 12
Biomedical & Pharmacology Journal, Journal Year: 2025, Volume and Issue: 18 (December Spl Edition), P. 99 - 119
Published: Jan. 20, 2025
Brain tumor identification through biomedical magnetic resonance imaging (MRI) presents a critical challenge in diagnostic imaging, where high accuracy is essential for informed treatment planning. Traditional methods face limitations in segmentation precision, leading to increased misdiagnosis risks. This study introduces a hybrid deep-learning model integrating a Vision Transformer (ViT) and a Capsule Network (CapsNet) to improve brain tumor classification accuracy. The model aims to enhance sensitivity and specificity in tumor categorization. Utilising the BRATS2020 dataset, which comprises 6,000 MRI scans across four classes (meningioma, glioma, pituitary tumor, no tumor), the data were divided into an 80-20 training-testing split. Data pre-processing included scaling, normalization, and feature augmentation for robustness. The ViT-CapsNet model was assessed alongside individual ViT and CapsNet models using accuracy, precision, recall, F1-score, and AUC-ROC metrics. ViT-CapsNet achieved an accuracy of 90%, precision and recall of 89%, and an F1-score of 89.5%, outperforming the individual models. It yielded a 4-5% improvement across tumor types, with notable gains for gliomas and pituitary tumors. Unlike prior methods achieving 88% accuracy, our approach demonstrates a superior 90%. The approach offers a promising solution for more accurate brain tumor detection. Future research could explore refining the fusion techniques, applying advanced interpretability methods, and expanding the model's application to various clinical environments.
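For illustration, a minimal PyTorch sketch of the late-fusion idea described above: a ViT-style transformer branch over image patches and a convolutional branch (standing in for the capsule feature extractor, which is not reproduced here) are concatenated and fed to a 4-class head. This is not the authors' ViT-CapsNet; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class HybridFusionClassifier(nn.Module):
    """Toy two-branch fusion model for 4-class MRI classification."""
    def __init__(self, img_size=224, patch=16, dim=256, num_classes=4):
        super().__init__()
        num_patches = (img_size // patch) ** 2
        # Transformer branch: linear patch embedding + encoder (ViT-style)
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Convolutional branch (simple stand-in for the capsule extractor)
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, x):
        t = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos_embed
        t = self.encoder(t).mean(dim=1)      # pooled transformer features
        c = self.conv(x)                     # pooled convolutional features
        return self.head(torch.cat([t, c], dim=1))

logits = HybridFusionClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```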
Language: English
Citations: 1
Image and Vision Computing, Journal Year: 2024, Volume and Issue: 147, P. 105064 - 105064
Published: May 3, 2024
Language: English
Citations: 6
Information Fusion, Journal Year: 2024, Volume and Issue: 112, P. 102592 - 102592
Published: July 20, 2024
Language: English
Citations: 4
Cancer Imaging, Journal Year: 2024, Volume and Issue: 24(1)
Published: Dec. 23, 2024
Abstract Objective This study aims to evaluate the effectiveness of deep learning features derived from multi-sequence magnetic resonance imaging (MRI) in determining O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status among glioblastoma patients. Methods Clinical, pathological, and MRI data of 356 patients (251 methylated, 105 unmethylated) were retrospectively examined from the public dataset The Cancer Imaging Archive. Each patient underwent preoperative brain MRI scans, which included T1-weighted (T1WI) and contrast-enhanced T1-weighted (CE-T1WI) sequences. Regions of interest (ROIs) were delineated to identify the necrotic tumor core (NCR), enhancing tumor (ET), and peritumoral edema (PED). The ET and NCR regions were categorized as intratumoral ROIs, whereas the PED region was categorized as a peritumoral ROI. Predictive models were developed using the Transformer algorithm based on intratumoral, peritumoral, and combined features. The area under the receiver operating characteristic curve (AUC) was employed to assess predictive performance. Results ROI-based models built from these regions, utilizing Transformer algorithms on multi-sequence MRI, were capable of predicting MGMT promoter methylation status. The combined model exhibited superior diagnostic performance relative to the individual models, achieving an AUC of 0.923 (95% confidence interval [CI]: 0.890-0.948) under stratified cross-validation, with sensitivity and specificity of 86.45% and 87.62%, respectively. Conclusion Deep learning features derived from multi-sequence MRI can effectively distinguish between glioblastoma patients with and without MGMT promoter methylation.
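As an illustration of the evaluation protocol described above (not the paper's code), the following scikit-learn sketch computes a pooled stratified cross-validated AUC and a bootstrap 95% CI for a binary methylation-style label; the features and the logistic-regression classifier are placeholders standing in for the deep MRI features and Transformer model.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(356, 64))           # placeholder feature vectors
y = np.array([1] * 251 + [0] * 105)      # 251 methylated, 105 unmethylated

scores, labels = [], []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    scores.append(clf.predict_proba(X[te])[:, 1])
    labels.append(y[te])
scores, labels = np.concatenate(scores), np.concatenate(labels)
print("pooled AUC:", roc_auc_score(labels, scores))

# Bootstrap the pooled out-of-fold predictions for a 95% CI
boots = [roc_auc_score(labels[i], scores[i])
         for i in (rng.integers(0, len(labels), len(labels)) for _ in range(1000))]
print("95% CI:", np.percentile(boots, [2.5, 97.5]))
```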
Language: English
Citations: 4
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)
Published: Jan. 2, 2025
Cervical cancer is one of the deadliest cancers and poses a significant threat to women's health. Early detection and treatment are the commonly used means of preventing cervical cancer. The use of pathological image analysis techniques for automatic interpretation of cervical cells in slides is a prominent research area in the field of digital medicine. According to The Bethesda System, cervical cytology necessitates further classification of precancerous lesions based on positive interpretations. However, the clinical definitions among different categories of lesions are complex and often characterized by fuzzy boundaries. In addition, pathologists can deduce the criteria of judgment differently, leading to potential confusion during data labeling. Noisy labels arising for this reason pose a great challenge for supervised learning. To address the problem caused by noisy labels, we propose a label credibility correction method for cervical cell images based on a deep network. Firstly, a contrastive learning network is used to extract discriminative features from the images and obtain more similar intra-class sample features. Subsequently, these features are fed into an unsupervised clustering step, resulting in cluster labels. These are then corresponded with the given labels to separate confusable samples from typical samples. Through similarity comparison between cluster samples and the statistical feature centers of each class, label credibility correction is carried out group by group. Finally, a multi-class classification network is trained using the synergistic grouping method. In order to enhance the stability of the model, a momentum term is incorporated into the loss. Experimental validation is conducted on a dataset comprising approximately 60,000 images from multiple hospitals, showcasing the effectiveness of our proposed approach. The method achieves a 2-class task accuracy of 0.9241 and a 5-class task accuracy of 0.8598, performing better than existing networks.
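A simplified sketch of the cluster-versus-label comparison step described above (not the authors' pipeline): embeddings are clustered without labels, each cluster is mapped to its majority given label, and samples whose cluster-majority label disagrees with their own label are flagged as confusable candidates for credibility correction. The embeddings below are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 128))   # stand-in for contrastive embeddings
given = rng.integers(0, 5, size=1000)  # possibly noisy 5-class labels

# Unsupervised clustering of the embedding space
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(feats)

# Map each cluster to the majority label among its members
majority = {k: np.bincount(given[clusters == k]).argmax() for k in range(5)}
cluster_label = np.array([majority[k] for k in clusters])

# Samples whose cluster-majority label disagrees with their given label
confusable = np.flatnonzero(cluster_label != given)
print(f"{len(confusable)} of {len(feats)} samples flagged for label review")
```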
Language: English
Citations: 0
Deleted Journal, Journal Year: 2025, Volume and Issue: unknown
Published: Jan. 27, 2025
This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4,217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, cataracts, and diabetic retinopathy are common eye diseases that can cause vision loss if not treated. These diseases must be identified in the early stages to prevent damage progression. This paper focuses on the accurate identification and analysis of these disparate diseases from ophthalmoscopy images. Deep learning (DL) has been widely used in image recognition for the detection and treatment of diseases. In this study, ResNet50, DenseNet121, Inception-ResNetV2, and six ViT variations are employed, and their performance in diagnosing diseases such as diabetic retinopathy is evaluated. In particular, the article uses the vision transformer model for automated diagnosis of eye diseases, highlighting the accuracy of pre-trained deep transfer learning (DTL) structures. The updated augmented-regularized ViT#5 (AugReg ViT-L/16_224) with a learning rate of 0.00002 outperforms state-of-the-art techniques, obtaining a data-based score of 98.1% on a publicly accessible dataset that includes the most disease categories, and surpasses other convolutional-based models in terms of precision, recall, and F1 score. This research contributes significantly to medical image analysis, demonstrating the role of AI in enhancing the precision of disease diagnoses and advocating for the integration of artificial intelligence in medical diagnostics.
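A minimal transfer-learning sketch in the spirit of the study above (an assumed setup, not the paper's exact configuration or backbone): an ImageNet-pretrained ViT-B/16 from torchvision is frozen and only a new 4-class head (e.g., cataract, glaucoma, diabetic retinopathy, normal) is trained with the paper-style small learning rate.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Pretrained weights are downloaded on first use
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
for p in model.parameters():            # freeze the pretrained backbone
    p.requires_grad = False
model.heads.head = nn.Linear(model.heads.head.in_features, 4)  # new trainable head

optimizer = torch.optim.Adam(model.heads.head.parameters(), lr=2e-5)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```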
Language: English
Citations: 0
Microscopy Research and Technique, Journal Year: 2025, Volume and Issue: unknown
Published: Feb. 2, 2025
ABSTRACT Pathology-based decision support systems in clinical settings have faced impediments from data preparation beforehand, large-scale manual annotations, and poor domain generalization. We report a unified hybrid framework requiring only raw, slide-level labeled images. The method, which we termed PathoCoder, comprises core feature extractors, a combiner/reduction step, and a supervised classifier. It is trained (through 5-fold cross-validation) on 2,452 SurePath cervical liquid-based whole-slide captures provided in a Mendeley repository. Tests resulted in 98.37% accuracy, 98.41% precision, and 98.37% recall and F1, respectively. Extensive experiments validate that the proposed scheme is versatile enough to accommodate epithelial ovarian tumor histotypes. Our method paves the way for more accelerated advancements in pathology AI by reducing patch/pixel-based annotation and good-tissue-quality dependency. Its applicability spans diverse classification tasks with varying content and holds potential for real-world implementation.
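A hedged sketch of the extract-reduce-classify pattern described above (not PathoCoder itself): slide-level feature vectors are standardized, reduced, and classified, with 5-fold cross-validation. The features, PCA reduction, and SVM classifier are illustrative placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))    # stand-in slide-level feature vectors
y = rng.integers(0, 4, size=300)   # stand-in slide-level class labels

pipeline = make_pipeline(StandardScaler(), PCA(n_components=64), SVC())
print("5-fold CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```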
Language: English
Citations: 0
Neurocomputing, Journal Year: 2025, Volume and Issue: 633, P. 129771 - 129771
Published: Feb. 26, 2025
Language: English
Citations: 0
Neural Computing and Applications, Journal Year: 2025, Volume and Issue: unknown
Published: March 7, 2025
Abstract Cephalometric analysis is essential for the diagnosis and treatment planning of orthodontics. In lateral cephalograms, however, the manual detection of anatomical landmarks is a time-consuming procedure. Deep learning solutions hold potential to address the time constraints associated with such tasks; however, concerns regarding their performance have been observed. To address this critical issue, we propose an end-to-end cascaded deep learning framework (Self-CephaloNet) for the task, which demonstrates benchmark performance on the ISBI 2015 dataset in predicting 19 cephalometric landmarks. Due to their adaptive nodal capabilities, Self-ONNs (self-operational neural networks) show superior learning on complex feature spaces compared with conventional convolutional networks. To leverage this attribute, we introduce a novel self-bottleneck into the HRNetV2 (high-resolution network) backbone, which has exhibited benchmark performance on our landmark detection task. Our first-stage result surpasses previous studies, showcasing the efficacy of the singular model, which achieves a remarkable 70.95% success rate in detecting landmarks within a 2-mm range on the Test1 and Test2 datasets that are part of the ISBI 2015 dataset. Moreover, the second stage significantly improves the overall performance, yielding an impressive 82.25% average success rate for the above datasets within the same distance. Furthermore, external validation was conducted using the PKU cephalogram dataset, where the model achieved a commendable 75.95% success rate within the 2-mm range.
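For reference, a short NumPy sketch of the success-rate metric quoted above (success detection rate within a 2-mm radius), computed from predicted and ground-truth landmark coordinates given a pixel spacing; the coordinates, error scale, and 0.1 mm spacing below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sdr_within(pred, gt, pixel_spacing_mm, threshold_mm=2.0):
    """pred, gt: arrays of shape (num_images, num_landmarks, 2) in pixel coordinates."""
    dist_mm = np.linalg.norm(pred - gt, axis=-1) * pixel_spacing_mm
    return float((dist_mm <= threshold_mm).mean())

rng = np.random.default_rng(0)
gt = rng.uniform(0, 1900, size=(10, 19, 2))          # 19 landmarks per cephalogram
pred = gt + rng.normal(scale=15, size=gt.shape)      # simulated prediction error (pixels)
print(f"SDR@2mm: {sdr_within(pred, gt, pixel_spacing_mm=0.1):.2%}")
```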
Language: English
Citations: 0