Multimodal deep learning fusion of ultrafast-DCE MRI and clinical information for breast lesion classification DOI Creative Commons

Belinda Lokaj,

Valentin Durand de Gevigney,

Dahila-Amal Djema

et al.

Computers in Biology and Medicine, Journal Year: 2025, Volume and Issue: 188, P. 109721 - 109721

Published: Feb. 19, 2025

Breast cancer is the most common cancer worldwide, and magnetic resonance imaging (MRI) constitutes a very sensitive technique for invasive cancer detection. When reviewing breast MRI examinations, clinical radiologists rely on multimodal information, composed of imaging data but also information not present in the images, such as clinical information. Most machine learning (ML) approaches are well suited to a single modality of data. However, attention-based architectures, such as Transformers, are flexible and therefore good candidates for integrating multimodal data. The aim of this study was to develop and evaluate a novel deep learning (DL) model combining ultrafast dynamic contrast-enhanced (UF-DCE) MRI images, lesion characteristics, and clinical information for breast lesion classification. From 2019 to 2023, UF-DCE images and radiology reports of 240 patients were retrospectively collected from a single center and annotated. Imaging data constituted volumes of interest (VOI) extracted around segmented lesions. Non-imaging data comprised both clinical (categorical) and geometrical (scalar) information. Clinical data were annotated and associated with their corresponding lesions. We compared the diagnostic performances of traditional ML methods on non-imaging data, an image-based DL architecture, and a Transformer-based Multimodal Sieve Transformer with Vision encoder (MMST-V). The final dataset included 987 lesions (280 benign lesions, 121 malignant lesions, 586 benign lymph nodes) and 1081 reports. For the classification of non-imaging data, scalar information had a greater influence (area under the receiver operating characteristic curve (AUROC) = 0.875 ± 0.042) than categorical information (AUROC = 0.680 ± 0.060). MMST-V achieved better performance (AUROC = 0.928 ± 0.027) than classification with non-imaging data alone (AUROC = 0.900 ± 0.045) and imaging data only (AUROC = 0.863 ± 0.025). The proposed adaptative approach can consider redundant information provided by the different modalities and demonstrated better performance than unimodal methods. The results highlight that combining patient images with detailed additional clinical knowledge enhances breast lesion classification on MRI.
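
The fusion idea above, image tokens from the UF-DCE volume concatenated with embedded categorical and projected scalar clinical features before a shared Transformer encoder, can be sketched as follows. This is a minimal, illustrative PyTorch example; the module layout, token counts, and dimensions are assumptions for demonstration, not the authors' MMST-V implementation.

```python
# Minimal sketch of multimodal fusion in the spirit of MMST-V: image VOI tokens
# and clinical tokens are concatenated and passed through a Transformer encoder.
# Dimensions and module names are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class MultimodalFusionClassifier(nn.Module):
    def __init__(self, n_categorical=8, n_categories=16, n_scalar=4, d_model=64):
        super().__init__()
        # 3D CNN stem turns a VOI (1x16x32x32) into a sequence of image tokens.
        self.image_encoder = nn.Sequential(
            nn.Conv3d(1, d_model, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(d_model, d_model, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # One embedding per categorical clinical variable, one linear map for scalars.
        self.cat_embed = nn.Embedding(n_categories, d_model)
        self.scalar_proj = nn.Linear(1, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)  # benign vs. malignant

    def forward(self, voi, categorical, scalars):
        b = voi.shape[0]
        img = self.image_encoder(voi)                          # (B, d, D, H, W)
        img_tokens = img.flatten(2).transpose(1, 2)            # (B, N_img, d)
        cat_tokens = self.cat_embed(categorical)               # (B, N_cat, d)
        sca_tokens = self.scalar_proj(scalars.unsqueeze(-1))   # (B, N_sca, d)
        cls = self.cls_token.expand(b, -1, -1)
        tokens = torch.cat([cls, img_tokens, cat_tokens, sca_tokens], dim=1)
        fused = self.transformer(tokens)
        return self.head(fused[:, 0])                          # classify from the CLS token

model = MultimodalFusionClassifier()
logits = model(torch.randn(2, 1, 16, 32, 32),
               torch.randint(0, 16, (2, 8)),
               torch.randn(2, 4))
print(logits.shape)  # torch.Size([2, 2])
```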

Language: English

GAN review: Models and medical image fusion applications DOI
Tao Zhou, Qi Li,

Huiling Lu

et al.

Information Fusion, Journal Year: 2022, Volume and Issue: 91, P. 134 - 148

Published: Oct. 20, 2022

Language: English

Citations

134

Deep learning methods for medical image fusion: A review DOI
Tao Zhou,

Qianru Cheng,

Huiling Lu

et al.

Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 160, P. 106959 - 106959

Published: April 20, 2023

Language: English

Citations

70

Alzheimer Disease Classification through Transfer Learning Approach DOI Creative Commons

Noman Raza,

Asma Naseer, Maria Tamoor

et al.

Diagnostics, Journal Year: 2023, Volume and Issue: 13(4), P. 801 - 801

Published: Feb. 20, 2023

Alzheimer’s disease (AD) is a slow neurological disorder that destroys the thought process and consciousness of a human. It directly affects the development of mental ability and neurocognitive functionality. The number of patients with AD is increasing day by day, especially in old aged people above 60 years of age, and, gradually, it becomes the cause of their death. In this research, we discuss the segmentation and classification of Magnetic resonance imaging (MRI) of Alzheimer’s disease through the concept of transfer learning, customizing a convolutional neural network (CNN) specifically using images of the segmented Gray Matter (GM) of the brain. Instead of training the proposed model and computing its accuracy from scratch, we used a pre-trained deep learning model as our base model and, after that, transfer learning was applied. The model was tested over different numbers of epochs: 10, 25, and 50. The overall accuracy achieved was 97.84%.
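
As a concrete illustration of the transfer-learning setup described above (pre-trained backbone kept fixed, new classification head trained on the MRI data), here is a minimal PyTorch sketch. The ResNet-18 backbone and the four-class label set are placeholder assumptions, not the paper's exact configuration.

```python
# Minimal transfer-learning sketch: freeze a pre-trained backbone and fine-tune
# a new head for Alzheimer's stage classification. The choice of ResNet-18 and
# of four output classes is illustrative only.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # assumed label set (e.g. non-demented ... moderate-demented)

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False          # keep pre-trained features fixed

backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data shaped like RGB MRI slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```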

Language: English

Citations

44

An efficient hybrid Deep Learning-Machine Learning method for diagnosing neurodegenerative disorders. DOI Open Access

Johnsymol Joy,

Mercy Paul Selvan

International Journal of Computational and Experimental Science and Engineering, Journal Year: 2025, Volume and Issue: 11(1)

Published: Jan. 4, 2025

A neurodegenerative illness known as Alzheimer's causes the loss of brain cells and progressive atrophy of brain tissue. It badly affects a person’s normal life. However, if we are able to detect it early and treat it, most patients will recover to some degree and lead a life with less dependence. Continuous clinical assessment is needed for diagnosing this type of disorder. Medical diagnosis today extensively relies on deep learning approaches. However, medical image data analysis has a lot of constraints. One of the major constraints faced during analysis is data scarcity and imbalance. In light of these concerns, the current study sets out to create a hybrid model that can effectively categorise various disease variants using magnetic resonance imaging (MRI) data. For solving the imbalance, we first blur and sharpen all images and then pass these images, along with the originals, through a predefined CNN (Convolutional Neural Network) that was trained with MNIST weights for extracting features; the features then go to an extra-tree classifier for feature reduction, and finally the reduced features are input to a customised model. This work used different pre-trained models for our DNN (Deep Neural Network) and compared them with cutting-edge models chosen as the base. The results state that the proposed model, which uses ResNet with the dropout concept, got the highest values of training accuracy (98.20) and validation accuracy (92.61). The proposed model also addresses the problem of overfitting.
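
The hybrid pipeline (deep features, extra-trees-based feature reduction, then a final classifier) can be illustrated with scikit-learn as below. The random array stands in for CNN-extracted MRI features and the logistic-regression head stands in for the customised model; all sizes are arbitrary assumptions.

```python
# Sketch of the hybrid pipeline described above: deep features are reduced with
# an extra-trees importance ranking, then fed to a final classifier.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 512))          # stand-in for CNN feature vectors
y = rng.integers(0, 4, size=600)         # stand-in for four disease variants

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Extra-trees importances select a compact feature subset.
selector = SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0))
X_tr_red = selector.fit_transform(X_tr, y_tr)
X_te_red = selector.transform(X_te)

# Any downstream model can play the role of the customised classifier.
clf = LogisticRegression(max_iter=1000).fit(X_tr_red, y_tr)
print("reduced features:", X_tr_red.shape[1], "accuracy:", clf.score(X_te_red, y_te))
```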

Language: English

Citations

19

Leveraging U-Net and selective feature extraction for land cover classification using remote sensing imagery DOI Creative Commons
Leo Ramos, Ángel D. Sappa

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Jan. 4, 2025

In this study, we explore an enhancement to the U-Net architecture by integrating SK-ResNeXt as the encoder for Land Cover Classification (LCC) tasks using Multispectral Imaging (MSI). SK-ResNeXt introduces cardinality and adaptive kernel sizes, allowing the network to better capture multi-scale features and adjust more effectively to variations in spatial resolution, thereby enhancing the model's ability to segment complex land cover types. We evaluate this approach on the Five-Billion-Pixels dataset, composed of 150 large-scale RGB-NIR images with over 5 billion labeled pixels across 24 categories. The approach achieves notable improvements over the baseline U-Net, with gains of 5.312% in Overall Accuracy (OA) and 8.906% in mean Intersection over Union (mIoU) when using the RGB configuration. With the RG-NIR configuration, these gains increase to 6.928% in OA and 6.938% in mIoU, while the RGB-NIR configuration yields gains of 5.854% in OA and 7.794% in mIoU. Furthermore, the approach not only outperforms other well-established models such as DeepLabV3, DeepLabV3+, Ma-Net, SegFormer, and PSPNet, particularly with the RGB-NIR configuration, but also surpasses recent state-of-the-art methods. Visual tests confirmed this superiority, showing that the studied architecture performs better in certain classes, such as lakes, rivers, industrial areas, residential areas, and vegetation, where the other architectures struggled to achieve accurate segmentation. These results demonstrate the potential and capability of the explored architecture to handle MSI and enhance LCC results.
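
The core mechanism of SK-ResNeXt referenced above is the selective-kernel convolution: parallel branches with different receptive fields fused by learned channel-wise attention. The sketch below shows that idea on a 4-channel RGB-NIR tile; it is a simplified block under assumed sizes, not the authors' encoder.

```python
# Sketch of the "selective kernel" idea behind SK-ResNeXt: parallel branches with
# different receptive fields are fused by learned, channel-wise attention, letting
# the encoder adapt its kernel size to the input. Simplified, illustrative block.
import torch
import torch.nn as nn

class SelectiveKernelConv(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        # Two branches: 3x3 and dilated 3x3 (acting like a 5x5 receptive field).
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                nn.Linear(hidden, 2 * channels))

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                  # global descriptor (B, C)
        attn = self.fc(s).view(x.size(0), 2, -1)        # (B, 2, C)
        attn = torch.softmax(attn, dim=1)               # soft selection across branches
        a3, a5 = attn[:, 0, :, None, None], attn[:, 1, :, None, None]
        return a3 * u3 + a5 * u5

# A 4-channel RGB-NIR tile passes through a stem and one SK block.
stem = nn.Conv2d(4, 32, 3, padding=1)
block = SelectiveKernelConv(32)
out = block(stem(torch.randn(1, 4, 128, 128)))
print(out.shape)  # torch.Size([1, 32, 128, 128])
```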

Language: English

Citations

2

Transparency in Diagnosis: Unveiling the Power of Deep Learning and Explainable AI for Medical Image Interpretation DOI
Priya Garg, Meenakshi Sharma,

Parteek Kumar

et al.

Arabian Journal for Science and Engineering, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 22, 2025

Language: English

Citations

2

Dilated SE-DenseNet for brain tumor MRI classification DOI Creative Commons
Yu M, Jiwook Kim,

Lena Podina

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Jan. 28, 2025

In the field of medical imaging, particularly MRI-based brain tumor classification, we propose an advanced convolutional neural network (CNN) leveraging the DenseNet-121 architecture, enhanced with dilated layers and Squeeze-and-Excitation (SE) networks' attention mechanisms. This novel approach aims to improve upon state-of-the-art methods for brain tumor identification. Our model, trained and evaluated on a comprehensive Kaggle dataset, demonstrated superior performance over established convolution-based and transformer-based models (ResNet-101, VGG-19, the original DenseNet-121, MobileNet-V2, ViT-L/16, and Swin-B) across key metrics: F1-score, accuracy, precision, and recall. The results underscore the effectiveness of our architectural enhancements in medical image analysis. Future research directions include optimizing the dilation and exploring various configurations. The study highlights the significant role of machine learning in improving diagnostic accuracy, with potential applications extending beyond brain tumor detection to other imaging tasks.
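
A minimal sketch of the architectural idea, combining DenseNet-121 features with a dilated convolution and an SE gate, is shown below. The layer placement, four output classes, and sizes are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of adding a squeeze-and-excitation (SE) gate and a dilated convolution
# on top of DenseNet-121 features for brain-tumor MRI classification.
# Placement and sizes are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))                 # squeeze: global average pool
        return x * w[:, :, None, None]                  # excite: channel re-weighting

class DilatedSEDenseNet(nn.Module):
    def __init__(self, num_classes=4):                  # assumed class count
        super().__init__()
        self.features = models.densenet121(weights=None).features  # 1024-ch output
        self.dilated = nn.Conv2d(1024, 1024, 3, padding=2, dilation=2)
        self.se = SEBlock(1024)
        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = torch.relu(self.dilated(self.features(x)))
        x = self.se(x)
        return self.classifier(x.mean(dim=(2, 3)))

print(DilatedSEDenseNet()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 4])
```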

Language: English

Citations

2

COVLIAS 2.0-cXAI: Cloud-Based Explainable Deep Learning System for COVID-19 Lesion Localization in Computed Tomography Scans DOI Creative Commons
Jasjit S. Suri, Sushant Agarwal, Gian Luca Chabert

et al.

Diagnostics, Journal Year: 2022, Volume and Issue: 12(6), P. 1482 - 1482

Published: June 16, 2022

Background: The previous COVID-19 lung diagnosis system lacks both scientific validation and the role of explainable artificial intelligence (AI) for understanding lesion localization. This study presents a cloud-based explainable AI system, “COVLIAS 2.0-cXAI”, using four kinds of class activation map (CAM) models. Methodology: Our cohort consisted of ~6000 CT slices from two sources (Croatia, 80 patients; Italy, 15 control patients). The COVLIAS 2.0-cXAI design consists of three stages: (i) automated lung segmentation using a hybrid deep learning ResNet-UNet model with automatic adjustment of Hounsfield units, hyperparameter optimization, and parallel and distributed training, (ii) classification using DenseNet (DN) models (DN-121, DN-169, DN-201), and (iii) CAM visualization using four techniques: gradient-weighted class activation mapping (Grad-CAM), Grad-CAM++, score-weighted CAM (Score-CAM), and FasterScore-CAM. The system was validated by trained senior radiologists for its stability and reliability. The Friedman test was also performed on the scores of the radiologists. Results: The ResNet-UNet segmentation resulted in a dice similarity of 0.96, a Jaccard index of 0.93, a correlation coefficient of 0.99, and a figure-of-merit of 95.99%, while the classifier accuracies of the DN nets (DN-121, DN-169, DN-201) were 98% and 99%, with losses of ~0.003, ~0.0025, and ~0.002 over 50 epochs, respectively. The mean AUC of all the DN models was 0.99 (p < 0.0001). The system showed 80% of scans in alignment (MAI) between the heatmaps and the gold standard and was scored out of five by the radiologists, establishing it for clinical settings. Conclusions: COVLIAS 2.0-cXAI successfully demonstrated explainable AI for COVID-19 lesion localization in CT scans.
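
For reference, Grad-CAM, one of the four CAM techniques listed, can be implemented in a few lines with forward and backward hooks. The sketch below uses an untrained DenseNet-121 and a random input as placeholders, so it illustrates the mechanism only, not the COVLIAS pipeline.

```python
# Minimal Grad-CAM sketch for a DenseNet classifier, illustrating the kind of
# heatmap described above. Model, layer choice, and input are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]

layer = model.features[-1]                 # last feature layer of the backbone
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)
scores = model(x)
scores[0, scores.argmax()].backward()      # gradient of the predicted class score

# Grad-CAM: weight activation maps by channel-averaged gradients, ReLU, upsample.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```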

Language: English

Citations

51

A DenseNet CNN-based liver lesion prediction and classification for future medical diagnosis DOI Creative Commons
Nelaturi Nanda Prakash, V. Rajesh,

Dumisani Lickson Namakhwa

et al.

Scientific African, Journal Year: 2023, Volume and Issue: 20, P. e01629 - e01629

Published: March 11, 2023

Liver disease diagnosis is a major medical challenge in developing nations. Every year, around 30 billion people face liver failure issues resulting in their death. Past abnormality detection models have faced low accuracy and high theory-of-constraint metrics. The lesion on the liver has not been identified clearly with earlier models, so an advanced, efficient, and effective model is essential. To overcome the limitations of existing models, this approach proposes a deep DenseNet convolutional neural network (CNN) based learning technique. This work collected Computed Tomography (CT) scan images from a Kaggle dataset for training in the initial stage. Pre-processing was performed with region-growing segmentation before classification through the CNN. Real-time test images were obtained from the Government General Hospital, Vijayawada (10,000 samples), and verified with the proposed DenseNet CNN to diagnose whether the input contains a lesion. Finally, the results obtained and derived from the confusion matrix summarize the performance of the methodology with the following metrics: accuracy at 98.34%, sensitivity of 99.72%, recall of 97.84%, throughput of 98.43%, and a rate of 93.41%. The comparison reveals that the proposed technique attains more accuracy and outperforms other pioneer methodologies.
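
The region-growing pre-processing mentioned above can be illustrated with a simple intensity-based implementation: starting from a seed pixel, neighbours are added while they stay within a tolerance of the region's mean intensity. The seed, tolerance, and toy image below are assumptions for demonstration.

```python
# Simple intensity-based region growing of the kind used as a pre-processing
# step: grow from a seed while neighbours stay close to the region's mean.
from collections import deque
import numpy as np

def region_grow(image, seed, tolerance=10.0):
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - region_sum / region_n) <= tolerance:
                    mask[ny, nx] = True
                    region_sum += float(image[ny, nx])
                    region_n += 1
                    queue.append((ny, nx))
    return mask

# Toy CT-like slice: a bright square "lesion" on a darker background.
img = np.full((64, 64), 40.0)
img[20:40, 20:40] = 120.0
mask = region_grow(img, seed=(30, 30), tolerance=15.0)
print(mask.sum())  # 400 pixels, the 20x20 bright region
```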

Language: English

Citations

35

Deep learning in computed tomography pulmonary angiography imaging: A dual-pronged approach for pulmonary embolism detection DOI Creative Commons
Fabiha Bushra, Muhammad E. H. Chowdhury, Rusab Sarmun

et al.

Expert Systems with Applications, Journal Year: 2024, Volume and Issue: 245, P. 123029 - 123029

Published: Jan. 4, 2024

The increasing reliance on Computed Tomography Pulmonary Angiography (CTPA) for Pulmonary Embolism (PE) diagnosis presents challenges and a pressing need for improved diagnostic solutions. The primary objective of this study is to leverage deep learning techniques to enhance the Computer Assisted Diagnosis (CAD) of PE. With this aim, we propose a classifier-guided detection approach that effectively leverages the classifier's probabilistic inference to direct the detection predictions, marking a novel contribution in the domain of automated PE diagnosis. Our classification system includes an Attention-Guided Convolutional Neural Network (AG-CNN) that uses local context by employing an attention mechanism. This approach emulates a human expert who looks at both the global appearance and the lesion regions before making a decision. The classifier demonstrates robust performance on the FUMPE dataset, achieving an AUROC of 0.927, sensitivity of 0.862, specificity of 0.879, and F1-score of 0.805 with the Inception-v3 backbone architecture. Moreover, the AG-CNN outperforms the baseline DenseNet-121 model with an 8.1% gain. While previous research has mostly focused on finding emboli in the main arteries, our use of cutting-edge object detection models and ensembling greatly improves the accuracy of detecting small embolisms in the peripheral arteries. Finally, the proposed classifier-guided strategy further refines the detection metrics, contributing a new state-of-the-art to the community: mAP50, sensitivity, and F1-score of 0.846, 0.901, and 0.779, respectively, outperforming the former benchmark with a significant 3.7% improvement in mAP50. This study aims to elevate patient care by integrating AI solutions into clinical workflows, highlighting the potential of human-AI collaboration in medical diagnostics.
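
The classifier-guided idea, using the classifier's probabilistic inference to direct the detector's predictions, can be sketched as a simple fusion rule: a slice-level PE probability vetoes or re-weights the detector's box confidences. The thresholds and the multiplicative fusion below are illustrative assumptions, not the paper's exact mechanism.

```python
# Sketch of classifier-guided detection: a slice-level PE probability from a
# classifier modulates the detector's box confidences, suppressing detections
# on slices the classifier considers negative. Fusion rule and thresholds are
# illustrative assumptions.
import numpy as np

def classifier_guided_filter(boxes, det_scores, cls_prob,
                             cls_threshold=0.3, score_threshold=0.25):
    """boxes: (N, 4) array, det_scores: (N,) detector confidences,
    cls_prob: scalar probability that the slice contains an embolism."""
    if cls_prob < cls_threshold:
        return np.empty((0, 4)), np.empty((0,))   # classifier vetoes the slice
    fused = det_scores * cls_prob                 # down-weight by slice-level belief
    keep = fused >= score_threshold
    return boxes[keep], fused[keep]

boxes = np.array([[10, 10, 40, 40], [50, 60, 80, 90], [5, 5, 15, 15]], dtype=float)
scores = np.array([0.9, 0.4, 0.2])
kept_boxes, kept_scores = classifier_guided_filter(boxes, scores, cls_prob=0.8)
print(kept_boxes.shape, kept_scores)   # (1, 4) [0.72] with these thresholds
```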

Language: English

Citations

9