Enhancing fetal ultrasound image quality and anatomical plane recognition in low-resource settings using super-resolution models

Hafida Boumeridja, Mohammed Ammar, Mahmood Alzubaidi et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: March 11, 2025

Super-resolution (SR) techniques present a suitable solution to increase the resolution of images acquired with ultrasound devices characterized by low resolution, which can be particularly beneficial in low-resource imaging settings. This work surveys advanced SR models applied to enhance the resolution and quality of fetal ultrasound images, focusing on the dual back-projection-based internal learning (DBPISR) technique, which uses internal learning for blind super-resolution, in contrast to the blind super-resolution generative adversarial network (BSRGAN), the real-world enhanced SRGAN (Real-ESRGAN), the Swin transformer for image restoration (SwinIR), and SwinIR-Large. The dual back-projection approach iteratively refines the upscaling and downscaling processes through an internal training method, achieving high accuracy in kernel estimation and image reconstruction. Real-ESRGAN uses synthetic data to simulate complex degradations and incorporates a U-shaped (U-Net) discriminator to improve training stability and visual performance. BSRGAN addresses the limitations of traditional degradation models by introducing a realistic, comprehensive degradation process involving blur, downsampling, and noise, leading to superior results. The Swin-based models (SwinIR and SwinIR-Large) employ the Swin Transformer architecture for image restoration, excelling at capturing long-range dependencies and structures, and achieve outstanding performance on PSNR, SSIM, NIQE, and BRISQUE metrics. The models were tested on fetal ultrasound images sourced from five developing countries, which are often of lower quality, enabling us to show that these approaches help enhance such images. Evaluations reveal that the methods significantly improve image quality, with DBPISR, Real-ESRGAN, BSRGAN, SwinIR, and SwinIR-Large showing notable improvements in PSNR, thereby highlighting their potential for improving diagnostic utility. We evaluated the aforementioned super-resolution models, analyzing their impact on both image quality and anatomical plane classification tasks. Our findings indicate that SR models hold great promise for enhancing the evaluation of medical images in developing countries. Among the models tested, Real-ESRGAN consistently improved accuracy, even when challenged with limited and variable datasets. This finding was further supported by deploying a ConvNeXt-base classifier, which demonstrated improved performance on super-resolved images. Real-ESRGAN's capacity, in turn, highlights its potential to address the resource constraints encountered in low-resource imaging settings.
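As a rough illustration of how the full-reference metrics mentioned above (PSNR and SSIM) can be computed for a super-resolved ultrasound frame, the sketch below compares a model output against a high-resolution reference using scikit-image; the file paths and the grayscale assumption are placeholders, not the paper's actual evaluation pipeline.

```python
# Hedged sketch: score a super-resolved ultrasound frame against its
# high-resolution reference with PSNR and SSIM (file names are placeholders).
from skimage import io, img_as_float
from skimage.transform import resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference_path: str, sr_path: str) -> dict:
    """Return PSNR/SSIM of a super-resolved image w.r.t. its reference."""
    ref = img_as_float(io.imread(reference_path, as_gray=True))
    sr = img_as_float(io.imread(sr_path, as_gray=True))
    # Resize the SR output if its shape differs slightly from the reference.
    if sr.shape != ref.shape:
        sr = resize(sr, ref.shape, anti_aliasing=True)
    return {
        "psnr": peak_signal_noise_ratio(ref, sr, data_range=1.0),
        "ssim": structural_similarity(ref, sr, data_range=1.0),
    }

if __name__ == "__main__":
    # Hypothetical paths; any paired high-resolution/super-resolved frames work.
    scores = evaluate_sr("fetal_hr.png", "fetal_sr_x4.png")
    print(f"PSNR: {scores['psnr']:.2f} dB, SSIM: {scores['ssim']:.4f}")
```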

Language: English

Biologically interpretable multi-task deep learning pipeline predicts molecular alterations, grade, and prognosis in glioma patients

Xuewei Wu, Shuaitong Zhang, Zhenyu Zhang et al.

npj Precision Oncology, Journal Year: 2024, Volume and Issue: 8(1)

Published: Aug. 16, 2024

Deep learning models have been developed for various predictions in glioma; yet, they were constrained by manual segmentation, task-specific design, or a lack of biological interpretation. Herein, we aimed to develop an end-to-end multi-task deep learning (MDL) pipeline that can simultaneously predict molecular alterations and histological grade (auxiliary tasks), as well as prognosis (primary task), in gliomas, and further to provide the biological mechanisms underlying the model's predictions. We collected multiscale data, including baseline MRI images, from 2776 glioma patients across two private (FAHZU and HPPH, n = 1931) and three public datasets (TCGA, n = 213; UCSF, n = 410; EGD, n = 222). We trained and internally validated the MDL model using our private datasets and externally validated it on the three public datasets. We used the model-predicted score (DPS) to stratify patients into low-DPS and high-DPS subtypes. Additionally, a radio-multiomics analysis was conducted to elucidate the biological basis of the DPS. In the external validation cohorts, the model achieved average areas under the curve of 0.892–0.903, 0.710–0.894, and 0.850–0.879 for predicting IDH mutation status, 1p/19q co-deletion, and tumor grade, respectively. Moreover, it yielded a C-index of 0.723 in TCGA and 0.671 in UCSF for the prediction of overall survival. The DPS exhibits significant correlations with activated oncogenic pathways, immune infiltration patterns, specific protein expression, DNA methylation, tumor mutation burden, and tumor-stroma ratio. Accordingly, this work presents an accurate and biologically meaningful tool for predicting molecular subtypes, tumor grade, and survival outcomes in gliomas, supporting personalized clinical decision-making in a global and non-invasive manner.
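The shared-encoder, multi-head pattern implied by this abstract can be sketched as follows; this is a minimal PyTorch illustration assuming a placeholder 3D CNN encoder and binary auxiliary tasks, not the authors' actual MDL architecture.

```python
# Hedged sketch of a shared-encoder, multi-head design in the spirit of the
# MDL pipeline above (not the authors' implementation): one backbone feeds
# auxiliary classification heads (IDH, 1p/19q, grade) and a primary prognosis
# head that outputs a scalar risk score analogous to the DPS.
import torch
import torch.nn as nn

class MultiTaskGliomaNet(nn.Module):
    def __init__(self, in_channels: int = 4, feat_dim: int = 256):
        super().__init__()
        # Placeholder 3D CNN encoder over multi-sequence MRI volumes.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.idh_head = nn.Linear(feat_dim, 2)        # IDH mutation status
        self.codel_head = nn.Linear(feat_dim, 2)      # 1p/19q co-deletion
        self.grade_head = nn.Linear(feat_dim, 2)      # tumor grade
        self.prognosis_head = nn.Linear(feat_dim, 1)  # scalar risk score (DPS-like)

    def forward(self, x: torch.Tensor) -> dict:
        z = self.encoder(x)
        return {
            "idh": self.idh_head(z),
            "codel": self.codel_head(z),
            "grade": self.grade_head(z),
            "risk": self.prognosis_head(z).squeeze(-1),
        }

# Example: a batch of 2 four-sequence MRI volumes of 64^3 voxels.
model = MultiTaskGliomaNet()
outputs = model(torch.randn(2, 4, 64, 64, 64))
print({k: tuple(v.shape) for k, v in outputs.items()})
```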

Language: English

Citations: 6

Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: methods, applications and limitations
Dildar Hussain, Mohammed A. Al‐masni, Muhammad Aslam et al.

Journal of X-Ray Science and Technology, Journal Year: 2024, Volume and Issue: 32(4), P. 857 - 911

Published: April 30, 2024

The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking.

Language: English

Citations: 5

CHSNet: Automatic lesion segmentation network guided by CT image features for acute cerebral hemorrhage
Bohao Xu, Yingwei Fan, Jingming Liu et al.

Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 164, P. 107334 - 107334

Published: Aug. 8, 2023

Language: English

Citations: 12

Chest radiology report generation based on cross-modal multi-scale feature fusion
Yu Pan, Lijun Liu, Yang Xiaobing et al.

Journal of Radiation Research and Applied Sciences, Journal Year: 2024, Volume and Issue: 17(1), P. 100823 - 100823

Published: Jan. 14, 2024

Chest radiology imaging plays a crucial role in the early screening, diagnosis, and treatment of chest diseases. The accurate interpretation of radiological images and the automatic generation of reports not only save doctors' time but also mitigate the risk of errors in diagnosis. The core objective of report generation is to achieve a precise mapping between visual features and lesion descriptions at multi-scale, fine-grained levels. Existing methods typically combine global visual and textual features to generate reports. However, these approaches may ignore key lesion areas and lack sensitivity to location information. Furthermore, achieving fine-grained characterization and alignment between medical images and text proves challenging, leading to a reduction in the quality of report generation. Addressing these issues, we propose a method for chest radiology report generation based on cross-modal multi-scale feature fusion. First, an auxiliary labeling module is designed to guide the model to focus on the lesion region of the image. Second, a channel attention network is employed to enhance the location information of disease features. Finally, a cross-modal fusion module is constructed by combining memory matrices, facilitating alignment between imaging and reporting features at corresponding scales. The proposed method is experimentally evaluated on two publicly available chest X-ray image datasets. The results demonstrate superior performance on BLEU and ROUGE metrics compared with existing methods. In particular, there are improvements of 4.8% in the BLEU metric and 9.4% in METEOR on the IU X-Ray dataset. Moreover, there is a 7.4% enhancement in BLEU-1 and a 7.6% improvement in BLEU-2 on MIMIC-CXR.
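To illustrate the kind of channel attention this abstract refers to, a generic squeeze-and-excitation style block is sketched below; the paper's channel attention network, and its integration with the auxiliary labeling and memory-matrix fusion modules, may differ in detail.

```python
# Hedged sketch of a squeeze-and-excitation style channel attention block,
# illustrating the general idea of re-weighting visual feature channels.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global context per channel
        self.fc = nn.Sequential(             # excitation: per-channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight the feature channels

# Example: attend over a 512-channel feature map from a chest X-ray encoder.
feats = torch.randn(2, 512, 7, 7)
print(ChannelAttention(512)(feats).shape)  # torch.Size([2, 512, 7, 7])
```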

Language: English

Citations: 4

Ten deep learning techniques to address small data problems with remote sensing
Anastasiia Safonova, Gohar Ghazaryan, Stefan Stiller et al.

EarthArXiv (California Digital Library), Journal Year: 2023, Volume and Issue: unknown

Published: June 9, 2023

Researchers and engineers have increasingly used Deep Learning (DL) for a variety of Remote Sensing (RS) tasks. However, data from local observations or via ground truth is often quite limited for training DL models, especially when these models represent key socio-environmental problems, such as the monitoring of extreme, destructive climate events, biodiversity, and sudden changes in ecosystem states. Such cases, also known as small data problems, pose significant methodological challenges. This review summarises these challenges in the RS domain and the possibility of using emerging DL techniques to overcome them. We show that the small data problem is a common challenge across disciplines and scales that results in poor model generalisability and transferability. We then introduce an overview of ten promising techniques: transfer learning, self-supervised learning, semi-supervised learning, few-shot learning, zero-shot learning, active learning, weakly supervised learning, multitask learning, process-aware learning, and ensemble learning; we also include a validation technique, spatial k-fold cross-validation. Our particular contribution was to develop a flowchart that helps users select which technique to use for a given application by answering a few questions. We hope that our article facilitates DL applications that tackle societally important environmental problems with limited reference data.
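The spatial k-fold cross-validation mentioned as the validation technique can be sketched with scikit-learn's GroupKFold by treating coarse spatial blocks as groups, so spatially autocorrelated samples never appear in both training and test folds; the block size and synthetic data below are illustrative assumptions only.

```python
# Hedged sketch of spatial k-fold cross-validation: samples are binned into
# coarse spatial blocks and whole blocks are held out per fold.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0, 100, size=(n, 2))  # x/y locations of samples
X = rng.normal(size=(n, 8))                # remote-sensing features
y = rng.integers(0, 2, size=n)             # reference labels

block_size = 25.0                          # assumption: 25 x 25 spatial blocks
blocks = (coords // block_size).astype(int)
group_ids = blocks[:, 0] * 1000 + blocks[:, 1]  # one group id per block

cv = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(cv.split(X, y, groups=group_ids)):
    shared = set(group_ids[train_idx]) & set(group_ids[test_idx])
    print(f"fold {fold}: train={len(train_idx)}, test={len(test_idx)}, "
          f"shared blocks={len(shared)}")  # shared blocks is always 0
```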

Language: English

Citations: 11

A multi-task learning model for fast prediction of mechanical behavior of UD-CFRP composites under transverse tension

Yan Huai, Weihua Xie, Bo Gao et al.

Composite Structures, Journal Year: 2023, Volume and Issue: 324, P. 117555 - 117555

Published: Sept. 9, 2023

Language: English

Citations: 11

An Adaptation of Hybrid Binary Optimization Algorithms for Medical Image Feature Selection in Neural Network for Classification of Breast Cancer
Olaide N. Oyelade, Enesi Femi Aminu, Hui Wang et al.

Neurocomputing, Journal Year: 2024, Volume and Issue: unknown, P. 129018 - 129018

Published: Nov. 1, 2024

Language: English

Citations: 4

VSNet: Vessel Structure-aware Network for hepatic and portal vein segmentation

Jichen Xu, Anqi Dong, Yang Yang et al.

Medical Image Analysis, Journal Year: 2025, Volume and Issue: 101, P. 103458 - 103458

Published: Jan. 16, 2025

Language: English

Citations: 0

Domain shift, domain adaptation, and generalization
Jonas Richiardi, Veronica Ravano, Nataliia Molchanova et al.

Elsevier eBooks, Journal Year: 2025, Volume and Issue: unknown, P. 127 - 151

Published: Jan. 1, 2025

Language: English

Citations: 0

Development of a diagnostic classification model for lateral cephalograms based on multitask learning
Qiao Chang, Shaofeng Wang, Fan Wang et al.

BMC Oral Health, Journal Year: 2025, Volume and Issue: 25(1)

Published: Feb. 15, 2025

This study aimed to develop a cephalometric classification method based on multitask learning for eight diagnostic classifications. The study was retrospective. A total of 3,310 lateral cephalograms were collected to construct the dataset. Eight clinical classifications were employed, including sagittal and vertical skeletal facial patterns, maxillary and mandibular anteroposterior positions, and the inclinations of the upper and lower incisors, as well as their positions. The images were manually annotated for initial classification, which was then verified by senior orthodontists. The data were randomly divided into training, validation, and test sets at a ratio of approximately 8:1:1. The model was constructed on the ResNeXt50_32×4d network and consisted of shared layers and task-specific layers. Its performance was evaluated using accuracy, precision, sensitivity, specificity, and the area under the curve (AUC). The model could perform the classifications within an average of 0.0096 s. The accuracy for six classifications was 0.8–0.9, and for the other two it was 0.75–0.8. The overall AUC values for each classification exceeded 0.9. An automatic classification model was thus established that achieves simultaneous classification of common diagnostic items, delivering better performance with reduced computational costs and providing a novel perspective and reference for addressing such problems.
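A minimal sketch of the shared-backbone plus task-specific-heads design described here is shown below, using torchvision's ResNeXt50_32x4d as the shared trunk; the number of classes per task and the head structure are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a shared ResNeXt50_32x4d trunk with one small classification
# head per diagnostic item (class counts are illustrative assumptions).
import torch
import torch.nn as nn
from torchvision import models

class CephMultiTaskNet(nn.Module):
    def __init__(self, num_tasks: int = 8, classes_per_task: int = 3):
        super().__init__()
        # weights=None assumes a recent torchvision; no pretrained weights loaded.
        backbone = models.resnext50_32x4d(weights=None)   # shared layers
        feat_dim = backbone.fc.in_features                # 2048 for ResNeXt50
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # One lightweight head per diagnostic classification task.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, classes_per_task) for _ in range(num_tasks)
        )

    def forward(self, x: torch.Tensor):
        z = self.backbone(x)
        return [head(z) for head in self.heads]

# Example: eight sets of logits for a batch of two cephalogram images.
logits = CephMultiTaskNet()(torch.randn(2, 3, 224, 224))
print([tuple(t.shape) for t in logits])  # eight tensors of shape (2, 3)
```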

Language: English

Citations: 0