DDCNN-F: double decker convolutional neural network 'F' feature fusion as a medical image classification framework
Nirmala Veeramani, Premaladha Jayaraman, R. Krishankumar, et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Jan. 5, 2024

Abstract: Melanoma is a severe skin cancer that involves abnormal cell development. This study aims to provide a new feature fusion framework for melanoma classification that includes a novel 'F' Flag for early detection. This indicator efficiently distinguishes benign lesions from the malignant ones known as melanoma. The article proposes an architecture built on a Double Decker Convolutional Neural Network (DDCNN) with feature fusion. The network's deck one, a convolutional neural network (CNN), finds difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hirsute image samples are combined to form a Baseline Separated Channel (BSC). By eliminating hair and applying data augmentation techniques, the BSC is made ready for analysis. The second deck trains on the pre-processed BSC and generates bottleneck features. These features are merged with the generated ABCDE clinical bio-indicators to promote accuracy. Different types of classifiers are fed the resulting hybrid fused 'F' feature. The proposed system was trained on the ISIC 2019 and 2020 datasets to assess its performance. The empirical findings show that the strategy achieved a specificity of 98.4%, an accuracy of 93.75%, a precision of 98.56%, and an Area Under the Curve (AUC) value of 0.98. The approach can accurately identify and diagnose this fatal cancer and outperform other state-of-the-art methods, which is attributed to the 'F' feature fusion framework. This research also ascertained improvements for several classifiers when utilising the 'F' indicator, the highest being +7.34%.
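
As a rough illustration of the fusion step described in the abstract, the sketch below concatenates CNN bottleneck features with handcrafted ABCDE-style clinical descriptors and feeds the fused vector to a conventional classifier. The array shapes, the synthetic data, and the random-forest classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of "fused feature" classification: deep bottleneck features are
# concatenated with clinical ABCDE-style descriptors before a standard classifier.
# All data here are synthetic placeholders (assumption), not the DDCNN-F pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_samples = 200
cnn_features = rng.normal(size=(n_samples, 128))   # stand-in for bottleneck features from deck two
abcde_features = rng.normal(size=(n_samples, 5))   # Asymmetry, Border, Colour, Diameter, Evolution scores
labels = rng.integers(0, 2, size=n_samples)        # 0 = benign, 1 = melanoma (synthetic)

# Feature fusion by simple concatenation of deep and clinical descriptors
fused = np.concatenate([cnn_features, abcde_features], axis=1)

X_train, X_test, y_train, y_test = train_test_split(fused, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy on synthetic data:", accuracy_score(y_test, clf.predict(X_test)))
```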

Language: English

Segment anything in medical images
Jun Ma, Yuting He, Feifei Li, et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: Jan. 22, 2024

Language: English

Citations: 729

Unified Focal loss: Generalising Dice and cross entropy-based losses to handle class imbalanced medical image segmentation
Michael Yeung, Evis Sala, Carola‐Bibiane Schönlieb, et al.

Computerized Medical Imaging and Graphics, Journal Year: 2021, Volume and Issue: 95, P. 102026 - 102026

Published: Dec. 13, 2021

Automatic segmentation methods are an important advancement in medical image analysis. Machine learning techniques, and deep neural networks in particular, are the state-of-the-art for most medical image segmentation tasks. Issues with class imbalance pose a significant challenge in medical image datasets, with lesions often occupying a considerably smaller volume relative to the background. Loss functions used in the training of deep learning algorithms differ in their robustness to class imbalance, with direct consequences for model convergence. The most commonly used loss functions for segmentation are based on either the cross entropy loss, the Dice loss, or a combination of the two. We propose the Unified Focal loss, a new hierarchical framework that generalises Dice and cross entropy-based losses for handling class imbalance. We evaluate our proposed loss function on five publicly available, class-imbalanced medical imaging datasets: CVC-ClinicDB, Digital Retinal Images for Vessel Extraction (DRIVE), Breast Ultrasound 2017 (BUS2017), Brain Tumour Segmentation 2020 (BraTS20) and Kidney Tumour Segmentation 2019 (KiTS19). We compare its performance against six Dice or cross entropy-based loss functions, across 2D binary, 3D binary and 3D multiclass segmentation tasks, demonstrating that our proposed loss function is robust to class imbalance and consistently outperforms the other loss functions. Source code is available at: https://github.com/mlyg/unified-focal-loss.
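
For orientation, the PyTorch sketch below combines a focal (cross-entropy-based) term with a soft Dice term for binary segmentation, the kind of hybrid loss the Unified Focal loss generalises. The weighting, gamma, and smoothing values are illustrative assumptions; this is not the published Unified Focal loss implementation, which is available at the repository linked above.

```python
# Illustrative hybrid of a focal term and a soft Dice term for binary segmentation.
# Hyperparameters (gamma, lam, eps) are arbitrary assumptions for demonstration only.
import torch
import torch.nn.functional as F

def focal_dice_loss(logits, targets, gamma=2.0, lam=0.5, eps=1e-6):
    """logits, targets: float tensors of shape (N, 1, H, W); targets in {0, 1}."""
    probs = torch.sigmoid(logits)

    # Focal term: down-weights easy, well-classified pixels
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = probs * targets + (1 - probs) * (1 - targets)
    focal = ((1 - p_t) ** gamma * bce).mean()

    # Soft Dice term: overlap-based, less sensitive to class imbalance
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    dice = 1 - ((2 * intersection + eps) / (union + eps)).mean()

    return lam * focal + (1 - lam) * dice

# Example: random logits and a sparse binary mask mimicking class imbalance
logits = torch.randn(2, 1, 64, 64)
targets = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(focal_dice_loss(logits, targets).item())
```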

Language: English

Citations: 377

AbdomenCT-1K: Is Abdominal Organ Segmentation a Solved Problem?
Jun Ma, Yao Zhang, Song Gu, et al.

IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal Year: 2021, Volume and Issue: 44(10), P. 6695 - 6714

Published: July 27, 2021

With the unprecedented developments in deep learning, automatic segmentation of the main abdominal organs seems to be a solved problem, as state-of-the-art (SOTA) methods have achieved results comparable with inter-rater variability on many benchmark datasets. However, most existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether this excellent performance can generalize to more diverse datasets. This paper presents a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases. Furthermore, we conduct a large-scale study for liver, kidney, spleen, and pancreas segmentation and reveal unsolved problems of the SOTA methods, such as limited generalization ability to distinct medical centers, phases, and unseen diseases. To advance these problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging and active research topics. Accordingly, we develop a simple and effective method for each benchmark, which can be used out-of-the-box as strong baselines. We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinically applicable abdominal organ segmentation methods.

Language: English

Citations: 242

Accuracy Assessment in Convolutional Neural Network-Based Deep Learning Remote Sensing Studies—Part 1: Literature Review
Aaron E. Maxwell, Timothy A. Warner, Luis Andrés Guillén, et al.

Remote Sensing, Journal Year: 2021, Volume and Issue: 13(13), P. 2450 - 2450

Published: June 23, 2021

Convolutional neural network (CNN)-based deep learning (DL) is a powerful, recently developed image classification approach. With origins in the computer vision and image processing communities, the accuracy assessment methods used for CNN-based DL employ a wide range of metrics that may be unfamiliar to the remote sensing (RS) community. To explore the differences between traditional RS accuracy assessment and DL practice, we surveyed a random selection of 100 papers from the RS DL literature. The results show that the DL studies have largely abandoned traditional RS accuracy assessment terminology, though some measures typically used in DL papers, most notably precision and recall, have direct equivalents in RS terminology. Some DL terms have multiple names, or are equivalent to another measure. In our sample, DL studies only rarely reported a complete confusion matrix, and when they did so, it was even more rare that the matrix estimated population properties. On the other hand, DL studies are increasingly paying attention to the role of class prevalence in designing accuracy assessment approaches. DL studies that evaluate the decision boundary threshold over a range of values tend to use the precision-recall (P-R) curve, with the associated area under the curve (AUC) measures of average precision (AP) and mean average precision (mAP), rather than the receiver operating characteristic (ROC) curve and its AUC. DL studies are also notable for testing the generalization of their models on entirely new datasets, including data from new areas, new acquisition times, or new sensors.
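
The snippet below illustrates, on synthetic labels and scores, the metrics discussed in the review: a confusion matrix at a single decision threshold, precision and recall, and the threshold-swept precision-recall curve summarised by average precision. It is only a demonstration of the terminology, not any surveyed study's protocol.

```python
# Demonstration of single-threshold metrics (confusion matrix, precision, recall)
# versus threshold-swept metrics (P-R curve, average precision) on synthetic data.
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             precision_recall_curve, average_precision_score)

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)                                # reference labels
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)    # synthetic classifier scores
y_pred = (scores >= 0.5).astype(int)                                 # a single decision threshold

print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))

# Sweeping the threshold gives the P-R curve; average precision summarises its area
precision, recall, thresholds = precision_recall_curve(y_true, scores)
print("average precision (AP):", average_precision_score(y_true, scores))
```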

Language: English

Citations: 202

Semi-supervised medical image segmentation via uncertainty rectified pyramid consistency
Xiangde Luo, Guotai Wang, Wenjun Liao, et al.

Medical Image Analysis, Journal Year: 2022, Volume and Issue: 80, P. 102517 - 102517

Published: June 15, 2022

Language: English

Citations: 195

An attention-based U-Net for detecting deforestation within satellite sensor imagery
David John, Ce Zhang

International Journal of Applied Earth Observation and Geoinformation, Journal Year: 2022, Volume and Issue: 107, P. 102685 - 102685

Published: Jan. 18, 2022

In this paper, we implement and analyse an Attention U-Net deep network for semantic segmentation using Sentinel-2 satellite sensor imagery, for the purpose of detecting deforestation within two forest biomes in South America, the Amazon Rainforest and the Atlantic Forest. The performance of the Attention U-Net is compared with that of U-Net, Residual U-Net, ResNet50-SegNet and FCN32-VGG16 across three different datasets (three-band Amazon, four-band Amazon, and Atlantic Forest). Results indicate that the Attention U-Net provides the best segmentation masks when tested on each dataset, achieving average pixel-wise F1-scores of 0.9550, 0.9769 and 0.9461 respectively. Mask reproductions from the Attention U-Net classifier were also analysed, showing that, relative to the ground reference, it could detect non-forest polygons more accurately than the other networks. Overall, the Attention U-Net was the most accurate of the forest/deforestation benchmark approaches despite its reduced complexity and training time, this being the first application of the attention mechanism to this important task. The paper concludes with a brief discussion of the ability of the attention mechanism to offset the reduced complexity, as well as ideas for further research into optimising the architecture and applying attention mechanisms to other architectures for deforestation detection. Our code is available at https://github.com/davej23/attention-mechanism-unet.
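
A compact sketch of the additive attention gate used in Attention U-Net style architectures is given below: decoder (gating) features re-weight the encoder skip features before concatenation. Channel sizes are illustrative assumptions, the gate is assumed to already match the skip connection's spatial resolution, and this is not the authors' released code (linked above).

```python
# Additive attention gate sketch: encoder skip features are scaled by attention
# coefficients computed from the skip and gating (decoder) signals.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_channels, gate_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip, gate):
        # skip: encoder features (N, C_skip, H, W); gate: decoder features at the same spatial size
        attn = self.relu(self.theta(skip) + self.phi(gate))
        attn = self.sigmoid(self.psi(attn))     # (N, 1, H, W) attention coefficients
        return skip * attn                      # re-weighted skip connection

skip = torch.randn(1, 64, 32, 32)
gate = torch.randn(1, 128, 32, 32)
print(AttentionGate(64, 128, 32)(skip, gate).shape)   # torch.Size([1, 64, 32, 32])
```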

Language: English

Citations: 88

A survey on deep learning for skin lesion segmentation
Zahra Mirikharaji, Kumar Abhishek, Alceu Bissoto, et al.

Medical Image Analysis, Journal Year: 2023, Volume and Issue: 88, P. 102863 - 102863

Published: June 10, 2023

Language: English

Citations: 81

Current and Emerging Trends in Medical Image Segmentation With Deep Learning
Pierre-Henri Conze, Gustavo Andrade-Miranda, Vivek Kumar Singh, et al.

IEEE Transactions on Radiation and Plasma Medical Sciences, Journal Year: 2023, Volume and Issue: 7(6), P. 545 - 569

Published: April 10, 2023

In recent years, the segmentation of anatomical or pathological structures using deep learning has experienced widespread interest in medical image analysis. Remarkably successful performance has been reported in many imaging modalities and for a variety of clinical contexts to support clinicians in computer-assisted diagnosis, therapy, or surgical planning. However, despite the increasing number of segmentation challenges, there remains little consensus on which methodology performs best. Therefore, in this article we examine the numerous developments and breakthroughs brought to the field since the rise of U-Net-inspired architectures. In particular, we focus on the technical challenges and emerging trends the community is now focusing on, including conditional generative adversarial and cascaded networks, Transformers, contrastive learning, knowledge distillation, active learning, prior knowledge embedding, cross-modality and multi-structure analysis, and federated, semi-supervised, and self-supervised learning paradigms. We also suggest possible avenues to be further investigated in future research efforts.

Language: English

Citations: 56

Unleashing the strengths of unlabelled data in deep learning-assisted pan-cancer abdominal organ quantification: the FLARE22 challenge
Jun Ma, Yao Zhang, Song Gu, et al.

The Lancet Digital Health, Journal Year: 2024, Volume and Issue: 6(11), P. e815 - e826

Published: Oct. 23, 2024

Language: English

Citations: 41

A survey on deep learning in medical image registration: New technologies, uncertainty, evaluation metrics, and beyond
Junyu Chen, Yihao Liu, Shuwen Wei, et al.

Medical Image Analysis, Journal Year: 2024, Volume and Issue: 100, P. 103385 - 103385

Published: Nov. 10, 2024

Language: English

Citations: 17