Detection of Skin Cancer Types in Dermoscopy Images with Gradient Boosting DOI

Leelkanth Dewangan,

Kirti Gupta, Virendra Kumar Swarnkar

et al.

Published: Dec. 29, 2023

This study develops a computer-based system for classifying skin cancer using gradient boosting algorithms, addressing the urgent demand for precise and prompt skin cancer diagnosis. Dermoscopy images were gathered and prepared for feature extraction using an interpretivist strategy. A large dataset containing a variety of lesions was used for training and testing purposes. The gradient boosting model performed better than other models at distinguishing different types of cancer, achieving an F1-score of 0.92 and an accuracy of 94.5%. The effectiveness of the suggested technique was underlined by comparison with the initial models. Graphical representations of the classification findings, including a confusion matrix and ROC curves, gave intuitive insight into the model's discriminatory abilities. An examination of feature importance indicated the crucial characteristics influencing accurate classification. Future research is advised to investigate ensemble methods, incorporate multisensory data sources, and perform real-time therapeutic validations. The study highlights the potential of the approach to become an important tool in skin care, enhancing patient care by providing early identification and management.
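
The abstract reports an F1-score of 0.92 and an accuracy of 94.5% alongside a confusion matrix. As a minimal sketch of how those metrics derive from confusion-matrix counts (the counts below are invented for illustration, not the study's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)        # of predicted positives, how many were right
    recall = tp / (tp + fn)           # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts only:
acc, prec, rec, f1 = binary_metrics(tp=90, fp=10, fn=5, tn=95)
print(round(acc, 3), round(f1, 3))  # → 0.925 0.923
```

F1 is the harmonic mean of precision and recall, which is why a model can report a high accuracy yet a lower F1 when classes are imbalanced.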

Language: English

A precise model for skin cancer diagnosis using hybrid U-Net and improved MobileNet-V3 with hyperparameters optimization DOI Creative Commons
Umesh Kumar Lilhore, Sarita Simaiya, Yogesh Kumar Sharma

et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Feb. 21, 2024

Abstract Skin cancer is a frequently occurring and potentially deadly disease that necessitates prompt and precise diagnosis in order to ensure efficacious treatment. This paper introduces an innovative approach for accurately identifying skin cancer by utilizing a Convolutional Neural Network architecture with optimized hyperparameters. The proposed model aims to increase the precision and efficacy of skin cancer recognition and consequently enhance patients' experiences. The investigation tackles various significant challenges in skin cancer recognition, encompassing feature extraction and model design, and utilizes advanced deep-learning methodologies to extract complex features and patterns from skin cancer images. We improve the learning procedure by integrating a standard U-Net and an improved MobileNet-V3 with optimization techniques, allowing the model to differentiate malignant and benign cancers. We also substituted the cross-entropy loss function of MobileNet-V3 with a modified mathematical formulation to reduce bias and improve accuracy. The model's squeeze-and-excitation component was replaced with efficient channel attention to achieve parameter reduction. Cross-layer connections among the Mobile modules were integrated to leverage features more effectively, and dilated convolutions were incorporated to enlarge the receptive field. Hyperparameters are of the utmost importance for improving the efficiency of deep-learning models. To fine-tune the hyperparameters, we employ sophisticated methods such as the Bayesian method using a pre-trained CNN MobileNet-V3. We compared the approach with existing models, i.e., MobileNet, VGG-16, MobileNet-V2, ResNet-152v2, and VGG-19, on the "HAM-10000 Melanoma Cancer dataset". The empirical findings illustrate that the optimized hybrid model outperforms existing detection and segmentation techniques, with a high precision of 97.84%, sensitivity of 96.35%, accuracy of 98.86%, and specificity of 97.32%. The enhanced performance achieved in this research could result in timelier and more accurate diagnoses, potentially contributing to life-saving outcomes and mitigating healthcare expenditures.
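
The abstract mentions dilated convolutions as a way to enlarge the receptive field without adding layers. The standard recurrence for the receptive field of stacked convolutions can be sketched as follows (the layer configurations are hypothetical, not taken from the paper):

```python
def receptive_field(layers):
    """Each layer is (kernel, stride, dilation). Returns the receptive field
    of one output unit, via rf += (k - 1) * dilation * jump; jump *= stride."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

plain = [(3, 1, 1), (3, 1, 1), (3, 1, 1)]    # three plain 3x3 convolutions
dilated = [(3, 1, 1), (3, 1, 2), (3, 1, 4)]  # same depth, dilations 1, 2, 4
print(receptive_field(plain), receptive_field(dilated))  # → 7 15
```

With the same parameter count and depth, exponentially growing dilation more than doubles the receptive field here, which is the motivation the abstract alludes to.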

Language: English

Citations

26

Multi-scale feature fusion of deep convolutional neural networks on cancerous tumor detection and classification using biomedical images DOI Creative Commons
U. M. Prakash, S. Iniyan, Ashit Kumar Dutta

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Jan. 7, 2025

In the present scenario, cancerous tumours are common in humans due to major changes in their surrounding environments. Skin cancer is a considerable disease detected among people. It is an uncontrolled evolution of atypical skin cells: it occurs when DNA injury to skin cells, or a genetic defect, leads them to multiply quickly and establish malignant tumours. In rare instances, some types arise from exposure to ultraviolet light. Skin cancer is a worldwide health problem, so accurate and appropriate diagnosis is needed for efficient treatment. Current developments in medical technology, like smart recognition and analysis utilizing machine learning (ML) and deep learning (DL) techniques, have transformed the diagnosis and treatment of these conditions. These approaches are highly effective in biomedical imaging. This study develops a Multi-scale Feature Fusion of Deep Convolutional Neural Networks on Cancerous Tumor Detection and Classification (MFFDCNN-CTDC) model. The main aim of the MFFDCNN-CTDC model is to detect and classify skin cancer using biomedical images. To eliminate unwanted noise, the method initially utilizes a Sobel filter (SF) in the image-preprocessing stage. For the segmentation process, UNet3+ is employed, providing precise localization of tumour regions. Next, the model incorporates multi-scale feature fusion combining the ResNet50 and EfficientNet architectures, capitalizing on their complementary strengths in feature extraction at varying depths and scales of the input images. A convolutional autoencoder (CAE) is utilized as the classification method. Finally, the parameter-tuning process is performed through a hybrid fireworks whale optimization algorithm (FWWOA) to enhance the performance of the CAE model. A wide range of experiments validates the approach. The experimental validation exhibited superior accuracy values of 98.78% and 99.02% over existing techniques on the ISIC 2017 and HAM10000 datasets.
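
The preprocessing stage uses a Sobel filter to suppress noise-unrelated structure and emphasize edges. A minimal NumPy sketch of the 3x3 Sobel gradient magnitude (a plain valid correlation, not the paper's implementation) looks like this:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid window, no padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)  # per-pixel gradient magnitude

# A vertical step edge: the response concentrates on the boundary columns.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

On the step-edge example, the interior of each flat region yields zero magnitude while the two columns straddling the edge respond strongly, which is what makes the filter useful for lesion-boundary emphasis.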

Language: English

Citations

2

Limb salvage prediction in peripheral artery disease patients using angiographic computer vision DOI
Yury Rusinovich,

Vitalii Liashko,

Volha Rusinovich

et al.

Vascular, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 3, 2025

Background Peripheral artery disease (PAD) outcomes often rely on the expertise of individual vascular units, introducing potential subjectivity into staging. This retrospective, multicenter cohort study aimed to demonstrate the ability of artificial intelligence (AI) to provide staging based on inter-institutional expertise by predicting limb salvage from post-interventional pedal angiograms of PAD patients, specifically in comparison with the inframalleolar modifier of the Global Limb Anatomic Staging System (IM GLASS). Methods We used a computer vision (CV) MobileNetV2 model, implemented via the TensorFlow.js library, for transfer learning and feature extraction from angiograms of 518 patients with known 3-month outcomes: 218 salvaged limbs, 140 minor amputations, and 160 major amputations. Results After 43 epochs of training with a learning rate of 0.001 and a batch size of 16, the model achieved a validation accuracy of 95% and a test accuracy of 93% in differentiating the limb outcomes. In manual testing on 45 angiograms excluded from the training, validation, and testing processes, the AI predicted mean salvage probabilities of 96% for actually salvaged limbs versus 27% and 17% for minor and major amputations (p-value < .001). The correlation coefficient between the CV model-predicted outcomes and the actual outcomes was 0.7, nearly five times higher than that of the IM GLASS pattern (0.14). Conclusion Computer vision can analyze pedal angiograms and predict limb outcomes, demonstrating significant accuracy rates and outperforming segmentation by a specialist. It has the potential to provide immediate and precise prediction of treatment results during interventions, tailored to (inter)institutional expertise, and to enhance individualized decision-making.
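
The headline comparison in this abstract is a correlation coefficient (0.7 for the CV model vs 0.14 for IM GLASS). A minimal sketch of the Pearson correlation between predicted probabilities and binary outcomes (the toy numbers below are invented for illustration; the study's data are not public):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float(np.sum(xd * yd) / np.sqrt(np.sum(xd**2) * np.sum(yd**2)))

# Hypothetical toy data: predicted salvage probability vs observed outcome (1 = salvaged).
pred = [0.96, 0.90, 0.27, 0.17, 0.85, 0.10]
obs = [1, 1, 0, 0, 1, 0]
r = pearson_r(pred, obs)
```

Correlating a continuous predicted probability with a binary outcome in this way (a point-biserial correlation) is a standard way to compare a model's discrimination against a categorical staging score.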

Language: English

Citations

1

Next-Generation Diagnostics: The Impact of Synthetic Data Generation on the Detection of Breast Cancer from Ultrasound Imaging DOI Creative Commons
Hari Mohan, Serhii Dashkevych, Joon Yoo

et al.

Mathematics, Journal Year: 2024, Volume and Issue: 12(18), P. 2808 - 2808

Published: Sept. 11, 2024

Breast cancer is one of the most lethal and widespread diseases affecting women worldwide. As a result, it is necessary to diagnose breast cancer accurately and efficiently, utilizing cost-effective and widely used methods. In this research, we demonstrated that synthetically created high-quality ultrasound data outperformed conventional augmentation strategies for diagnosing breast cancer using deep learning. We trained a deep-learning model with the EfficientNet-B7 architecture on a large dataset of 3186 ultrasound images acquired from multiple publicly available sources, as well as 10,000 images generated using generative adversarial networks (StyleGAN3). The model was trained with five-fold cross-validation and validated using four metrics: accuracy, recall, precision, and F1 score. The results showed that integrating synthetically produced data into the training set increased classification performance from 88.72% to 92.01% based on the F1 score, demonstrating the power of generative models to expand and improve the quality of training datasets in medical-imaging applications. The larger training set comprising synthetic images significantly improved performance by more than 3% over genuine data with common augmentation. Various procedures were also investigated to improve the training set's diversity and representativeness. This research emphasizes the relevance of modern artificial intelligence and machine-learning technologies in medical imaging by providing an effective strategy for categorizing ultrasound images, which may lead to improved diagnostics and optimal treatment options. The proposed techniques are highly promising and have strong potential for future clinical application in the diagnosis of breast cancer.
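
The evaluation protocol here is five-fold cross-validation over a pooled real-plus-synthetic dataset. A minimal sketch of the standard k-fold index split the abstract refers to (this is the generic procedure, not the paper's released code):

```python
def kfold_indices(n, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation
    over n samples, after a deterministic shuffle."""
    import random
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k near-equal interleaved folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# Toy example: 20 samples standing in for the pooled real + synthetic set.
splits = list(kfold_indices(20, k=5))
```

Each sample appears in exactly one validation fold, so the reported metric is an average over k held-out evaluations rather than a single split. One caveat worth noting when mixing GAN data: synthetic images are typically kept out of the validation folds so that metrics reflect real data only.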

Language: English

Citations

4

Improving Endoscopic Image Analysis: Attention Mechanism Integration in Grid Search Fine-tuned Transfer Learning Model for Multi-class Gastrointestinal Disease Classification DOI Creative Commons
M. A. Elmagzoub,

Swapandeep Kaur,

Sheifali Gupta

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 80345 - 80358

Published: Jan. 1, 2024

Due to continuous changes in people's lifestyle and dietary habits, gastrointestinal diseases are on the increase, with these changes being a major contributor to a variety of bowel problems. Around two million people around the world die due to gastrointestinal (GI) diseases. Endoscopy is a medical imaging technology helpful in diagnosing diseases like polyps and esophagitis. Its manual diagnosis is time-consuming; hence, computer-aided techniques are now widely used for accurate and fast GI disease diagnosis. In this paper, the Kvasir dataset of 4000 endoscopic images, comprising 500 images of each of eight GI tract classes, has been classified using seven grid-search fine-tuned transfer learning models. The models employed in this paper are ResNet101, InceptionV3, InceptionResNetV2, Xception, DenseNet121, MobileNetV2, and ResNet50. The grid search algorithm was used to determine the architectural and fine-tuning hyperparameters. The ResNet101 model performed best, with a learning rate of 0.001, a batch size of 32, and the SGD optimizer at 40 epochs. These hyperparameters were optimized through grid search along with a new set of layers added to the model. The newly added layers include one flatten layer, one dropout layer, and five dense layers tuned through grid search. The model obtained an accuracy of 0.90, a precision of 0.92, and a recall and f1-score of 0.91. Further, the model was integrated with an attention mechanism to enhance performance by focusing on essential image features, notably where some regions may contain vital diagnostic information. The proposed model achieved an accuracy of 0.935, a precision of 0.93, a recall of 0.94, and an f1-score of 0.93.
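
Grid search, as used here, exhaustively evaluates every combination in a discrete hyperparameter space and keeps the best-scoring one. A minimal sketch of that loop (the search space mirrors the hyperparameters named in the abstract; the scoring function is a stand-in, since a real run would train and evaluate each configuration):

```python
from itertools import product

# Hypothetical search space mirroring the hyperparameters tuned in the paper.
grid = {
    "learning_rate": [0.01, 0.001, 0.0001],
    "batch_size": [16, 32, 64],
    "optimizer": ["SGD", "Adam"],
}

def dummy_validation_accuracy(cfg):
    """Stand-in for training + evaluating a model; returns a mock score
    peaking at the configuration the paper reports as best."""
    best = {"learning_rate": 0.001, "batch_size": 32, "optimizer": "SGD"}
    return sum(cfg[k] == best[k] for k in best) / len(best)

keys = list(grid)
best_cfg, best_score = None, -1.0
for values in product(*(grid[k] for k in keys)):  # 3 * 3 * 2 = 18 combinations
    cfg = dict(zip(keys, values))
    score = dummy_validation_accuracy(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
```

The cost grows multiplicatively with each added hyperparameter axis, which is why grid search is usually paired, as here, with a small fine-tuning head rather than full-network retraining.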

Language: English

Citations

3

Systematic Review of Deep Learning Techniques in Skin Cancer Detection DOI Creative Commons
Carolina Magalhaes, Joaquim Mendes, Ricardo Vardasca

et al.

BioMedInformatics, Journal Year: 2024, Volume and Issue: 4(4), P. 2251 - 2270

Published: Nov. 14, 2024

Skin cancer is a serious health condition, as it can locally evolve into disfiguring states or metastasize to different tissues. Early detection of this disease is critical because it increases the effectiveness of treatment, which contributes to improved patient prognosis and reduced healthcare costs. Visual assessment and histopathological examination are the gold standards for diagnosing these types of lesions. Nevertheless, both processes are strongly dependent on dermatologists' experience, with excision advised only when cancer is suspected by a physician. Multiple approaches have surfaced over the last few years, particularly those based on deep learning (DL) strategies, with the goal of assisting medical professionals in the diagnosis process and ultimately diminishing diagnostic uncertainty. This systematic review focused on the analysis of relevant studies of DL applications in skin cancer diagnosis. The qualitative analysis included 164 records on the topic. The AlexNet, ResNet-50, VGG-16, and GoogLeNet architectures are considered the top choices for obtaining the best classification results, and multiclassification is the current trend. Public databases are key elements in this area and should be maintained to facilitate scientific research.

Language: English

Citations

3

Deep Neural Networks for Skin Cancer Classification: Analysis of Melanoma Cancer Data DOI Open Access
Stephen Afrifa, V. Vijayakumar, Peter Appiahene

et al.

Journal of Advances in Information Technology, Journal Year: 2025, Volume and Issue: 16(1), P. 1 - 11

Published: Jan. 1, 2025

The skin is the largest organ in the human body, serving as its outermost covering. The skin protects the body from the elements and viruses, regulates temperature, and provides cold, heat, and touch sensations. A lesion is a type of abnormality in or on the skin. Melanoma is the deadliest cancer in its family. Several researchers have developed noninvasive approaches for detecting skin cancer as technology has advanced. The early detection of melanoma is crucial for treatment. In this study, we introduce deep neural networks for diagnosing melanoma stages using a Convolutional Neural Network (CNN), a Capsule Network (CapsNet), and a Gabor Convolutional Network (GCN). To train the models, International Skin Imaging Collaboration (ISIC) data are used. Prior to deploying the networks, methods such as preprocessing of the dataset images to remove noise and lighting concerns for better visual information are used. The Deep Learning (DL) models are employed to classify the images' lesions. The performance of the proposed models is evaluated with cutting-edge metrics, and the results show that the presented method beats state-of-the-art techniques. The models achieve average accuracies of 90.30% for the CNN, 87.90% for CapsNet, and 86.80% for the GCN, demonstrating their capability to recognize and segment lesions. These developments enable health practitioners to provide more accurate diagnoses and help government healthcare systems with identification and treatment initiatives.
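
The Gabor Convolutional Network mentioned above builds on the Gabor filter, a Gaussian envelope modulated by a sinusoid that responds to oriented texture. A minimal NumPy sketch of the real part of a 2-D Gabor kernel (parameter values are illustrative defaults, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a cosine wave.
    theta sets orientation, lam the wavelength, gamma the aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

k = gabor_kernel()
```

Banks of such kernels at several orientations and scales give a network built-in sensitivity to the streak and border textures that matter in dermoscopic lesions, with fewer learned parameters than free convolutions.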

Language: English

Citations

0

LightweightUNet: Multimodal Deep Learning with GAN-Augmented Imaging Data for Efficient Breast Cancer Detection DOI Creative Commons
Hari Mohan, Joon Yoo, Saurabh Agarwal

et al.

Bioengineering, Journal Year: 2025, Volume and Issue: 12(1), P. 73 - 73

Published: Jan. 15, 2025

Breast cancer ranks as the second most prevalent cancer globally and is frequently diagnosed among women; therefore, early, automated, and precise detection is essential. Most AI-based techniques for breast cancer detection are complex and have high computational costs. Hence, to overcome this challenge, we present the innovative LightweightUNet hybrid deep learning (DL) classifier for the accurate classification of breast cancer. The proposed model boasts a low computational cost due to the smaller number of layers in its architecture, and its adaptive nature stems from the use of depth-wise separable convolution. We employed a multimodal approach to validate the model's performance, using 13,000 images from two distinct modalities: mammogram imaging (MGI) and ultrasound imaging (USI). We collected the datasets from seven different sources, including the benchmark DDSM, MIAS, INbreast, BrEaST, BUSI, Thammasat, and HMSS datasets. Since the images come from various sources, we resized them to a uniform size of 256 × 256 pixels and normalized them using the Box-Cox transformation technique. Since the USI dataset is smaller, we applied StyleGAN3 to generate 10,000 synthetic ultrasound images. In this work, we performed two separate experiments: the first on the real datasets without augmentation and the second on the real + GAN-augmented datasets using our method. During the experiments, we used the 5-fold cross-validation method and obtained good results on the real datasets (87.16% precision, 86.87% recall, 86.84% F1-score, and comparable accuracy) without adding any extra data. Similarly, the second experiment provides better performance on the augmented datasets (96.36% precision and 96.35% accuracy). This approach, which utilizes LightweightUNet, enhances precision by 9.20%, recall by 9.48%, and accuracy by 9.51% on the combined dataset. The model works very well thanks to its creative network design, the synthetic data, and the training strategy. These results show that LightweightUNet has considerable potential in clinical settings.
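
The "lightweight" claim rests on depth-wise separable convolution, which factors a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mix. The parameter savings can be sketched with a simple count (the channel sizes below are illustrative, not LightweightUNet's actual dimensions):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)                  # 9 * 64 * 128 = 73,728 weights
sep = depthwise_separable_params(3, 64, 128)   # 576 + 8,192  =  8,768 weights
reduction = std / sep                          # roughly 8.4x fewer parameters
```

For a k x k kernel the reduction factor approaches k² as the channel count grows, which is the main lever behind lightweight architectures of this kind.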

Language: English

Citations

0

Bionic Hand Control with Real-Time B-Mode Ultrasound Web AI Vision DOI
Yury Rusinovich, Volha Rusinovich, Markus Doß

et al.

Deleted Journal, Journal Year: 2025, Volume and Issue: 1(1), P. d050425 - d050425

Published: April 5, 2025

Aim: This basic research study aimed to assess the ability of Web AI Vision to classify anatomical movement patterns in real-time B-mode ultrasound scans for controlling a virtual bionic limb. Methods: A MobileNetV2 model, implemented via the TensorFlow.js library, was used for transfer learning and feature extraction from 400 ultrasound images of the distal forearm of one individual participant, corresponding to four different hand positions: 100 images each of a fist position, thumb palmar abduction, a fist with an extended forefinger, and an open palm. Results: After 32 epochs of training with a learning rate of 0.001 and a batch size of 16, the model achieved 100% validation accuracy and a test loss (cross-entropy) of 0.0067 in differentiating the patterns associated with the specific hand positions. During manual testing on 40 images excluded from training, validation, and testing, the model was able to correctly predict the hand position in all cases (100%), with a mean predicted probability of 98.9% (SD ± 0.6). When tested on cine loops simulating live scanning, the model successfully performed predictions with a 20 ms interval between predictions, achieving 50 predictions per second. Conclusion: The study demonstrated the feasibility of real-time bionic hand control with Web AI Vision. Such ultrasound- and AI-powered limbs can be easily and automatically retrained and recalibrated in a privacy-safe manner on the client side, within a web environment, without extensive computational costs. Using the same scanner that controls the limb, patients can efficiently adjust to new positions as needed, without relying on external services. The advantages of this combination warrant further research into muscle analysis and the utilization of ultrasound-powered Web AI Vision in rehabilitation medicine, neuromuscular disease management, and advanced prosthetic control for amputees.

Language: English

Citations

0

Identification Improvement of Malignant Skin Diseases for Diverse Skin Tones with Grad-CAM DOI

Audrey Na

Lecture notes in computer science, Journal Year: 2025, Volume and Issue: unknown, P. 129 - 140

Published: Jan. 1, 2025

Language: English

Citations

0