A novel large capacity coverless image steganography method based on hybrid framework of image generation-mapping
Zhe Li, Qiuyu Zhang, Xiaopeng Li

et al.

Multimedia Systems, Journal year: 2024, Number: 30(6)

Published: Nov. 19, 2024

Language: English

Deep learning approaches to detect breast cancer: a comprehensive review

Amir Mohammad Sharafaddini,

Kiana Kouhpah Esfahani,

N. Mansouri

et al.

Multimedia Tools and Applications, Journal year: 2024, Number: unknown

Published: Aug. 20, 2024

Language: English

Cited by

7

(KAUH-BCMD) dataset: advancing mammographic breast cancer classification with multi-fusion preprocessing and residual depth-wise network
Asma’a Al-Mnayyis, Hasan Gharaibeh,

Mohammad Amin

et al.

Frontiers in Big Data, Journal year: 2025, Number: 8

Published: March 6, 2025

The categorization of benign and malignant patterns in digital mammography is a critical step in the diagnosis of breast cancer, facilitating early detection and potentially saving many lives. Diverse tissue architectures often obscure or conceal issues. Classifying worrying regions (benign and malignant patterns) in mammograms is a significant challenge for radiologists. Even for specialists, the first visual indicators are nuanced and irregular, complicating identification. Therefore, radiologists want an advanced classifier to assist in identifying breast cancer and categorizing regions of concern. This study presents an enhanced technique for breast cancer classification using mammographic images. The collection comprises real-world data from King Abdullah University Hospital (KAUH) at Jordan University of Science and Technology, consisting of 7,205 photographs from 5,000 patients aged 18-75. After being classified as benign or malignant, the pictures underwent preprocessing by rescaling, normalization, and augmentation. Multi-fusion approaches, such as high-boost filtering and contrast-limited adaptive histogram equalization (CLAHE), were used to improve picture quality. We created a unique Residual Depth-wise Network (RDN) to enhance the precision of breast cancer detection. The suggested RDN model was compared with prominent models, including MobileNetV2, VGG16, VGG19, ResNet50, InceptionV3, Xception, and DenseNet121. It exhibited superior performance, achieving an accuracy of 97.82%, precision of 96.55%, recall of 99.19%, specificity of 96.45%, F1 score of 97.85%, and validation accuracy of 96.20%. The findings indicate that the proposed model is an excellent instrument for classifying mammographic images and significantly improves performance when integrated with multi-fusion and efficient preprocessing approaches.
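The multi-fusion preprocessing named in the abstract (rescaling, normalization, high-boost filtering, and CLAHE) can be illustrated with a short OpenCV sketch; the kernel size, boost factor, and clip limit below are assumed values for illustration, not the paper's reported settings.

```python
import cv2
import numpy as np

def preprocess_mammogram(path, size=(224, 224), boost_factor=1.5, clip_limit=2.0):
    """Illustrative multi-fusion preprocessing: rescaling, CLAHE contrast
    enhancement, high-boost sharpening, and normalization to [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)

    # Contrast-limited adaptive histogram equalization (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    img = clahe.apply(img)

    # High-boost filtering: original + k * (original - blurred)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    mask = cv2.subtract(img, blurred)
    boosted = cv2.addWeighted(img, 1.0, mask, boost_factor, 0)

    # Normalize for input to a CNN classifier
    return boosted.astype(np.float32) / 255.0
```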

Language: English

Cited by

0

Dual-branch dynamic hierarchical U-Net with multi-layer space fusion attention for medical image segmentation
Zhen Wang, Shuang Fu,

Hongguang Zhang

et al.

Scientific Reports, Journal year: 2025, Number: 15(1)

Published: March 10, 2025

Accurate segmentation of organs or lesions from medical images is essential for accurate disease diagnosis and organ morphometrics. Previously, most researchers mainly added feature extraction modules or simply aggregated semantic features to the U-Net network to improve the segmentation accuracy of medical images. However, these improved networks ignore the differences between features from different layers and lack fusion of high-level and low-level features, which leads to blurred or missed boundaries between similar diseased areas. To solve this problem, we propose a Dual-branch dynamic hierarchical U-Net with multi-layer space fusion attention (D2HU-Net). Firstly, a multi-layer spatial fusion attention module makes the shallow decoding path provide a predictive map to supplement the deep decoding path. Under the guidance of higher layers, useful context features are selected from lower layers to obtain deeper information from the bottom up. Secondly, a multi-scale layered module enhances feature representation at a finer granularity level and selectively refines single-scale features. Finally, the network provides guiding optimization for subsequent decoding based on the loss functions. The experimental results on four datasets show that D2HU-Net enables advanced segmentation capabilities on medical image datasets, which can help doctors diagnose and treat diseases.
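As a rough illustration of the idea that higher-level features guide the selection of useful context from lower-level features, the following PyTorch sketch shows a generic spatial fusion attention gate; the module name SpatialFusionAttention and the layer shapes are hypothetical and are not the authors' D2HU-Net implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialFusionAttention(nn.Module):
    """Sketch of an attention gate in which a deep (high-level) feature map
    produces a spatial weighting that selects useful context from a shallow
    (low-level) feature map before the two are fused."""

    def __init__(self, low_ch, high_ch, inter_ch):
        super().__init__()
        self.proj_low = nn.Conv2d(low_ch, inter_ch, kernel_size=1)
        self.proj_high = nn.Conv2d(high_ch, inter_ch, kernel_size=1)
        self.attn = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, low, high):
        # Upsample the deep features to the spatial size of the shallow ones
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        # Additive attention: combine both paths, squash to a spatial mask
        gate = torch.sigmoid(self.attn(F.relu(self.proj_low(low) +
                                              self.proj_high(high))))
        # Reweight the shallow features and fuse them with the deep ones
        return torch.cat([low * gate, high], dim=1)
```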

Language: English

Cited by

0

HAFMAB-Net: hierarchical adaptive fusion based on multilevel attention-enhanced bottleneck neural network for breast histopathological cancer classification
Ali H. Abdulwahhab, Oğuz Bayat, Abdullahi Abdu İbrahim

et al.

Signal Image and Video Processing, Journal year: 2025, Number: 19(5)

Published: March 19, 2025

Language: English

Cited by

0

HiGMA-DADCN: Hirudinaria granulosa multitropic algorithm optimised double attention enabled deep convolutional neural network for psoriasis classification

C S Soumya,

H. S. Jayanna

Computer Methods in Biomechanics and Biomedical Engineering Imaging & Visualization, Journal year: 2025, Number: 13(1)

Published: April 8, 2025

Language: English

Cited by

0

Deep learning-based image annotation for leukocyte segmentation and classification of blood cell morphology
Vatsala Anand, Sheifali Gupta, Deepika Koundal

et al.

BMC Medical Imaging, Journal year: 2024, Number: 24(1)

Published: April 8, 2024

The research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises images of four classes: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To obtain isolated leukocytes, mask creation for erythrocytes and leukocytes is performed on the blood cell images. Isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep Convolutional Neural Network (CNN) model is employed on the augmented images for effective feature extraction and classification. The model consists of convolutional blocks having eleven convolutional layers, eight batch normalization layers, Rectified Linear Unit (ReLU) activations, and dropout layers to capture increasingly complex patterns. For this research, a publicly available dataset from Kaggle consisting of a total of 12,444 images of the four leukocyte types was used to conduct the experiments. Results showcase the robustness of the proposed framework, achieving impressive performance metrics with an accuracy of 97.98% and precision of 97.97%. These outcomes affirm the efficacy of the devised approach in accurately identifying and categorizing leukocytes. The combination of the advanced architecture and meticulous pre-processing steps establishes a foundation for future developments in the field of medical image analysis.
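A minimal PyTorch sketch of the kind of convolution, batch normalization, ReLU, and dropout block described above is given below; the number of blocks and channel widths are illustrative and do not reproduce the paper's exact eleven-convolution configuration.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, dropout=0.25):
    """Convolution -> batch norm -> ReLU -> dropout -> pooling; stacking such
    blocks yields a deep CNN of the sort described in the abstract."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Dropout2d(dropout),
        nn.MaxPool2d(2),
    )

# Example: a small leukocyte classifier built from such blocks
leukocyte_cnn = nn.Sequential(
    conv_block(3, 32),
    conv_block(32, 64),
    conv_block(64, 128),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 4),  # four leukocyte classes
)
```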

Language: English

Cited by

4

A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare

Kamini Lamba,

Shalli Rani

Journal of Neuroscience Methods, Journal year: 2024, Number: 408, pp. 110159-110159

Published: May 7, 2024

Language: English

Cited by

4

Fluorescence microscopy and histopathology image based cancer classification using graph convolutional network with channel splitting
Asish Bera, Debotosh Bhattacharjee, Ondřej Krejcar

et al.

Biomedical Signal Processing and Control, Journal year: 2025, Number: 103, pp. 107400-107400

Published: Jan. 6, 2025

Language: English

Cited by

0

Enhanced breast cancer detection and classification via CAMR-Gabor filters and LSTM: A deep learning-based method
Vinit Kumar, K. Chandrashekhara,

Naga Padmaja Jagini

et al.

Egyptian Informatics Journal, Journal year: 2025, Number: 29, pp. 100602-100602

Published: Jan. 8, 2025

Language: English

Cited by

0

Detection of Masses in Mammogram Images Based on the Enhanced RetinaNet Network With INbreast Dataset
Mingzhao Wang, Ran Liu, Joseph Luttrell

et al.

Journal of Multidisciplinary Healthcare, Journal year: 2025, Volume 18, pp. 675-695

Published: Feb. 1, 2025

Breast cancer is one of the most common major public health problems of women in the world. Until now, analyzing mammogram images has remained the main method used by doctors to diagnose and detect breast cancers. However, this process usually depends on the experience of radiologists and is always very time consuming. We propose to introduce deep learning technology into mammogram analysis to facilitate computer-aided diagnosis (CAD), address the challenges of class imbalance, enhance the detection of small masses and multiple targets, and reduce false positives and negatives in mammogram analysis. Therefore, we adopted an enhanced RetinaNet to detect masses in mammogram images. Specifically, we introduced a novel modification to the network structure, where the feature map M5 is processed by the ReLU function prior to the original convolution kernel. This strategic adjustment was designed to prevent the loss of resolution of mass features. Additionally, we applied transfer learning techniques during training by leveraging pre-trained weights from other applications, and fine-tuned our improved model using the INbreast dataset. The aforementioned innovations facilitate the superior performance of the enhanced RetinaNet on the INbreast dataset, as evidenced by an mAP (mean average precision) of 1.0000 and a TPR (true positive rate) of 1.00 at 0.00 FPPI (false positives per image). The experimental results demonstrate that our model defeats existing models by having more generalization than published studies, and it can also be applied to other types of patients to assist in making a proper diagnosis.
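The described modification, applying ReLU to the feature map M5 before the original convolution kernel, can be sketched as follows in PyTorch; the channel sizes and the class name ModifiedP5Head are assumptions for illustration and are not the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class ModifiedP5Head(nn.Module):
    """Sketch of the described FPN tweak: the coarsest lateral feature map M5
    is passed through ReLU before the usual 3x3 smoothing convolution that
    produces P5 in a RetinaNet-style feature pyramid."""

    def __init__(self, in_ch=2048, fpn_ch=256):
        super().__init__()
        self.lateral = nn.Conv2d(in_ch, fpn_ch, kernel_size=1)      # C5 -> M5
        self.smooth = nn.Conv2d(fpn_ch, fpn_ch, kernel_size=3, padding=1)

    def forward(self, c5):
        m5 = self.lateral(c5)
        # ReLU applied to M5 prior to the original convolution kernel
        return self.smooth(F.relu(m5))
```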

Language: English

Cited by

0