AI-Enhanced Diagnosis: Pediatric Chest X-ray Classification for Bronchiolitis and Pneumonia DOI
Naveen Gehlot,

Khushi Soni,

P Kothari

et al.

Published: Nov. 22, 2023

The pediatric diseases in question are bronchiolitis and pneumonia, which pose a significant threat to children, especially those under ten years of age. Rapid diagnosis often requires a chest X-ray, yet reading and interpreting these images is challenging and demands the expertise of a skilled doctor. It is therefore essential to take advantage of advanced image recognition techniques to aid such examinations by extracting the necessary information. This study employed deep transfer learning models, including VGG16, VGG19, MobileNetV2, and InceptionResNetV2, to diagnose bronchiolitis and pneumonia from pediatric chest X-rays (PCXr) for the first time. Our findings show that the InceptionResNetV2 model achieved the highest recall for bronchiolitis, with an impressive value of 78.82%, followed by VGG16 at 77.64%, MobileNetV2 at 74.11%, and VGG19 at 62.35%. Furthermore, when the models are assessed comprehensively in terms of F-score, InceptionResNetV2 outperformed the others with an F-score of 65.68%.
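Of the backbones listed in this abstract, torchvision ships VGG16, VGG19, and MobileNetV2 out of the box (InceptionResNetV2 is available through third-party packages such as timm), so a minimal transfer-learning sketch in the spirit of the study might look as follows. The class set, input size, and optimizer settings below are illustrative assumptions, not the paper's reported configuration.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed label set, e.g. bronchiolitis / pneumonia / normal

def build_model(num_classes: int = NUM_CLASSES) -> nn.Module:
    # Start from an ImageNet-pretrained backbone and replace only the classifier head.
    backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    in_features = backbone.classifier[-1].in_features
    backbone.classifier[-1] = nn.Linear(in_features, num_classes)
    return backbone

model = build_model()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of grayscale CXRs replicated
# to three channels, since ImageNet backbones expect 3-channel input.
images = torch.randn(8, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()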

Language: English

HydraViT: Adaptive multi-branch transformer for multi-label disease classification from Chest X-ray images DOI
Şaban Öztürk, Mehmet Y. Turali, Tolga Çukur

et al.

Biomedical Signal Processing and Control, Journal year: 2024, Number: 100, P. 106959 - 106959

Published: Sep. 30, 2024

Language: English

Cited by

8

Learning to Generalize towards Unseen Domains via a Content-Aware Style Invariant Model for Disease Detection from Chest X-rays DOI

Mohammad Zunaed,

Md. Aynal Haque, Taufiq Hasan

et al.

IEEE Journal of Biomedical and Health Informatics, Journal year: 2024, Number: 28(6), P. 3626 - 3636

Published: March 5, 2024

Performance degradation due to distribution discrepancy is a longstanding challenge in intelligent imaging, particularly for chest X-rays (CXRs). Recent studies have demonstrated that CNNs are biased toward styles (e.g., uninformative textures) rather than content (e.g., shape), in stark contrast to the human vision system. Radiologists tend to learn visual cues from CXRs and thus perform well across multiple domains. Motivated by this, we employ novel on-the-fly style randomization modules at both the image (SRM-IL) and feature (SRM-FL) levels to create rich style-perturbed features while keeping the content intact for robust cross-domain performance. Previous methods simulate unseen domains by constructing new styles via interpolation or by swapping the styles of existing data, limiting them to the source domains available during training. SRM-IL, however, samples style statistics from the possible value range of a CXR image instead of from the training data to achieve more diversified augmentations. Moreover, we utilize pixel-wise learnable parameters in SRM-FL, compared to pre-defined channel-wise mean and standard deviations, as style embeddings for capturing more representative style features. Additionally, we leverage consistency regularizations on the global semantic features and the predictive distributions of the same CXR with and without style perturbation to tweak the model's sensitivity toward content markers for accurate predictions. Our proposed method, trained on the CheXpert and MIMIC-CXR datasets, achieves 77.32±0.35, 88.38±0.19, and 82.63±0.13 AUCs (%) on the unseen-domain test datasets, i.e., BRAX, VinDr-CXR, and NIH ChestX-ray14, respectively, compared to 75.56±0.80, 87.57±0.46, and 82.07±0.19 from state-of-the-art models on five-fold cross-validation, with statistically significant results in thoracic disease classification.
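As a rough illustration of the image-level style randomization idea (SRM-IL), the sketch below re-standardizes each image with per-channel statistics drawn from the admissible pixel-value range rather than from other training images. The sampling ranges and the AdaIN-style transfer rule are assumptions for illustration, not the paper's exact module.

import torch

def style_randomize(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # x: (B, C, H, W) batch of images normalized to [0, 1].
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True)
    b, c = x.shape[:2]
    # Sample target statistics uniformly from the possible value range
    # instead of borrowing them from other source-domain images.
    new_mu = torch.rand(b, c, 1, 1, device=x.device)           # assumed range [0, 1]
    new_sigma = torch.rand(b, c, 1, 1, device=x.device) * 0.5  # assumed cap of 0.5
    # Strip the original "style" and impose the sampled one (AdaIN-like shift/scale).
    return (x - mu) / (sigma + eps) * new_sigma + new_mu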

Language: English

Cited by

4

Reconstruction-based approach for chest X-ray image segmentation and enhanced multi-label chest disease classification DOI Creative Commons
Aya Hage Chehade, Nassib Abdallah, Jean-Marie Marion

et al.

Artificial Intelligence in Medicine, Journal year: 2025, Number: 165, P. 103135 - 103135

Published: April 23, 2025

U-Net is a commonly used model for medical image segmentation. However, when applied to chest X-ray images that show pathologies, it often fails to include these critical pathological areas in the generated masks. To address this limitation, in our study we tackled the challenge of precise segmentation and mask generation by developing a novel approach, using CycleGAN, that encompasses the affected pathologies within the region of interest, allowing the extraction of relevant radiomic features linked to these pathologies. Furthermore, we adopted a feature selection approach to focus the analysis on the most significant features. The results of the proposed pipeline are promising, with an average accuracy of 92.05% and an AUC of 89.48% for the multi-label classification of effusion and infiltration acquired from the ChestX-ray14 dataset, using the XGBoost model. Applying the methodology to the 14 diseases of the dataset yielded a score of 83.12%, outperforming previous studies. This research highlights the importance of effective segmentation for the accurate classification of chest diseases. The promising results underscore its potential for broader applications.
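For the downstream stage only (with radiomic features assumed to be already extracted from the segmented regions), a hedged sketch of feature selection followed by multi-label XGBoost classification could look like this. The dummy feature matrix, the choice of SelectKBest, and the two-label setup are stand-in assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.multioutput import MultiOutputClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))        # dummy table: 200 images x 100 radiomic features
y = rng.integers(0, 2, size=(200, 2))  # dummy labels: effusion, infiltration

# Keep only the most discriminative features (selector fitted on one label for brevity).
selector = SelectKBest(f_classif, k=20).fit(X, y[:, 0])
X_sel = selector.transform(X)

# One XGBoost classifier per label via the multi-output wrapper.
clf = MultiOutputClassifier(XGBClassifier(n_estimators=200))
clf.fit(X_sel, y)
print(clf.predict(X_sel[:5]))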

Language: English

Cited by

0

Curriculum-Based Augmented Fourier Domain Adaptation for Robust Medical Image Segmentation DOI Creative Commons
An Wang, Mobarakol Islam, Mengya Xu

et al.

IEEE Transactions on Automation Science and Engineering, Journal year: 2023, Number: 21(3), P. 4340 - 4352

Published: July 24, 2023

Accurate and robust medical image segmentation is fundamental and crucial for enhancing the autonomy of computer-aided diagnosis and intervention systems. Medical data collection normally involves different scanners, protocols, and populations, making domain adaptation (DA) a highly demanding research field for alleviating model degradation at the deployment site. To preserve model performance across multiple testing domains, this work proposes Curriculum-based Augmented Fourier Domain Adaptation (Curri-AFDA) for medical image segmentation. In particular, our curriculum learning strategy is based on the causal relationship of a model under different levels of data shift in the deployment phase, where the higher the shift is, the harder it is for the model to recognize the variance. Considering this, we progressively introduce more amplitude information from the target to the source domain in the frequency space during the curriculum-style training to smoothly schedule the semantic knowledge transfer in an easier-to-harder manner. Besides, we incorporate training-time chained augmentation mixing to help expand the data distributions while preserving the domain-invariant semantics, which is beneficial for the acquired model to generalize better to unseen domains. Extensive experiments on two segmentation tasks of Retina and Nuclei, collected from different sites and scanners, suggest that the proposed method yields superior generalization performance. Meanwhile, our approach proves robust under various corruption types and increasing severity levels. In addition, we show that our method is also effective for a domain-adaptive classification task with skin lesion datasets. The code is available at https://github.com/lofrienger/Curri-AFDA.

Note to Practitioners: Medical image segmentation is key to improving computer-assisted autonomy. However, due to domain gaps between different sites, deep learning-based models frequently encounter performance degradation when deployed in a novel domain. Moreover, robustness is expected to mitigate the effects of data corruption. Considering all these challenging yet practical needs to automate medical applications and benefit healthcare, we propose the curriculum-based augmented Fourier domain adaptation approach; experiments on cross-domain datasets show its consistent superiority regarding generalization to unseen domains and robustness against synthetic corrupted data. The method is independent of imaging modalities because its efficacy does not rely on modality-specific characteristics. We further demonstrate its benefits beyond segmentation in an ablation study. Therefore, it can potentially be applied in many medical applications and yield improved performance. Future work may extend this approach by exploring the integration of the curriculum regime at fusion time rather than training time, as in most other existing works.
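The core Fourier-domain operation described above can be sketched as follows: blend the low-frequency amplitude of a source image with that of a target-domain image while keeping the source phase, with the blending ratio alpha increased over training to realize an easy-to-hard curriculum. The band size, ratio, and schedule here are assumptions, not the values used in the paper.

import numpy as np

def fourier_amplitude_mix(src: np.ndarray, tgt: np.ndarray,
                          alpha: float = 0.3, band: float = 0.1) -> np.ndarray:
    # src, tgt: 2-D grayscale images of equal shape; returns an augmented source image.
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    h, w = src.shape
    bh, bw = max(1, int(h * band)), max(1, int(w * band))
    cy, cx = h // 2, w // 2
    # Blend only the central (low-frequency) amplitude band; a curriculum would
    # gradually raise `alpha` so harder, more target-like samples appear later.
    amp_mix = amp_src.copy()
    amp_mix[cy - bh:cy + bh, cx - bw:cx + bw] = (
        (1 - alpha) * amp_src[cy - bh:cy + bh, cx - bw:cx + bw]
        + alpha * amp_tgt[cy - bh:cy + bh, cx - bw:cx + bw]
    )
    mixed = amp_mix * np.exp(1j * phase_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))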

Language: English

Cited by

6

A systematic review of generalization research in medical image classification DOI Creative Commons
Sarah Matta,

M. Lamard,

Philippe Zhang

et al.

Computers in Biology and Medicine, Journal year: 2024, Number: 183, P. 109256 - 109256

Published: Oct. 20, 2024

Language: English

Cited by

2

ThoraX-PriorNet: A Novel Attention-Based Architecture Using Anatomical Prior Probability Maps for Thoracic Disease Classification DOI Creative Commons
Md Iqbal Hossain,

Mohammad Zunaed,

Md Kawsar Ahmed

et al.

IEEE Access, Journal year: 2023, Number: 12, P. 3256 - 3273

Published: Dec. 22, 2023

Objective: Computer-aided disease diagnosis and prognosis based on medical images is a rapidly emerging field. Many Convolutional Neural Network (CNN) architectures have been developed by researchers for disease classification and localization from chest X-ray images. It is known that different thoracic disease lesions are more likely to occur in specific anatomical regions compared to others. This article aims to incorporate this region-dependent prior probability distribution within a deep learning framework. Methods: We present ThoraX-PriorNet, a novel attention-based CNN model for thoracic disease classification. We first estimate a disease-dependent spatial probability, i.e., an anatomical prior, that indicates the probability of occurrence of a disease in a specific region of a chest X-ray image. Next, we develop a novel attention-based classification model that combines information from the estimated anatomical prior and automatically extracted regions of interest (ROI) masks to provide attention to the feature maps generated by the convolution network. Unlike previous works that utilize various self-attention mechanisms, the proposed method leverages the extracted ROI masks along with the probabilistic anatomical prior information, which selects the region of interest for different diseases to provide attention. Results: The proposed method shows superior performance in disease classification on the NIH ChestX-ray14 dataset compared to existing state-of-the-art methods, reaching an area under the ROC curve (%AUC) of 84.67. Regarding disease localization, the anatomy-prior attention method shows competitive performance compared to state-of-the-art methods, achieving an accuracy of 0.80, 0.63, 0.49, 0.33, 0.28, 0.21, and 0.04 at Intersection over Union (IoU) thresholds of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, and 0.7, respectively. Impact Statement: ThoraX-PriorNet can be generalized to other medical image classification tasks where the lesion locations are dependent on specific anatomical sites.
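The attention mechanism described above, in which an anatomical prior probability map and an ROI mask reweight the backbone's feature maps, could be sketched roughly as below. The map shapes, interpolation modes, and the residual-style fusion rule are illustrative assumptions rather than the published architecture.

import torch
import torch.nn.functional as F

def prior_attention(features: torch.Tensor,
                    prior_map: torch.Tensor,
                    roi_mask: torch.Tensor) -> torch.Tensor:
    # features: (B, C, H, W) CNN feature maps.
    # prior_map, roi_mask: (B, 1, H0, W0) maps with values in [0, 1].
    size = features.shape[-2:]
    prior = F.interpolate(prior_map, size=size, mode="bilinear", align_corners=False)
    roi = F.interpolate(roi_mask, size=size, mode="nearest")
    attn = prior * roi                  # restrict the disease prior to the lung ROI
    return features * (1.0 + attn)      # residual-style reweighting of the features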

Language: English

Cited by

4

Breaking Down Covariate Shift on Pneumothorax Chest X-Ray Classification DOI
Bogdan Bercean,

Alexandru Buburuzan,

Andreea Birhala

et al.

Lecture notes in computer science, Journal year: 2023, Number: unknown, P. 157 - 166

Published: Jan. 1, 2023

Language: English

Cited by

1

Learning to Generalize towards Unseen Domains via a Content-Aware Style Invariant Model for Disease Detection from Chest X-rays DOI Creative Commons

Mohammad Zunaed,

Md. Aynal Haque, Taufiq Hasan

et al.

arXiv (Cornell University), Journal year: 2023, Number: unknown

Published: Jan. 1, 2023

Performance degradation due to distribution discrepancy is a longstanding challenge in intelligent imaging, particularly for chest X-rays (CXRs). Recent studies have demonstrated that CNNs are biased toward styles (e.g., uninformative textures) rather than content (e.g., shape), in stark contrast to the human vision system. Radiologists tend to learn visual cues from CXRs and thus perform well across multiple domains. Motivated by this, we employ novel on-the-fly style randomization modules at both the image (SRM-IL) and feature (SRM-FL) levels to create rich style-perturbed features while keeping the content intact for robust cross-domain performance. Previous methods simulate unseen domains by constructing new styles via interpolation or by swapping the styles of existing data, limiting them to the source domains available during training. SRM-IL, however, samples style statistics from the possible value range of a CXR image instead of from the training data to achieve more diversified augmentations. Moreover, we utilize pixel-wise learnable parameters in SRM-FL, compared to pre-defined channel-wise mean and standard deviations, as style embeddings for capturing more representative style features. Additionally, we leverage consistency regularizations on the global semantic features and the predictive distributions of the same CXR with and without style perturbation to tweak the model's sensitivity toward content markers for accurate predictions. Our proposed method, trained on the CheXpert and MIMIC-CXR datasets, achieves 77.32±0.35, 88.38±0.19, and 82.63±0.13 AUCs (%) on the unseen-domain test datasets, i.e., BRAX, VinDr-CXR, and NIH ChestX-ray14, respectively, compared to 75.56±0.80, 87.57±0.46, and 82.07±0.19 from state-of-the-art models on five-fold cross-validation, with statistically significant results in thoracic disease classification.

Language: English

Cited by

0
