A hybrid CNN–KNN approach for identification of COVID-19 with 5-fold cross validation

Zarin Anjuman Sejuti,

Md. Saiful Islam

Sensors International, Journal Year: 2023, Volume and Issue: 4, P. 100229 - 100229

Published: Jan. 1, 2023

The novel coronavirus is a new member of the SARS family, which can cause mild to severe infection in the lungs and in other vital organs such as the heart, kidneys and liver. For detecting COVID-19 from images, a traditional ANN can be employed. This method begins by extracting features, which are then fed into a suitable classifier. The classification rate is not very high, as the feature extraction depends on the experimenters' expertise. To overcome this drawback, a hybrid CNN-KNN-based model with 5-fold cross-validation is proposed to classify CT scans of patients as COVID-19 or non-COVID-19. At first, pre-processing steps such as contrast enhancement, median filtering, data augmentation and image resizing are performed. Secondly, the entire dataset is divided into five equal sections (folds) for training and testing. By doing cross-validation, generalization is ensured and overfitting of the network is prevented. The CNN consists of four convolutional layers, max-pooling layers and two fully connected layers, combined into 23 layers. The architecture is used as a feature extractor in this case. The features are taken from the model's fourth layer and, finally, classified using K-Nearest Neighbors rather than softmax for better accuracy. The experiment is conducted over an augmented dataset of 4085 CT scan images. The average accuracy, precision, recall and F1 score after performing 5-fold cross-validation are 98.26%, 99.42%, 97.2% and 98.19%, respectively. The method's accuracy is comparable to that of the existing works described further on, where state-of-the-art and custom models were used. Hence, the model can diagnose patients with higher efficiency.
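The final stage of the pipeline described above (deep features classified by KNN under 5-fold cross-validation) can be sketched as follows. This is an illustrative sketch, not the authors' code: the CNN feature extraction is stood in for by synthetic 64-dimensional vectors, and all sizes and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for deep features taken from a CNN layer
# (synthetic data; the paper extracts these from CT scans)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)
X[y == 1] += 1.0  # shift one class so the toy problem is separable

# 5-fold cross-validation: each fold serves once as the test set
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accs = []
for train_idx, test_idx in skf.split(X, y):
    # KNN classifier replaces the softmax head, as in the paper
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X[train_idx], y[train_idx])
    accs.append(accuracy_score(y[test_idx], knn.predict(X[test_idx])))

print(f"mean 5-fold accuracy: {np.mean(accs):.3f}")
```

Reporting the mean over the five folds, as done here, is what makes the cross-validated figures in the abstract robust to a lucky or unlucky train/test split.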

Language: English

Review of COVID-19 testing and diagnostic methods
Olena Filchakova,

Dina Dossym,

Aisha Ilyas

et al.

Talanta, Journal Year: 2022, Volume and Issue: 244, P. 123409 - 123409

Published: April 1, 2022

More than six billion tests for COVID-19 have already been performed in the world. The testing for the SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus-2) virus and the corresponding human antibodies is essential not only for diagnostics and treatment of the infection by medical institutions, but also as a pre-requisite for major semi-normal economic and social activities such as international flights, offline work and study in offices, access to malls, and sport events. Accuracy, sensitivity, specificity, time to results and cost per test are parameters for which even a minimal improvement in any of them may have a noticeable impact on life in many countries. We described, analyzed and compared methods of detection, while representing their data in 22 tables. Also, we compared the performance of some FDA-approved test kits with the clinical performance of non-FDA-approved ones just described in the scientific literature. RT-PCR still remains the golden standard for detection of the virus, but a pressing need for alternative, less expensive, more rapid, point-of-care methods is evident. Those of them that eventually get developed to satisfy this need are explained, discussed and quantitatively compared. The review is written from a bioanalytical chemistry perspective, but it may be interesting to a broader circle of readers who are interested in understanding COVID-19 testing, helping to leave the pandemic in the past.

Language: English

Citations

182

Medical image data augmentation: techniques, comparisons and interpretations
Evgin Göçeri

Artificial Intelligence Review, Journal Year: 2023, Volume and Issue: 56(11), P. 12561 - 12605

Published: March 20, 2023

Language: English

Citations

182

Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks
Sajid Nazir, Diane M. Dickson, Muhammad Usman Akram

et al.

Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 156, P. 106668 - 106668

Published: Feb. 20, 2023

Artificial Intelligence (AI) techniques of deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of the results, widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One major hindrance is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This linkage is of utmost importance for the regulated healthcare domain to increase trust in the automated system by practitioners, patients and other stakeholders. The application to medical imaging has to be interpreted with caution due to health and safety concerns, similar to the blame attribution in the case of an accident involving autonomous cars. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art algorithms comprise complex interconnected structures with millions of parameters and a 'black box' nature, offering little understanding of their inner working, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques can help to understand the predictions, which can develop trust in the system, accelerate diagnosis, and meet adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of the XAI techniques and discuss the open challenges and future directions that would be of interest to clinicians, regulators and model developers.

Language: English

Citations

124

COVID-19 image classification using deep learning: Advances, challenges and opportunities
Priya Aggarwal, Narendra Kumar Mishra, Binish Fatimah

et al.

Computers in Biology and Medicine, Journal Year: 2022, Volume and Issue: 144, P. 105350 - 105350

Published: March 3, 2022

Language: English

Citations

118

Complex features extraction with deep learning model for the detection of COVID19 from CT scan images using ensemble based machine learning approach

Md. Robiul Islam,

Md. Nahiduzzaman

Expert Systems with Applications, Journal Year: 2022, Volume and Issue: 195, P. 116554 - 116554

Published: Feb. 4, 2022

Language: English

Citations

109

Application of explainable artificial intelligence in medical health: A systematic review of interpretability methods
Shahab S. Band,

Atefeh Yarahmadi,

Chung-Chian Hsu

et al.

Informatics in Medicine Unlocked, Journal Year: 2023, Volume and Issue: 40, P. 101286 - 101286

Published: Jan. 1, 2023

This paper investigates the applications of explainable AI (XAI) in healthcare, which aims to provide transparency, fairness, accuracy, generality, and comprehensibility of the results obtained from ML algorithms and decision-making systems. The black-box nature of these systems has remained a challenge, and interpretable techniques can potentially address this issue. Here we critically review previous studies related to interpretability methods in medical systems. Descriptions of various types of XAI methods such as layer-wise relevance propagation (LRP), Uniform Manifold Approximation and Projection (UMAP), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), ANCHOR, contextual importance and utility (CIU), Training calibration-based explainers (TraCE), Gradient-weighted Class Activation Mapping (Grad-CAM), t-distributed Stochastic Neighbor Embedding (t-SNE), NeuroXAI, and the Explainable Cumulative Fuzzy Membership Criterion (X-CFCMC), along with the diseases that can be explained through these methods, are provided throughout the paper. The paper also discusses how XAI technologies can transform healthcare services. The usability and reliability of the presented methods are summarized, including works on XGBoost for mediastinal cysts and tumors, a 3D brain tumor segmentation network, and the TraCE method for medical image analysis. Overall, this work contributes to the growing field of XAI with insights for researchers, practitioners, and decision-makers in the healthcare industry. Finally, we discuss the performance of XAI methods applied to medical health care; a brief account of the implemented methods is given in the methodology section.
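Several of the model-agnostic methods surveyed above share one recipe; LIME, for instance, explains a single prediction by fitting a proximity-weighted linear surrogate to the black box around that instance. A minimal sketch of that idea follows; the `black_box` function, the instance, and all kernel widths are hypothetical toy choices, not any paper's setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy black-box classifier to explain (LIME treats it as opaque)
def black_box(X):
    return (2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2] > 0).astype(float)

rng = np.random.default_rng(1)
x0 = np.array([0.5, -0.2, 0.3])  # the instance whose prediction we explain

# 1. Perturb the instance in its local neighbourhood
Z = x0 + rng.normal(scale=0.5, size=(500, 3))
# 2. Query the black box on the perturbed samples
preds = black_box(Z)
# 3. Weight samples by proximity to x0 (RBF kernel)
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)
# 4. Fit an interpretable weighted linear surrogate
surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=w)

# The surrogate's coefficients are the local feature attributions
print("local feature weights:", np.round(surrogate.coef_, 2))
```

The surrogate recovers the local direction of the decision boundary: the first feature gets a large positive weight and the second a negative one, mirroring the black box's behaviour near `x0` even though the box itself was never inspected.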

Language: English

Citations

105

ECG-BiCoNet: An ECG-based pipeline for COVID-19 diagnosis using Bi-Layers of deep features integration
Omneya Attallah

Computers in Biology and Medicine, Journal Year: 2022, Volume and Issue: 142, P. 105210 - 105210

Published: Jan. 5, 2022

Language: English

Citations

86

Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
Felipe Giuste, Wenqi Shi, Yuanda Zhu

et al.

IEEE Reviews in Biomedical Engineering, Journal Year: 2022, Volume and Issue: 16, P. 5 - 21

Published: June 23, 2022

Despite the myriad of peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact, especially in diagnosis and disease precision staging. One major cause of such low impact is the lack of model transparency, significantly limiting AI adoption in real clinical practice. To solve this problem, AI models need to be explained to their users. Thus, we conducted a comprehensive study of Explainable Artificial Intelligence (XAI) using the PRISMA methodology. Our findings suggest that XAI can improve model performance, instill trust in users, and assist users in decision-making. In this systematic review, we introduce common XAI techniques and their utility, with specific examples of their application. We discuss the evaluation of XAI results because it is an important step in maximizing the value of AI-based clinical decision support systems. Additionally, we present traditional, modern, and advanced XAI models to demonstrate the evolution of the techniques. Finally, we provide a best practice guideline that developers can refer to during experimentation, and we also offer potential solutions with specific examples. This review will, hopefully, promote XAI in biomedicine and healthcare.

Language: English

Citations

85

A survey on the interpretability of deep learning in medical diagnosis

Qiaoying Teng,

Zhe Liu, Yuqing Song

et al.

Multimedia Systems, Journal Year: 2022, Volume and Issue: 28(6), P. 2335 - 2355

Published: June 25, 2022

Language: English

Citations

84

Deep learning techniques for detection and prediction of pandemic diseases: a systematic literature review
Sunday Adeola Ajagbe, Matthew O. Adigun

Multimedia Tools and Applications, Journal Year: 2023, Volume and Issue: 83(2), P. 5893 - 5927

Published: May 29, 2023

Abstract Deep learning (DL) is becoming a fast-growing field in the medical domain, and it helps in the timely detection of infectious diseases (IDs), which is essential to the management of diseases and the prediction of future occurrences. Many scientists and scholars have implemented DL techniques for pandemics, IDs and other healthcare-related purposes, and these outcomes come with various limitations and research gaps. For the purpose of achieving an accurate, efficient and less complicated DL-based system for pandemics, therefore, this study carried out a systematic literature review (SLR) on pandemics using DL techniques. The survey is anchored by four objectives and a state-of-the-art review of forty-five papers, out of seven hundred and ninety retrieved from different scholarly databases, to analyze and evaluate the trend of DL application areas in pandemics. This study used tables and graphs extracted from the related articles in online repositories for the analysis, which showed that DL is a good tool for pandemic prediction. Scopus and Web of Science are given attention in the current study because they contain suitable scientific findings in the subject area. Finally, the study presents forty-four (44) studies of DL technique performances. The challenges identified include low performance of the models due to computational complexities, improper labeling, and the absence of high-quality datasets, among others. The study suggests possible solutions, such as the development of improved models or the reduction of the output layer of the DL architecture, for pandemic-prone disease considerations.

Language: English

Citations

68