Machine Learning, Journal Year: 2024, Volume and Issue: 113(8), P. 5467 - 5494
Published: Jan. 22, 2024
Language: English
Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 156, P. 106668 - 106668
Published: Feb. 20, 2023
Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these results, the widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One major hindrance is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This linkage is of utmost importance in the regulated healthcare domain to increase trust in the automated system by practitioners, patients, and other stakeholders. The application of AI to medical imaging has to be interpreted with caution due to health and safety concerns, similar to blame attribution in the case of an accident involving autonomous cars. The consequences of both false positive and false negative cases are far reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art algorithms comprise complex interconnected structures with millions of parameters and a 'black box' nature, offering little understanding of their inner working, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which in turn helps to develop trust in the system, accelerate diagnosis, and meet adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI in biomedical diagnostics. We also provide a categorization of XAI techniques and discuss open challenges and future directions that would be of interest to clinicians, regulators, and model developers.
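As a hedged illustration of what such post-hoc explanation looks like in practice (the survey's abstract names no specific method), the sketch below computes a Grad-CAM heatmap for a CNN classifier; the ResNet-18 backbone and the random input tensor are placeholders, not anything taken from the surveyed work.

```python
# Hedged Grad-CAM sketch; assumes torch/torchvision, with a random tensor
# standing in for a medical scan.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

layer = model.layer4  # last conv block: coarse, class-discriminative features
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)       # placeholder for an imaging input
score = model(x)[0].max()             # score of the top predicted class
score.backward()

# Grad-CAM: weight each channel's activation map by its mean gradient, ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * activations["a"]).sum(dim=1)).squeeze()
print(cam.shape)  # 7x7 heatmap, to be upsampled over the input image
```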
Language: English
Citations: 124
Informatics in Medicine Unlocked, Journal Year: 2023, Volume and Issue: 40, P. 101286 - 101286
Published: Jan. 1, 2023
This paper investigates the applications of explainable AI (XAI) in healthcare, which aims to provide transparency, fairness, accuracy, generality, and comprehensibility of the results obtained from ML algorithms and decision-making systems. The black box nature of these systems has remained a challenge, and interpretable techniques can potentially address this issue. Here we critically review previous studies related to interpretability methods in the medical domain. Descriptions of various types of XAI methods such as layer-wise relevance propagation (LRP), Uniform Manifold Approximation and Projection (UMAP), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), ANCHOR, contextual importance and utility (CIU), Training calibration-based explainers (TraCE), Gradient-weighted Class Activation Mapping (Grad-CAM), t-distributed Stochastic Neighbor Embedding (t-SNE), NeuroXAI, and the Explainable Cumulative Fuzzy Class Membership Criterion (X-CFCMC), along with the diseases that can be explained through these methods, are provided throughout the paper. The paper also discusses how XAI technologies can transform healthcare services. The usability and reliability of the presented methods are summarized, including studies on XGBoost for mediastinal cysts and tumors, a 3D brain tumor segmentation network, and the TraCE method for medical image analysis. Overall, this paper contributes to the growing field of XAI by providing insights to researchers, practitioners, and decision-makers in the healthcare industry. Finally, we discuss the performance of XAI methods applied to health care; a brief description of the implemented methods is given in the methodology section.
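As a minimal sketch of one of the listed methods, the snippet below applies SHAP to a tree-ensemble classifier on scikit-learn's bundled breast-cancer table, which stands in for a real clinical dataset; none of this code comes from the reviewed paper.

```python
# Minimal SHAP sketch; assumes shap and scikit-learn are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer gives exact SHAP values (log-odds units) for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global summary: which features push predictions toward malignancy.
shap.summary_plot(shap_values, X_test)
```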
Language: English
Citations: 100
IEEE Geoscience and Remote Sensing Letters, Journal Year: 2023, Volume and Issue: 20, P. 1 - 5
Published: Jan. 1, 2023
An interpretable deep learning framework for land use and land cover classification (LULC) in remote sensing using SHAP is introduced. It utilizes a compact CNN model for the classification of satellite images and then feeds the results to a SHAP explainer so as to strengthen the classification results. The proposed framework is applied to Sentinel-2 imagery from the EuroSAT dataset, containing 27,000 images of pixel size 64 × 64, and operates on three-band combinations, reducing the model's input data by 77% considering that 13 spectral channels are available, while at the same time investigating how the different spectrum bands affect the predictions of the dataset's classes. Experimental results on the EuroSAT dataset demonstrate the CNN's accurate performance with an overall accuracy of 94.72%, whereas the analysis of the band combinations for each of the classes highlights its improvement when compared to standard approaches with a larger number of trainable parameters. The explainable results shield the network's predictions by showing the correlation values relevant to the predicted class, thereby improving the classifications occurring in urban and rural areas with mixed land uses per scene.
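The paper's own model and data pipeline are not given in the abstract; the hedged sketch below shows the general pattern of feeding a compact Keras CNN over 64 × 64 three-band patches to shap.GradientExplainer, with random arrays as placeholders for Sentinel-2 data.

```python
# Hedged sketch; assumes tensorflow and shap are installed. The model is
# untrained and the arrays are placeholders, for illustration only.
import numpy as np
import shap
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input((64, 64, 3)),          # one three-band combination
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 EuroSAT classes
])
# model.fit(...) on EuroSAT patches would happen here.

background = np.random.rand(50, 64, 64, 3).astype("float32")  # placeholder
samples = np.random.rand(5, 64, 64, 3).astype("float32")      # placeholder

# GradientExplainer attributes each pixel/band contribution per class.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(samples)
shap.image_plot(shap_values, samples)
```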
Language: English
Citations: 81
Asian Journal of Psychiatry, Journal Year: 2022, Volume and Issue: 79, P. 103316 - 103316
Published: Nov. 7, 2022
Language: English
Citations: 72
Environmental Science & Technology, Journal Year: 2023, Volume and Issue: 57(46), P. 17690 - 17706
Published: May 24, 2023
Chemical toxicity evaluations for drugs, consumer products, and environmental chemicals have a critical impact on human health. Traditional animal models used to evaluate chemical toxicity are expensive, time-consuming, and often fail to detect toxicants in humans. Computational toxicology is a promising alternative approach that utilizes machine learning (ML) and deep learning (DL) techniques to predict the toxicity potentials of chemicals. Although the applications of ML- and DL-based computational toxicity predictions are attractive, many models are "black boxes" in nature and difficult for toxicologists to interpret, which hampers chemical risk assessments using these models. The recent progress of interpretable ML (IML) in the computer science field meets this urgent need to unveil the underlying toxicity mechanisms and elucidate the domain knowledge of toxicity models. In this review, we focused on IML in computational toxicology, including toxicity feature data, model interpretation methods, use of knowledge base frameworks in IML development, and recent applications. The challenges and future directions of IML modeling in toxicology are also discussed. We hope this review can encourage efforts in developing interpretable models with new IML algorithms that can assist in illustrating the underlying mechanisms of chemical toxicity.
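As a hedged sketch of model-agnostic interpretation in this setting, the snippet below trains a random forest on synthetic stand-ins for molecular descriptors and ranks them by permutation importance; the descriptors and labels are fabricated placeholders, not toxicological data.

```python
# Hedged sketch; assumes scikit-learn. Random numbers stand in for
# molecular descriptors and binary toxicity labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                  # 20 hypothetical descriptors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # toxicity driven by two of them

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: accuracy drop when a descriptor is shuffled,
# a model-agnostic interpretation method.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"descriptor {i}: {result.importances_mean[i]:.3f}")
```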
Language: English
Citations: 50
Engineering Applications of Artificial Intelligence, Journal Year: 2024, Volume and Issue: 131, P. 107829 - 107829
Published: Jan. 9, 2024
Language: English
Citations: 25
International Journal of Information Technology, Journal Year: 2024, Volume and Issue: 16(3), P. 1279 - 1292
Published: Jan. 2, 2024
Abstract The big Artificial General Intelligence models inspire hot topics currently. However, the black box problems of Artificial Intelligence (AI) still exist and need to be solved urgently, especially in the medical area. Therefore, transparent and reliable AI models that work with small data are also urgently necessary. To build a trustable model with small data, we proposed a prior knowledge-integrated transformer model. We first acquired prior knowledge using SHapley Additive exPlanations from various pre-trained machine learning models. Then, we used this prior knowledge to construct the transformer model, and we compared our model with the Feature Tokenization Transformer and other classification models, tested on three open datasets and one non-open public dataset in Japan, to confirm the feasibility of our methodology. Our results certified that our model performs better (by 1%) than the general models. Meanwhile, our methodology identified that the self-attention of the factors is nearly the same, which needs to be explored in future work. Moreover, our research inspires endeavors in exploring transparent and reliable AI models with small data.
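The abstract does not spell out how the SHAP-derived prior is injected into the transformer; the hedged sketch below illustrates only the first step, condensing SHAP values from a pre-trained model into normalized per-feature weights, with the downstream use left as a labeled assumption rather than the authors' scheme.

```python
# Hedged sketch: deriving per-feature prior-knowledge weights from SHAP.
# Assumes shap and scikit-learn; the bundled dataset stands in for the
# paper's clinical tables, and the weighting scheme is an assumption.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
pretrained = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(pretrained)
shap_values = explainer.shap_values(X)        # (n_samples, n_features)

# Mean |SHAP| per feature, normalized to sum to 1: a crude "prior".
prior = np.abs(shap_values).mean(axis=0)
prior /= prior.sum()

# A feature-tokenizing transformer could, e.g., scale its feature embeddings
# by these weights before self-attention (illustrative, not the paper's method).
print(dict(zip(X.columns[:5], prior[:5].round(4))))
```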
Language: English
Citations: 23
Published: Jan. 3, 2025
The integration of advanced deep learning strategies has profoundly impacted the field of disease detection, promising sizable advancements in diagnostic accuracy and efficiency. This chapter explores the utilization of multiscale convolutional layers, attention mechanisms, transfer learning, generative adversarial networks (GANs), and self-supervised learning in the healthcare domain. These techniques collectively enhance the capability of convolutional neural networks (CNNs) to detect and diagnose diseases from medical images with extraordinary precision. Multiscale convolutional layers allow models to capture features at numerous scales, improving sensitivity and specificity, particularly in conditions like cancer. Attention mechanisms further refine this process by allowing models to focus on the relevant components of a medical image, mirroring the meticulous examination performed by human professionals. Transfer learning, by leveraging pretrained models, greatly reduces the reliance on large, labeled datasets, thereby expediting model development and enhancing accuracy. This approach has shown outstanding success across different imaging modalities, such as X-rays and CT scans, demonstrating the adaptability and robustness of pretrained models. GANs contribute by producing synthetic data to augment training sets, addressing the challenge of limited data availability and improving model performance, especially in rare-disease scenarios. Self-supervised learning, which trains models on unlabeled data via proxy tasks, has demonstrated performance comparable to fully supervised approaches while requiring fewer labeled samples, thereby lowering the need for costly and time-consuming annotation. Innovations in these areas not only improve technical detection performance but also open new avenues for clinical application. Future research directions include the exploration of multi-modal models that combine various data sources, including genomic information and digital health records, offering a more comprehensive perspective. The implementation of federated learning guarantees privacy across decentralized data sources. Explainable AI (XAI) can enhance interpretability, fostering greater trust and acceptance among clinicians. Moreover, the integration of wearable devices for continuous health tracking and the development of real-time adaptive models hold tremendous promise for revolutionizing patient care and management. This comprehensive approach to disease-identification methodologies underscores the transformative potential of deep learning in healthcare. By addressing current challenges and exploring innovative solutions, we can pave the way for more accurate, efficient, and personalized diagnostic systems, ultimately advancing the standard of practice.
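As a minimal sketch of the transfer-learning recipe described above, the snippet below freezes an ImageNet-pretrained ResNet-18 and retrains only a new two-class head; the data tensors are placeholders for real X-ray batches, not anything from the chapter.

```python
# Minimal transfer-learning sketch; assumes torch/torchvision.
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained weights and freeze the feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head for the new task; only it will be trained.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a placeholder batch.
x = torch.randn(8, 3, 224, 224)            # stand-in for X-ray tensors
y = torch.randint(0, 2, (8,))              # stand-in labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```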
Language: English
Citations: 4
Neural Computing and Applications, Journal Year: 2023, Volume and Issue: 35(15), P. 11459 - 11475
Published: March 10, 2023
Language: English
Citations: 40
Diagnostics, Journal Year: 2023, Volume and Issue: 13(10), P. 1692 - 1692
Published: May 10, 2023
Intrauterine fetal demise in women during pregnancy is a major contributing factor to prenatal mortality and is a global issue in developing and underdeveloped countries. When an unborn fetus passes away in the womb in the 20th week of pregnancy or later, early detection can help reduce the chances of intrauterine fetal demise. Machine learning models such as Decision Trees, Random Forest, SVM Classifier, KNN, Gaussian Naïve Bayes, Adaboost, Gradient Boosting, Voting Classifier, and Neural Networks are trained to determine whether the fetal health is Normal, Suspect, or Pathological. This work uses 22 features related to fetal heart rate obtained from the Cardiotocogram (CTG) clinical procedure for 2126 patients. Our paper focuses on applying various cross-validation techniques, namely K-Fold, Hold-Out, Leave-One-Out, Leave-P-Out, Monte Carlo, Stratified K-Fold, and Repeated K-Fold, to the above ML algorithms to enhance them and determine the best performing algorithm. We conducted exploratory data analysis to obtain detailed inferences on the features. The Gradient Boosting Classifier achieved 99% accuracy after applying the cross-validation techniques. The dataset used has a dimension of 2126 × 22, and the label is multiclass, classified as Normal, Suspect, or Pathological condition. Apart from incorporating cross-validation strategies on several machine learning algorithms, the research includes a Blackbox evaluation, an Interpretable Machine Learning technique, to understand the underlying working mechanism of each model and the means by which it picks features to train and predict values.
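A minimal sketch of the cross-validation setup, assuming scikit-learn and using a synthetic stand-in for the 2126 × 22 CTG table; the fold count and model settings are illustrative, not the paper's exact configuration.

```python
# Minimal cross-validation sketch; assumes scikit-learn. Synthetic data
# stands in for the CTG features and three fetal-health classes.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=2126, n_features=22, n_informative=10,
                           n_classes=3, random_state=0)

# Stratified K-Fold keeps the Normal/Suspect/Pathological ratio in every fold,
# which matters when the classes are imbalanced.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```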
Language: English
Citations: 33