Classifying Customers’ Journey from Online Reviews of Amazon Fresh via Sentiment Analysis and Topic Modelling DOI
Jyoti Rana, Loveleen Gaur, K. C. Santosh

et al.

Published: Nov. 15, 2022

A positive customer journey experience is necessary to maintain loyalty in online retailing. After the outbreak of Covid-19, there has been a significant increase in the number of customers who buy groceries online. Owing to the anonymity and convenience throughout the journey, e-grocery shopping platforms have become a reliable source for gathering reviews. In this study, we used text mining and machine learning (ML) models on an e-grocery review database from the Amazon Fresh website to forecast the sentiments in the data set. More specifically, this study aimed to determine whether customers are satisfied with the purchased products or not. Further, it aims to analyze whether they would recommend the purchased products. For sentiment analysis, a sample of 78,619 reviews was used. We applied a linguistic approach consisting of ML and dictionary-based scoring algorithms to classify customers' sentiments based on their reviews. Topic modeling (TM) of 326,120 reviews was used to reveal "themes" and gain better knowledge of customer experiences.
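As a rough illustration of the kind of pipeline this abstract describes, the Python sketch below combines dictionary-based sentiment scoring (VADER) with LDA topic modelling; the sample reviews, the lexicon choice, and the two-topic setting are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' pipeline): dictionary-based sentiment scoring
# with VADER plus LDA topic modelling over a hypothetical list of review texts.
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "Fresh vegetables, delivered on time, would definitely recommend.",
    "The fruit arrived bruised and the delivery was late.",
]

# 1) Dictionary-based sentiment scoring: VADER compound score in [-1, 1].
sia = SentimentIntensityAnalyzer()
labels = ["positive" if sia.polarity_scores(r)["compound"] >= 0 else "negative"
          for r in reviews]
print("sentiments:", labels)

# 2) Topic modelling: bag-of-words counts fed to LDA to surface latent "themes".
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
counts = vectorizer.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# The top words per topic give a rough description of each theme.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}:", top_terms)
```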

Language: English

Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research DOI Creative Commons
Zhibo Zhang, Hussam Al Hamadi, Ernesto Damiani

et al.

IEEE Access, Journal Year: 2022, Volume and Issue: 10, P. 93104 - 93139

Published: Jan. 1, 2022

This survey presents a comprehensive review of the current literature on Explainable Artificial Intelligence (XAI) methods for cyber security applications. Due to the rapid development of Internet-connected systems in recent years, Artificial Intelligence, including Machine Learning (ML) and Deep Learning (DL), has been widely utilized in fields such as intrusion detection, malware detection, and spam filtering. However, although Artificial Intelligence-based approaches for the detection of and defense against cyber attacks and threats are more advanced and efficient than conventional signature-based and rule-based strategies, most ML-based and DL-based techniques are deployed in a black-box manner, meaning that security experts and customers are unable to explain how such procedures reach particular conclusions. The deficiencies in the transparency and interpretability of existing methods would decrease human users' confidence in the models used for defense against attacks, especially in situations where attacks become increasingly diverse and complicated. Therefore, it is essential to apply XAI in the establishment of cyber security models to create explainable models, while maintaining high accuracy, that allow human users to comprehend, trust, and manage the next generation of cyber defense mechanisms. Although there are papers reviewing XAI applications in areas as vast as healthcare, financial services, and criminal justice, it is surprising that there are currently no research articles that concentrate on XAI for cyber security.
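To make the black-box issue concrete, here is a minimal sketch (not taken from the surveyed papers) of a post-hoc SHAP explanation for a toy intrusion-detection classifier; the synthetic flow features, their names, and the gradient-boosting model are assumptions for illustration only.

```python
# Hedged sketch: post-hoc SHAP attributions for a toy "intrusion detection"
# classifier trained on synthetic stand-ins for network-flow statistics.
import numpy as np
import shap                                   # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic features standing in for flow statistics (duration, bytes, packets, ...).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
feature_names = [f"flow_stat_{i}" for i in range(X.shape[1])]   # hypothetical names
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer gives per-feature attributions (log-odds scale) for each alert,
# which is what lets an analyst see *why* a flow was flagged as malicious.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)   # shape: (n_samples, n_features)

mean_abs = np.abs(shap_values).mean(axis=0)   # global importance ranking
for name, score in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```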

Language: English

Citations

181

The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review DOI Creative Commons
Subhan Ali, Filza Akhlaq, Ali Shariq Imran

et al.

Computers in Biology and Medicine, Journal Year: 2023, Volume and Issue: 166, P. 107555 - 107555

Published: Oct. 4, 2023

In domains such as medicine and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research, focused on understanding the black-box nature of complex and hard-to-interpret models. While humans can increase the accuracy of these models through technical expertise, understanding how they actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) provide explanations for these models, improving predictions by providing feature importance and increasing confidence in the systems. Many articles have been published that propose solutions to medical problems using machine learning alongside explainability. In our study, we identified 454 articles published from 2018 to 2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
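A minimal sketch of the LIME workflow this abstract refers to, using scikit-learn's breast-cancer dataset as a stand-in for clinical data; the random-forest model and the number of reported features are illustrative choices, not drawn from the reviewed studies.

```python
# Illustrative sketch only: LIME explaining one prediction of a tabular medical
# classifier (scikit-learn's breast-cancer dataset stands in for clinical data).
from lime.lime_tabular import LimeTabularExplainer      # pip install lime
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single patient-level prediction: LIME fits a local surrogate around
# this instance and reports the features pushing the prediction up or down.
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```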

Language: English

Citations

121

Explanatory classification of CXR images into COVID-19, Pneumonia and Tuberculosis using deep learning and XAI DOI
Mohan Bhandari, Tej Bahadur Shahi, Birat Siku

et al.

Computers in Biology and Medicine, Journal Year: 2022, Volume and Issue: 150, P. 106156 - 106156

Published: Oct. 3, 2022

Language: English

Citations

100

Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer’s disease detection DOI Creative Commons
Vimbi Viswan, Noushath Shaffi, Mufti Mahmud

et al.

Brain Informatics, Journal Year: 2024, Volume and Issue: 11(1)

Published: April 5, 2024

Abstract: Explainable artificial intelligence (XAI) has gained much interest in recent years for its ability to explain the complex decision-making process of machine learning (ML) and deep learning (DL) models. The Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) frameworks have grown into popular interpretive tools for ML and DL models. This article provides a systematic review of the application of LIME and SHAP in interpreting the detection of Alzheimer’s disease (AD). Adhering to PRISMA and Kitchenham’s guidelines, we identified 23 relevant articles and investigated these frameworks’ prospective capabilities, benefits, and challenges in depth. The results emphasise XAI’s crucial role in strengthening the trustworthiness of AI-based AD predictions. This review aims to provide the fundamental capabilities of XAI in enhancing fidelity within clinical decision support systems for AD prognosis.

Language: English

Citations

58

Explainable Artificial Intelligence in Alzheimer’s Disease Classification: A Systematic Review DOI Creative Commons
Vimbi Viswan, Noushath Shaffi, Mufti Mahmud

et al.

Cognitive Computation, Journal Year: 2023, Volume and Issue: 16(1), P. 1 - 44

Published: Nov. 13, 2023

Abstract: The unprecedented growth of computational capabilities in recent years has allowed Artificial Intelligence (AI) models to be developed for medical applications with remarkable results. However, a large number of Computer Aided Diagnosis (CAD) methods powered by AI have limited acceptance and adoption in the medical domain due to the typical black-box nature of these models. Therefore, to facilitate their adoption among practitioners, the models' predictions must be explainable and interpretable. The emerging field of explainable AI (XAI) aims to justify the trustworthiness of these predictions. This work presents a systematic review of the literature reporting Alzheimer's disease (AD) detection using XAI that was communicated during the last decade. Research questions were carefully formulated to categorise the models into different conceptual approaches (e.g., Post-hoc, Ante-hoc, Model-Agnostic, Model-Specific, Global, Local, etc.) and frameworks (Local Interpretable Model-Agnostic Explanation or LIME, SHapley Additive exPlanations or SHAP, Gradient-weighted Class Activation Mapping or GradCAM, Layer-wise Relevance Propagation or LRP, etc.) of XAI. This categorisation provides broad coverage of the interpretation spectrum, from intrinsic (Ante-hoc models) to complex patterns (Post-hoc models), taking local explanations to a global scope. Additionally, different forms of interpretations providing in-depth insight into the factors that support clinical diagnosis of AD are also discussed. Finally, limitations, needs, and open challenges of XAI research are outlined with possible prospects for their usage in AD detection.
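The Ante-hoc versus Post-hoc distinction in that categorisation can be illustrated with a small, generic example (not tied to the AD studies reviewed): a shallow decision tree is its own explanation, while a black-box model needs a model-agnostic probe after training.

```python
# Toy contrast between the two ends of the interpretation spectrum:
# ante-hoc (intrinsically interpretable) vs. post-hoc, model-agnostic.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Ante-hoc: the fitted tree *is* its own explanation (global, model-specific).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))

# Post-hoc, model-agnostic: probe a black-box model after training to recover
# a global ranking of feature relevance.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
print(result.importances_mean.round(3))
```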

Language: English

Citations

50

An Explainable AI Paradigm for Alzheimer’s Diagnosis Using Deep Transfer Learning DOI Creative Commons
Tanjim Mahmud, Koushick Barua, Sultana Umme Habiba

et al.

Diagnostics, Journal Year: 2024, Volume and Issue: 14(3), P. 345 - 345

Published: Feb. 5, 2024

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions of individuals worldwide, causing severe cognitive decline and memory impairment. The early and accurate diagnosis of AD is crucial for effective intervention and management. In recent years, deep learning techniques have shown promising results in medical image analysis, including the diagnosis of AD from neuroimaging data. However, the lack of interpretability of these models hinders their adoption in clinical settings, where explainability is essential for gaining the trust and acceptance of healthcare professionals. In this study, we propose an explainable AI (XAI)-based approach for the diagnosis of Alzheimer’s disease, leveraging the power of deep transfer learning and ensemble modeling. The proposed framework aims to enhance explainability by incorporating XAI techniques, allowing clinicians to understand the decision-making process and providing valuable insights into the diagnosis. Using popular pre-trained convolutional neural networks (CNNs) such as VGG16, VGG19, DenseNet169, and DenseNet201, we conducted extensive experiments to evaluate their individual performances on a comprehensive dataset. The ensembles, Ensemble-1 (VGG16 and VGG19) and Ensemble-2 (DenseNet169 and DenseNet201), demonstrated superior accuracy, precision, recall, and F1 scores compared to the individual models, reaching up to 95%. In order to improve the transparency of the diagnosis, we introduced a novel model achieving an impressive accuracy of 96%. This model incorporates saliency maps and grad-CAM (gradient-weighted class activation mapping). The integration of these techniques not only contributes to the model’s exceptional accuracy but also provides clinicians and researchers with visual insights into the regions influencing the diagnosis. Our findings showcase the potential of combining deep transfer learning with explainability in the realm of medical imaging, paving the way for more interpretable and clinically relevant AI in healthcare.
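A hedged sketch of the Grad-CAM step mentioned here, applied to a stock torchvision VGG16 with a random tensor standing in for a pre-processed brain image; the target layer, pre-processing, and class choice are assumptions for illustration, not the authors' exact setup.

```python
# Minimal Grad-CAM sketch on a pre-trained VGG16 (torchvision); the input is
# random noise standing in for a pre-processed neuroimaging slice.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

model = vgg16(weights="IMAGENET1K_V1").eval()
target_layer = model.features[28]           # last convolutional layer of VGG16

activations, gradients = {}, {}
def fwd_hook(_, __, output): activations["value"] = output
def bwd_hook(_, grad_in, grad_out): gradients["value"] = grad_out[0]
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)             # placeholder image tensor
scores = model(x)
scores[0, scores.argmax()].backward()       # gradient of the top class score

# Grad-CAM: channel weights = global-average-pooled gradients; weighted sum of
# activations, ReLU'd, upsampled to input resolution, normalised to [0, 1].
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                            # (1, 1, 224, 224) heatmap
```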

Language: English

Citations

34

Exploring the Capabilities of a Lightweight CNN Model in Accurately Identifying Renal Abnormalities: Cysts, Stones, and Tumors, Using LIME and SHAP DOI Creative Commons
Mohan Bhandari, Pratheepan Yogarajah, Muthu Subash Kavitha

et al.

Applied Sciences, Journal Year: 2023, Volume and Issue: 13(5), P. 3125 - 3125

Published: Feb. 28, 2023

Kidney abnormality is one of the major concerns in modern society, and it affects millions of people around the world. To diagnose different abnormalities in human kidneys, a narrow-beam x-ray imaging procedure, computed tomography, is used, which creates cross-sectional slices of the kidneys. Several deep-learning models have been successfully applied to computed tomography images for classification and segmentation purposes. However, it has been difficult for clinicians to interpret a model's specific decisions, thus creating a "black box" system. Additionally, such models are hard to integrate with complex internet-of-medical-things devices due to their demanding training parameters and memory-resource cost. To overcome these issues, this study proposed (1) a lightweight customized convolutional neural network to detect kidney cysts, stones, and tumors and (2) understandable AI based on Shapley additive explanation values and local interpretable model-agnostic explanations to illustrate the model's predictive results. The proposed CNN model performed better than other state-of-the-art methods and obtained an accuracy of 99.52 ± 0.84% under K = 10-fold stratified sampling. With improved interpretive power, the proposed work provides clinicians with conclusive results.
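For orientation, a minimal sketch of what a lightweight three-class CNN of this kind could look like in PyTorch; the layer widths, input size, and parameter budget are illustrative assumptions rather than the architecture reported in the paper.

```python
# Hypothetical lightweight CNN for three kidney abnormality classes
# (cyst, stone, tumor); layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LightweightKidneyCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # keeps the head tiny regardless of input size
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = LightweightKidneyCNN()
params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {params:,}")      # ~23k, friendly to IoMT devices

# One grayscale CT slice (128x128) through the network -> 3 class logits.
logits = model(torch.randn(1, 1, 128, 128))
print(logits.shape)                             # torch.Size([1, 3])
```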

Language: English

Citations

36

A Survey on Explainable Artificial Intelligence for Cybersecurity DOI
Gaith Rjoub, Jamal Bentahar, Omar Abdel Wahab

et al.

IEEE Transactions on Network and Service Management, Journal Year: 2023, Volume and Issue: 20(4), P. 5115 - 5140

Published: June 5, 2023

The "black-box" nature of artificial intelligence (AI) models has been the source many concerns in their use for critical applications. Explainable Artificial Intelligence (XAI) is a rapidly growing research field that aims to create machine learning can provide clear and interpretable explanations decisions actions. In cybersecurity, XAI potential revolutionize way we approach network system security by enabling us better understand behavior cyber threats design more effective defenses. this survey, review state art cybersecurity explore various approaches have proposed address important problem. follows systematic classification issues networks digital systems. We discuss challenges limitations current methods context outline promising directions future research.

Language: English

Citations

31

Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis DOI Creative Commons
Dost Muhammad, Malika Bendechache

Computational and Structural Biotechnology Journal, Journal Year: 2024, Volume and Issue: 24, P. 542 - 560

Published: Aug. 12, 2024

This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discussing current challenges and future research directions, and exploring the evaluation metrics used to assess XAI approaches. With the growing efficiency of Machine Learning (ML) and Deep Learning (DL) in medical applications, there is a critical need for their adoption in healthcare. However, their "black-box" nature, where decisions are made without clear explanations, hinders their acceptance in clinical settings and can have significant medicolegal consequences. Our review highlights advanced XAI methods, identifying how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges faced by these methods and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, nurturing a more transparent, trustworthy, and effective use of AI in clinical settings. The insights guide both research and industry, promoting innovation and standardisation in XAI implementation.

Language: English

Citations

12

Explaining electroencephalogram channel and subband sensitivity for alcoholism detection DOI Creative Commons
Sandeep B. Sangle, Pramod Kachare, Digambar Puri

et al.

Computers in Biology and Medicine, Journal Year: 2025, Volume and Issue: 188, P. 109826 - 109826

Published: Feb. 18, 2025

Language: English

Citations

1