Enhancing winter road maintenance with explainable AI: SHAP analysis for interpreting machine learning models in road friction estimation
X. X. Ding, Tae J. Kwon

Canadian Journal of Civil Engineering, Journal year: 2024, Issue 51(5), pp. 529–544

Published: Jan. 11, 2024

Effective winter road maintenance relies on precise friction estimation. Machine learning (ML) models have shown significant promise in this task; however, their inherent complexity makes understanding their inner workings challenging. This paper addresses this issue by conducting a comparative analysis of friction estimation using four ML methods: regression tree, random forest, eXtreme Gradient Boosting (XGBoost), and support vector regression (SVR). We then employ the SHapley Additive exPlanations (SHAP) explainable artificial intelligence (AI) method to enhance model interpretability. Our analysis of an Alberta dataset reveals that XGBoost performs best, with an accuracy of 91.39%. The SHAP analysis illustrates logical relationships between predictor features within all three tree-based models, but it also uncovers inconsistencies in the SVR model, potentially attributable to insufficient feature interactions. This study thus not only showcases the role of explainable AI in improving model interpretability for friction estimation, but also provides practical insights that could improve winter road maintenance decisions.
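The SHAP attributions used in this paper satisfy the additivity ("local accuracy") property: the per-feature contributions sum to the difference between the model's prediction and a baseline prediction. A minimal brute-force sketch of exact Shapley values for a toy friction-style predictor (hypothetical features and coefficients, not the paper's model; "absent" features are replaced by a fixed baseline value):

```python
from itertools import combinations
from math import factorial

# Toy stand-in for a trained friction model (hypothetical):
# friction index from surface temperature (C), snowfall (cm), traffic speed (km/h).
def predict(temp, snow, speed):
    return 0.8 - 0.03 * max(0.0, -temp) - 0.05 * snow + 0.001 * speed

BASELINE = (0.0, 0.0, 50.0)   # reference input used as the "feature absent" stand-in
X = (-10.0, 2.0, 60.0)        # instance to explain

def shapley_values(f, x, base):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else base[j] for j in range(n)]
                without_i = [x[j] if j in subset else base[j] for j in range(n)]
                phi[i] += weight * (f(*with_i) - f(*without_i))
    return phi

phi = shapley_values(predict, X, BASELINE)
# Local accuracy: attributions sum to f(x) - f(baseline).
print(phi, sum(phi), predict(*X) - predict(*BASELINE))
```

Tree-based libraries compute these values far more efficiently (e.g. via TreeSHAP), but the additivity check above is the same property SHAP summary plots rely on.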

Language: English

Artificial intelligence capability and organizational performance: unraveling the mediating mechanisms of decision-making processes
Suheil Neiroukh, Okechukwu Lawrence Emeagwali, Hasan Yousef Aljuhmani

et al.

Management Decision, Journal year: 2024, Issue: unknown

Published: June 12, 2024

Purpose: This study investigates the impact of artificial intelligence (AI) capabilities on decision-making processes and organizational performance, addressing a crucial gap in the literature by exploring the mediating roles of decision-making speed and quality. Design/methodology/approach: Drawing upon resource-based theory and prior research, the study constructs a comprehensive model and hypotheses to illuminate the influence of AI capability within organizations on decision-making speed, decision quality, and, ultimately, organizational performance. A dataset comprising 230 responses from diverse firms forms the basis of the analysis, with the study employing partial least squares structural equation modeling (PLS-SEM) for robust data examination. Findings: The results demonstrate the pivotal role of AI capability in shaping organizational outcomes. AI capability significantly and positively affects decision-making speed, decision quality, and overall performance. Notably, decision-making speed is a critical factor contributing to enhanced performance. The study further uncovered mediation effects, suggesting that decision-making processes partially mediate the relationship between AI capability and organizational performance through decision speed. Originality/value: The study contributes to the existing body of knowledge by providing empirical evidence of the multifaceted impact of AI capabilities. Elucidating the mediating role of decision-making processes advances our understanding of the complex mechanisms through which AI capabilities drive organizational success.
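The mediation logic tested here (AI capability → decision speed → performance) decomposes a total effect into direct and indirect parts. A minimal OLS-based sketch on exact toy data (hypothetical numbers, Baron–Kenny style; the study itself uses PLS-SEM on survey responses):

```python
# X = AI capability, M = decision-making speed (mediator), Y = performance.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
U = [0.0, 1.0, 0.0, -1.0, 0.0]                  # extra variation in the mediator
M = [2.0 * x + u for x, u in zip(X, U)]
Y = [3.0 * m + 1.0 * x for m, x in zip(M, X)]   # b = 3 (M->Y), direct c' = 1 by construction

def center(v):
    mu = sum(v) / len(v)
    return [vi - mu for vi in v]

x, m, y = center(X), center(M), center(Y)
dot = lambda p, q: sum(pi * qi for pi, qi in zip(p, q))

a = dot(x, m) / dot(x, x)     # a-path: X -> M
c = dot(x, y) / dot(x, x)     # total effect: X -> Y
# Regress Y on M and X jointly (2x2 normal equations, Cramer's rule).
det = dot(m, m) * dot(x, x) - dot(x, m) ** 2
b = (dot(m, y) * dot(x, x) - dot(x, m) * dot(x, y)) / det        # b-path: M -> Y | X
c_prime = (dot(m, m) * dot(x, y) - dot(x, m) * dot(m, y)) / det  # direct effect

indirect = a * b
print(indirect, c_prime, c)   # total effect decomposes as c = c' + a*b
```

The identity `c = c' + a*b` is what "partial mediation" quantifies: here the indirect path carries most of the total effect.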

Language: English

Cited by

21

First-principles and machine-learning approaches for interpreting and predicting the properties of MXenes
José D. Gouveia, Tiago L. P. Galvão, Kais Iben Nassar

et al.

npj 2D Materials and Applications, Journal year: 2025, Issue 9(1)

Published: Feb. 1, 2025

MXenes are a versatile family of 2D inorganic materials with applications in energy storage, shielding, sensing, and catalysis. This review highlights computational studies using density functional theory and machine-learning approaches to explore their structure (stacking, functionalization, doping), properties (electronic, mechanical, magnetic), and application potential. Key advances and challenges are critically examined, offering insights into applying computational research to transition these materials from the lab to practical use.

Language: English

Cited by

4

OCT-based diagnosis of glaucoma and glaucoma stages using explainable machine learning
Md. Mahmudul Hasan, Jack Phu, Henrietta Wang

et al.

Scientific Reports, Journal year: 2025, Issue 15(1)

Published: Jan. 28, 2025

Abstract: Glaucoma poses a growing health challenge projected to escalate in the coming decades. However, current automated diagnostic approaches rely solely on black-box deep learning models, lacking explainability and trustworthiness. To address this issue, this study uses optical coherence tomography (OCT) images to develop an explainable artificial intelligence (XAI) tool for diagnosing and staging glaucoma, with a focus on its clinical applicability. A total of 334 normal and 268 glaucomatous eyes (86 early, 72 moderate, 110 advanced) were included; signal processing theory was employed, and model interpretability was rigorously evaluated. Leveraging SHapley Additive exPlanations (SHAP)-based global feature ranking and partial dependency analysis (PDA), estimated decision boundary cut-offs of the machine learning (ML) models were obtained, and a novel algorithm was developed to implement the XAI tool. Using the selected features, the ML models produce AUCs of 0.96 (95% CI: 0.95–0.98), 0.98 (0.96–1.00), and 1.00 (1.00–1.00), respectively, in differentiating normal, early, moderate, and advanced glaucoma patients. Overall, the models outperformed clinicians in detecting early-stage glaucoma, with 10.4–11.2% higher overall accuracy. The user-friendly software shows potential as a valuable tool for eye care practitioners, offering transparent and interpretable insights to improve clinical decision-making.
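The reported AUCs summarise how well model scores rank diseased above healthy eyes. As a reminder of what the metric measures, here is a minimal rank-based AUROC computation on made-up labels and scores (illustrative data only, not from the study):

```python
def auroc(labels, scores):
    # AUROC equals the probability that a randomly chosen positive case
    # receives a higher score than a randomly chosen negative case
    # (ties count as 1/2) -- the Mann-Whitney U interpretation.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]                       # 1 = glaucomatous (toy)
scores = [0.9, 0.8, 0.75, 0.7, 0.4, 0.3, 0.65, 0.6]     # model risk scores (toy)
print(auroc(labels, scores))
```

An AUC of 1.00, as reported for the advanced-stage comparison, means every diseased eye was scored above every healthy one on that task.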

Language: English

Cited by

3

The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study
Jonas Wanner, Lukas-Valentin Herm, Kai Heinrich

et al.

Electronic Markets, Journal year: 2022, Issue 32(4), pp. 2079–2102

Published: Oct. 23, 2022

Abstract: Contemporary decision support systems increasingly rely on artificial intelligence technology, such as machine learning algorithms, to form intelligent systems. These systems have human-like decision capacity for selected applications, based on a rationale that cannot be looked up conveniently and thus constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While lacking transparency has been said to hinder trust and enforce aversion towards these systems, studies that connect user trust to transparency and subsequently to acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We utilize the unified theory of acceptance and use of technology in information systems as well as explanation-related theories for the initial model. The proposed model is tested in an industrial maintenance workplace scenario, using experts as participants to represent the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating trust and the perception of performance.

Language: English

Cited by

55

Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements
Laith Alzubaidi, Aiman Al-Sabaawi, Jinshuai Bai

et al.

International Journal of Intelligent Systems, Journal year: 2023, Issue 2023, pp. 1–41

Published: Oct. 26, 2023

Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, the healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry a risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with the second question, the requirements that establish trustworthiness are explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthy AI for the most challenging applications? Regarding the last question, six different applications are discussed: environmental science, 5G-based IoT networks, robotics, architecture, engineering and construction, financial technology, and healthcare. The article emphasises the need to address trustworthiness in AI applications before their deployment in order to achieve the goal of AI for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for researchers seeking to develop trustworthy AI applications.

Language: English

Cited by

43

Personalising intravenous to oral antibiotic switch decision making through fair interpretable machine learning
W. Bolton, Richard Wilson, Mark Gilchrist

et al.

Nature Communications, Journal year: 2024, Issue 15(1)

Published: Jan. 13, 2024

Abstract: Antimicrobial resistance (AMR) and healthcare-associated infections pose a significant threat globally. One key prevention strategy is to follow antimicrobial stewardship practices, in particular, to maximise targeted oral therapy and reduce the use of indwelling vascular devices for intravenous (IV) administration. Appreciating when an individual patient can switch from IV to oral antibiotic treatment is often non-trivial and not standardised. To tackle this problem, we created a machine learning model to predict when a patient could switch, based on routinely collected clinical parameters. 10,362 unique intensive care unit stays were extracted and two informative feature sets identified. Our best model achieved a mean AUROC of 0.80 (SD 0.01) on the hold-out set while not being biased against individuals with protected characteristics. Interpretability methodologies were employed to create clinically useful visual explanations. In summary, our model provides individualised, fair, and interpretable predictions for when a patient could switch from IV to oral antibiotic treatment. Prospective evaluation of safety and efficacy is needed before such technology can be applied clinically.

Language: English

Cited by

17

Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology
Allison Chae, Michael S. Yao, Hersh Sagreiya

et al.

Radiology, Journal year: 2024, Issue 310(1)

Published: Jan. 1, 2024

Despite recent advancements in machine learning (ML) applications in health care, there have been few benefits and improvements to clinical medicine in the hospital setting. To facilitate the adaptation of ML methods, this review proposes a standardized framework for the step-by-step implementation of artificial intelligence into the clinical practice of radiology that focuses on three key components: problem identification, stakeholder alignment, and pipeline integration. A review of the literature and empirical evidence in radiologic imaging justifies this approach, and a discussion on structuring implementation efforts is offered to help other practices leverage ML to improve patient care. Clinical trial registration no. 04242667. © RSNA, 2024

Language: English

Cited by

15

Explainable and Interpretable Machine Learning for Antimicrobial Stewardship: Opportunities and Challenges
Daniele Roberto Giacobbe, Cristina Marelli, Sabrina Guastavino

et al.

Clinical Therapeutics, Journal year: 2024, Issue 46(6), pp. 474–480

Published: March 21, 2024

Language: English

Cited by

14

Recent advancements and applications of deep learning in heart failure: A systematic review
Georgios Petmezas, Vasileios E. Papageorgiou, Vasileios Vassilikos

et al.

Computers in Biology and Medicine, Journal year: 2024, Issue 176, pp. 108557–108557

Published: May 7, 2024

Language: English

Cited by

13

Is this AI sexist? The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust

Tsung-Yu Hou, Yu-Chia Tseng, Chien Wen Yuan

et al.

International Journal of Information Management, Journal year: 2024, Issue 76, pp. 102775–102775

Published: March 16, 2024

Language: English

Cited by

11