Enhancing winter road maintenance with explainable AI: SHAP analysis for interpreting machine learning models in road friction estimation
X. X. Ding, Tae J. Kwon

Canadian Journal of Civil Engineering, Year: 2024, Vol. 51(5), pp. 529–544

Published: Jan. 11, 2024

Effective winter road maintenance relies on precise road friction estimation. Machine learning (ML) models have shown significant promise in this task; however, their inherent complexity makes understanding their inner workings challenging. This paper addresses this issue by conducting a comparative analysis of friction estimation using four ML methods: regression tree, random forest, eXtreme Gradient Boosting (XGBoost), and support vector regression (SVR). We then employ the SHapley Additive exPlanations (SHAP) explainable artificial intelligence (AI) approach to enhance model interpretability. Our analysis of an Alberta dataset reveals that XGBoost performs best, with an accuracy of 91.39%. The SHAP analysis illustrates logical relationships between predictor features within all three tree-based models, but it also uncovers inconsistencies in the SVR model, potentially attributed to insufficient feature interactions. Thus, this study not only showcases the role of explainable AI in improving model interpretability for friction estimation, but also provides practical insights that could improve winter road maintenance decisions.
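
For readers who want to see what such a workflow looks like in practice, the following is a minimal sketch, not the authors' code: it assumes a hypothetical tabular dataset with made-up weather features (the paper's Alberta data and feature set are not reproduced here), fits an XGBoost regressor, and uses SHAP's TreeExplainer to rank feature contributions.

```python
# Minimal sketch, not the authors' code: XGBoost friction regression + SHAP attribution.
# The feature names and synthetic data below are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "air_temp_c": rng.normal(-5.0, 6.0, n),
    "surface_temp_c": rng.normal(-3.0, 5.0, n),
    "humidity_pct": rng.uniform(40.0, 100.0, n),
    "wind_speed_kmh": rng.uniform(0.0, 40.0, n),
})
# Synthetic friction target in [0, 1], loosely tied to the features above.
y = np.clip(0.6 + 0.02 * X["surface_temp_c"] - 0.002 * X["humidity_pct"]
            + rng.normal(0.0, 0.05, n), 0.0, 1.0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles such as XGBoost.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```

The same pattern extends to random forest and regression tree models; a kernel-based model such as SVR would instead need the slower, model-agnostic KernelExplainer.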

Language: English

Uncertainty in XAI: Human Perception and Modeling Approaches
Teodor Chiaburu, Frank Haußer, Felix Bießmann

et al.

Machine Learning and Knowledge Extraction, Year: 2024, Vol. 6(2), pp. 1170–1192

Published: May 27, 2024

Artificial Intelligence (AI) plays an increasingly integral role in decision-making processes. In order to foster trust in AI predictions, many approaches towards explainable AI (XAI) have been developed and evaluated. Surprisingly, one factor that is essential for trust has been underrepresented in XAI research so far: uncertainty, both with respect to how it is modeled in Machine Learning (ML) and how it is perceived by humans relying on AI assistance. This review paper provides an in-depth analysis of both aspects. We review established and recent methods to account for uncertainty in ML models, and we discuss empirical evidence on how model uncertainty is perceived by human users of AI systems. We summarize the methodological advancements and limitations in both uncertainty modeling and human uncertainty perception. Finally, we discuss the implications of the current state of the art for the development of XAI. We believe that highlighting the role of uncertainty will be helpful to practitioners and researchers and could ultimately support a more responsible use of AI in practical applications.
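
To make the modeling side of that discussion concrete, here is a minimal, hypothetical sketch of one common family of approaches such a review surveys, ensemble-based uncertainty estimation; it is illustrative only and not taken from the paper.

```python
# Minimal illustrative sketch (not from the paper): ensemble spread as a
# simple proxy for predictive uncertainty in a regression model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(0.0, 0.2, 500)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Querying each tree separately yields a distribution of predictions per input;
# the standard deviation across trees flags inputs the ensemble is unsure about.
X_new = np.array([[0.0], [2.9]])
per_tree = np.stack([tree.predict(X_new) for tree in forest.estimators_])
mean = per_tree.mean(axis=0)
std = per_tree.std(axis=0)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x={x:+.1f}  prediction={m:.2f}  spread={s:.2f}")
```

How such spreads should be communicated so that human users calibrate their trust appropriately is the perception question the review pairs with the modeling one.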

Language: English

Cited by

5

Managing workplace AI risks and the future of work
John Howard, Paul A. Schulte

American Journal of Industrial Medicine, Year: 2024, Vol. unknown

Published: Sep. 2, 2024

Artificial intelligence (AI), the field of computer science that designs machines to perform tasks typically requiring human intelligence, has seen rapid advances in the development of foundation systems such as large language models. In the workplace, the adoption of AI technologies can result in a broad range of hazards and risks to workers, as illustrated by the recent growth of industrial robotics and algorithmic management. Sources of risk from the deployment of AI across society and the workplace have led numerous government and private sector guidelines to propose principles governing the design and use of trustworthy and ethical AI. As AI capabilities become integrated into devices, machines, and industry sectors, employers and occupational safety and health practitioners will be challenged to manage risks to worker health, safety, and well-being. Five management options are presented as ways to assure the safe integration of AI-enabled devices, machinery, and work processes. AI will play a significant role in the future of work. The practice and research communities need to ensure that the promise of these new technologies results in benefit, not harm, to workers.

Language: English

Cited by

5

Prediction of wear amounts of AZ91 magnesium alloy matrix composites reinforced with ZnO-hBN nanocomposite particles by hybridized GA-SVR model
Cevher Kürşat Macit, Busra Tan Saatci, M. Gökhan Albayrak

et al.

Journal of Materials Science, Year: 2024, Vol. unknown

Published: Sep. 26, 2024

Language: English

Cited by

5

Explainable Image Classification: The Journey So Far and the Road Ahead
Vidhya Kamakshi, Narayanan C. Krishnan

AI, Year: 2023, Vol. 4(3), pp. 620–651

Published: Aug. 1, 2023

Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze the various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained classifiers and those trained under allied paradigms, and applying neural architecture search techniques to minimize the accuracy–interpretability tradeoff. This paper provides a comprehensive overview of the state of the art, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.
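
As a concrete anchor for the post-hoc methods such a survey covers, the following is a minimal sketch, not drawn from the paper, of one of the simplest explanation techniques for image classifiers: a vanilla gradient saliency map. The untrained ResNet-18 and random input below are stand-ins for a real model and image.

```python
# Minimal illustrative sketch (not from the survey): vanilla gradient saliency
# for an image classifier. Uses a randomly initialised ResNet-18 as a stand-in.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None)  # in practice, load pretrained or task-specific weights
model.eval()

# A random tensor stands in for a preprocessed input image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)
top_class = scores.argmax(dim=1)

# Gradient of the winning class score w.r.t. the input pixels; large magnitudes
# mark pixels whose perturbation would most change the prediction.
scores[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values  # collapse colour channels -> (1, 224, 224)
print(saliency.shape)
```

Gradient-based maps are cheap but can be noisy; perturbation- and concept-based methods trade extra computation for potentially more faithful explanations.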

Language: English

Cited by

11
