Local interpretation techniques for machine learning methods: Theoretical background, pitfalls and interpretation of LIME and Shapley values (Open Access)
Mirka Henninger, Carolin Strobl

Published: Nov. 14, 2023

Machine learning models have recently become popular in psychological research. However, many machine learning models lack the interpretable parameters that researchers in psychology are used to from parametric models, such as linear or logistic regression. To gain insights into how a machine learning model has made its predictions, different interpretation techniques have been proposed. In this article, we focus on two local interpretation techniques that are widely used in machine learning: Local Interpretable Model-Agnostic Explanations (LIME) and Shapley values. LIME aims at explaining predictions in the close neighborhood of a specific person. Shapley values can be understood as a measure of predictor relevance, quantifying the contribution of the predictor variables to the predictions for specific persons. Using illustrative, simulated examples, we explain the idea behind LIME and Shapley values, demonstrate their characteristics, and discuss challenges that might arise in their application and interpretation. For LIME, we show that the choice of the neighborhood size may impact the conclusions. For Shapley values, we show how they can be interpreted individually for a person of interest as well as jointly across persons. The aim of this article is to support researchers in safely using these techniques themselves, but also in critically evaluating interpretations when they encounter these techniques in research articles.
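To make the two ideas concrete, here is a minimal, self-contained Python sketch, not taken from the article: a LIME-style weighted local surrogate and a permutation-sampling Shapley estimate, applied to a toy random forest. The data, the kernel width `width`, and the helper names `lime_sketch` and `shapley_sketch` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two local interpretation techniques discussed above.
# Everything below (data, model, kernel width, helper names) is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy setting: a black-box model trained on three predictors.
X = rng.normal(size=(500, 3))
y = X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(random_state=0).fit(X, y)
x0 = X[0]  # the "specific person" whose prediction we want to explain


def lime_sketch(model, x0, width=0.75, n=1000):
    """LIME-style surrogate: perturb x0, weight samples by proximity,
    and fit a weighted linear model whose coefficients serve as the
    local explanation. `width` controls the neighborhood size."""
    Z = x0 + rng.normal(scale=1.0, size=(n, len(x0)))        # perturbations
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / width ** 2)  # proximity weights
    surrogate = LinearRegression().fit(Z, model.predict(Z), sample_weight=w)
    return surrogate.coef_


def shapley_sketch(model, x0, X_bg, n_perm=200):
    """Shapley values by permutation sampling: average the change in the
    prediction when each feature "joins" a random coalition; absent
    features are filled in from a background row (marginal sampling)."""
    p = len(x0)
    phi = np.zeros(p)
    for _ in range(n_perm):
        z = X_bg[rng.integers(len(X_bg))].copy()  # random background person
        prev = model.predict(z.reshape(1, -1))[0]
        for j in rng.permutation(p):
            z[j] = x0[j]                          # feature j joins the coalition
            cur = model.predict(z.reshape(1, -1))[0]
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm


print("local surrogate coefficients:", lime_sketch(model, x0))
print("approximate Shapley values:  ", shapley_sketch(model, x0, X))
```

Shrinking or widening `width` changes the neighborhood the surrogate is fit to, which is exactly the sensitivity to neighborhood size the abstract warns about; the Shapley estimates can be read individually for one person (here `x0`) or aggregated, e.g. as mean absolute values, across persons.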

Language: English

Current Research and Application Status of Affective Computing in Human-Computer Interaction: A Bibliometric Study
Yiran Zhao, Jun Wang

Lecture Notes in Computer Science, Journal year: 2025, Issue: unknown, Pages: 326-343

Published: Jan. 1, 2025

Language: English

Cited by: 0

Interpreting machine learning predictions with LIME and Shapley values: theoretical insights, challenges, and meaningful interpretations
Mirka Henninger, Carolin Strobl

Behaviormetrika, Journal year: 2024, Issue: unknown

Published: Dec. 27, 2024

Language: English

Cited by: 2

Local interpretation techniques for machine learning methods: Theoretical background, pitfalls and interpretation of LIME and Shapley values (Open Access)
Mirka Henninger, Carolin Strobl

Published: Nov. 14, 2023

Language: English

Cited by: 3