Published: Nov. 14, 2023
Machine learning models have recently become popular in psychological research. However, many machine learning models lack the interpretable parameters that researchers from psychology are used to from parametric models, such as linear or logistic regression. To gain insights into how a model has made its predictions, different interpretation techniques have been proposed. In this article, we focus on two local interpretation techniques widely used in machine learning: Local Interpretable Model-Agnostic Explanations (LIME) and Shapley values. LIME aims at explaining predictions in the close neighborhood of a specific person. Shapley values can be understood as a measure of predictor relevance, quantifying the contribution of variables to the prediction for individual persons. Using illustrative, simulated examples, we explain the ideas behind LIME and Shapley values, demonstrate their characteristics, and discuss challenges that might arise in their application and interpretation. For LIME, the choice of neighborhood size may impact the conclusions. For Shapley values, we show how they can be interpreted individually for a person of interest or jointly across persons. The aim of this article is to support readers in safely using these techniques themselves, but also in critically evaluating such interpretations when they encounter them in research articles.
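To make the idea behind Shapley values concrete, the following is a minimal sketch (not taken from the article) of exact Shapley values for a single person's prediction. The model, its weights, and the baseline values are hypothetical; for a linear model, each predictor's Shapley value reduces to its weight times the deviation of the person's value from the baseline, which makes the result easy to check.

```python
import itertools
import math

# Hypothetical linear model: the prediction is a weighted sum of three predictors.
WEIGHTS = [2.0, -1.0, 0.5]

def model(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def shapley_values(x, baseline):
    """Exact Shapley values for one person: the average marginal contribution
    of each predictor over all subsets of the other predictors, where absent
    predictors are set to their baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                # Prediction with only the predictors in `subset` present.
                z = list(baseline)
                for j in subset:
                    z[j] = x[j]
                without_i = model(z)
                # Prediction after additionally including predictor i.
                z[i] = x[i]
                with_i = model(z)
                # Shapley kernel weight for a coalition of this size.
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                phi[i] += weight * (with_i - without_i)
    return phi

person = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(person, baseline)
# For a linear model, phi_i = w_i * (x_i - baseline_i): here [2.0, -2.0, 1.5].
print(phi)
# Efficiency property: contributions sum to the prediction minus the
# baseline prediction.
print(sum(phi), model(person) - model(baseline))
```

The exhaustive enumeration scales as O(2^n) in the number of predictors, which is why practical implementations rely on sampling-based approximations; this sketch is only meant to show what the values measure.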
Language: English