Interpretable machine learning for psychological research: Opportunities and pitfalls.
Psychological Methods (IF 10.929) | Pub Date: 2023-05-25 | DOI: 10.1037/met0000560
Mirka Henninger, Rudolf Debelak, Yannick Rothacher, Carolin Strobl

In recent years, machine learning methods have become increasingly popular prediction methods in psychology. At the same time, psychological researchers are typically interested not only in making predictions about the dependent variable, but also in learning which predictor variables are relevant, how they influence the dependent variable, and which predictors interact with each other. However, most machine learning methods are not directly interpretable. Interpretation techniques that help researchers describe how a machine learning model arrives at its predictions may be a means to this end. We present a variety of interpretation techniques and illustrate the opportunities they provide for interpreting the results of two widely used black box machine learning methods that serve as our examples: random forests and neural networks. At the same time, we illustrate potential pitfalls and risks of misinterpretation that may occur in certain data settings. We show how correlated predictors affect interpretations of the relevance and shape of predictor effects, and in which situations interaction effects may or may not be detected. We use simulated didactic examples throughout the article, as well as an empirical data set, to illustrate an approach for making the interpretation of visualizations more objective. We conclude that, when applied with critical reflection, interpretable machine learning techniques may provide useful tools for describing complex psychological relationships. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
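To make the techniques named in the abstract concrete, the following is a minimal illustrative sketch, not the authors' own code: it assumes Python with scikit-learn and probes a random forest using permutation importance (predictor relevance) and partial dependence (effect shape) on simulated data with two correlated predictors, the kind of setting the article flags as a pitfall. All variable names, data, and model settings are illustrative assumptions.

# A minimal sketch (not the authors' code): probing a random forest with
# permutation importance and partial dependence via scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, partial_dependence

rng = np.random.default_rng(0)
n = 500
# Two correlated predictors (x1, x2) plus an independent one (x3) -- the
# setting in which, as the article cautions, importance measures can mislead.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
# The true model uses x1 and x3 only, including an x1-by-x3 interaction;
# x2 is a correlated bystander with no effect of its own.
y = x1 + 0.5 * x3 + x1 * x3 + rng.normal(scale=0.5, size=n)

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Predictor relevance: average drop in R^2 when one column is permuted.
imp = permutation_importance(forest, X, y, n_repeats=20, random_state=0)
for name, mean in zip(["x1", "x2", "x3"], imp.importances_mean):
    print(f"{name}: permutation importance = {mean:.3f}")

# Effect shape: average prediction as x1 varies, marginalized over the
# observed values of the other predictors. Note that a one-dimensional
# partial dependence curve averages the x1-by-x3 interaction away; a
# two-way version (features=[(0, 2)]) would be needed to reveal it.
pd_result = partial_dependence(forest, X, features=[0], grid_resolution=20)
print("Partial dependence of the prediction on x1:")
print(np.round(pd_result["average"][0], 2))

Because the forest may split on x2 as a proxy for x1, x2 can show nonzero permutation importance even though it plays no role in the data-generating model; this bleeding of relevance between correlated predictors is exactly the misinterpretation risk the article examines.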
