Interpretable and explainable machine learning: A methods-centric overview with concrete examples
WIREs Data Mining and Knowledge Discovery (IF 7.8), Pub Date: 2023-02-28, DOI: 10.1002/widm.1493
Ričards Marcinkevičs, Julia E. Vogt

Interpretability and explainability are crucial for machine learning (ML) and statistical applications in medicine, economics, law, and natural sciences and form an essential principle for ML model design and development. Although interpretability and explainability have escaped a precise and universal definition, many models and techniques motivated by these properties have been developed over the last 30 years, with the focus currently shifting toward deep learning. We will consider concrete examples of state-of-the-art, including specially tailored rule-based, sparse, and additive classification models, interpretable representation learning, and methods for explaining black-box models post hoc. The discussion will emphasize the need for and relevance of interpretability and explainability, the divide between them, and the inductive biases behind the presented “zoo” of interpretable models and explanation methods.
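Among the method families the abstract names, post-hoc explanation of black-box models is perhaps the easiest to illustrate concretely. The sketch below (not taken from the paper; all names are illustrative) shows one simple post-hoc technique, permutation feature importance: a feature's importance is estimated as the drop in accuracy when that feature's column is randomly shuffled, breaking its association with the label while leaving the model untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on feature 0; features 1 and 2 are noise.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def black_box(X):
    # Stand-in for any opaque classifier we can only query for predictions.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    """Mean accuracy drop when each feature column is shuffled."""
    base_acc = (predict(X) == y).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-label link
            drops.append(base_acc - (predict(Xp) == y).mean())
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(black_box, X, y)
# Feature 0 dominates; the noise features get importance near zero.
```

Because it only needs prediction access, this kind of method applies to any model post hoc; the survey's point is that such explanations come with their own inductive biases and are distinct from models that are interpretable by construction (rule-based, sparse, or additive classifiers).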

Updated: 2023-02-28