How to Build Self-Explaining Fuzzy Systems: From Interpretability to Explainability [AI-eXplained]
IEEE Computational Intelligence Magazine (IF 9), Pub Date: 2024-01-08, DOI: 10.1109/mci.2023.3328098
Ilia Stepin, Muhammad Suffian, Alejandro Catala, Jose M. Alonso-Moral

Fuzzy systems are known to provide not only accurate but also interpretable predictions. However, their explainability may be undermined if linguistic terms that are not semantically grounded are used. Additional non-trivial challenges arise if a prediction is to be explained counterfactually, i.e., in terms of hypothetical, non-predicted outputs. In this paper, we explore how both factual and counterfactual automated explanations can justify the output of fuzzy rule-based classifiers, and thus contribute to making them more trustworthy. Moreover, we demonstrate how end-user preferences can be handled by customizing automated explanations, making them interactive, personalized, and therefore human-centric. The full immersive article at IEEE Xplore provides detailed interactive examples for better understanding.
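
To make the distinction between factual and counterfactual explanations concrete, below is a minimal Python sketch of a one-input fuzzy rule-based classifier that produces both kinds of explanation. The linguistic terms, rule base, class labels, and the greedy counterfactual search are all invented for illustration; they are not the rule base or the algorithm proposed in the paper.

```python
# Toy fuzzy rule-based classifier with factual and counterfactual explanations.
# Everything below (terms, rules, classes, search) is a hypothetical example.

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b;
    degenerate feet (a == b or b == c) yield shoulder-shaped terms."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy partition of a single input variable "size" (hypothetical).
TERMS = {
    "small":  lambda x: tri(x, 0.0, 0.0, 5.0),
    "medium": lambda x: tri(x, 2.5, 5.0, 7.5),
    "large":  lambda x: tri(x, 5.0, 10.0, 10.0),
}

# Rule base: IF size IS <term> THEN <class> (hypothetical rules).
RULES = [("small", "reject"), ("medium", "review"), ("large", "accept")]

def classify(x):
    """Predict the class of the rule with the highest firing strength."""
    fired = [(TERMS[term](x), term, cls) for term, cls in RULES]
    strength, term, cls = max(fired, key=lambda f: f[0])
    return cls, term, strength

def factual_explanation(x):
    """Justify the prediction by the rule that actually fired strongest."""
    cls, term, strength = classify(x)
    return f"Predicted '{cls}' because size IS {term} (degree {strength:.2f})."

def counterfactual_explanation(x, step=0.1, max_delta=5.0):
    """Greedy scan for the smallest input change that flips the class."""
    cls0, _, _ = classify(x)
    delta = step
    while delta <= max_delta:
        for x_cf in (x - delta, x + delta):
            cls, term, _ = classify(x_cf)
            if cls != cls0:
                return (f"Had size been {x_cf:.1f} (i.e., {term}), the "
                        f"prediction would have been '{cls}', not '{cls0}'.")
        delta += step
    return "No counterfactual found within the search range."

print(factual_explanation(4.0))        # factual: why 'review' was predicted
print(counterfactual_explanation(4.0)) # counterfactual: what would flip it
```

For an input of 4.0, the sketch predicts 'review' because "size IS medium" fires most strongly (the factual explanation), and it reports that lowering the input to about 3.3, where "small" dominates, would have flipped the prediction to 'reject' (the counterfactual).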
