Explainable machine learning in cybersecurity: A survey
International Journal of Intelligent Systems (IF 7) Pub Date: 2022-11-03, DOI: 10.1002/int.23088
Feixue Yan 1,2, Sheng Wen 1, Surya Nepal 2, Cecile Paris 3, Yang Xiang 1

Machine learning (ML) techniques are increasingly important in cybersecurity, as they can quickly analyse and identify different types of threats among millions of events. Despite the growing number of possible applications of ML, the successful adoption of ML models in cybersecurity still depends heavily on the explainability of the models used to make predictions. Explanations that support ML model outputs are crucial in cybersecurity-oriented ML applications because analysts need more information from a model than a bare binary output. Explainable models help ML developers address the “trust” problem for security predictions in a faithful way: validating model behaviours, diagnosing misclassifications, and sometimes automatically patching errors in the target models. Explainable ML for cybersecurity has therefore become a necessary and important research branch. In this paper, we present the topic of explainable ML in cybersecurity through two general types of explanations, (1) ante hoc explanation and (2) post hoc explanation, together with their methodologies. We systematically review and categorise the state-of-the-art research, and provide comparative studies to help researchers find optimal solutions to specific problems. We further list open issues in this field to facilitate future studies. This survey will benefit diverse readers from both academia and industry who want to use ML effectively to solve cybersecurity challenges.
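To make the two explanation types concrete: an ante hoc model is interpretable by construction, whereas a post hoc method explains an already-trained black-box model after the fact. The short Python sketch below illustrates this contrast on a synthetic "network events" dataset; the data, feature names, and library choices (scikit-learn, permutation feature importance) are illustrative assumptions and are not taken from the survey itself.

# Minimal sketch (illustrative only, not the survey's own code): contrast an
# ante hoc interpretable model with a post hoc explanation of a black box.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for labelled security events: features could represent
# packet rate, payload entropy, etc.; label 1 marks a malicious event.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
feature_names = [f"f{i}" for i in range(5)]

# (1) Ante hoc: a shallow decision tree is interpretable by construction;
# printing its decision rules is already the explanation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))

# (2) Post hoc: train a black-box model first, then explain it afterwards,
# here with permutation feature importance (one of many post hoc methods).
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[i]}: importance {result.importances_mean[i]:.3f}")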
