Fairness optimisation with multi-objective swarms for explainable classifiers on data streams
Complex & Intelligent Systems (IF 5.8) Pub Date: 2024-04-03, DOI: 10.1007/s40747-024-01347-w
Diem Pham, Binh Tran, Su Nguyen, Damminda Alahakoon, Mengjie Zhang

Recently, advanced AI systems equipped with sophisticated learning algorithms have emerged, enabling the processing of extensive streaming data for online decision-making in diverse domains. However, the widespread deployment of these systems has prompted concerns regarding potential ethical issues, particularly the risk of discrimination that can adversely impact certain community groups. This issue has proven challenging to address in the context of streaming data, where the data distribution can change over time, including the level of discrimination present in the data. In addition, transparent models such as decision trees are favoured in these applications because they illustrate the decision-making process. However, it is essential to keep such models compact, because the explainability of large models diminishes. Existing methods usually mitigate discrimination at the cost of accuracy; accuracy and discrimination can therefore be considered conflicting objectives, and current methods remain limited in controlling the trade-off between them. This paper proposes a method that incrementally learns classification models from streaming data and automatically adjusts the learnt models to balance multiple objectives simultaneously. The novelty of this research is a swarm-intelligence-based multi-objective algorithm that maximises accuracy while simultaneously minimising discrimination and model size. Experimental results on six real-world datasets show that the proposed algorithm evolves fairer and simpler classifiers while maintaining competitive accuracy compared to existing state-of-the-art methods tailored for streaming data.
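The abstract frames accuracy, discrimination, and model size as three conflicting objectives optimised simultaneously. A standard way to compare candidate classifiers under such a formulation is Pareto dominance, which multi-objective swarm algorithms typically rely on. The sketch below is illustrative only and assumes nothing about the paper's actual algorithm; the `Candidate` fields, the dominance test, and the example numbers are all hypothetical.

```python
# Illustrative sketch of the three-objective trade-off described in the
# abstract: maximise accuracy, minimise discrimination, minimise model size.
# All names and values here are hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    accuracy: float        # higher is better
    discrimination: float  # lower is better (e.g. a statistical parity gap)
    model_size: int        # lower is better (e.g. number of tree nodes)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on every objective
    and strictly better on at least one (standard Pareto dominance)."""
    at_least_as_good = (a.accuracy >= b.accuracy
                        and a.discrimination <= b.discrimination
                        and a.model_size <= b.model_size)
    strictly_better = (a.accuracy > b.accuracy
                       or a.discrimination < b.discrimination
                       or a.model_size < b.model_size)
    return at_least_as_good and strictly_better

def pareto_front(population):
    """Non-dominated candidates: the accuracy/fairness/size trade-off set
    from which a decision-maker would pick a final model."""
    return [c for c in population
            if not any(dominates(other, c) for other in population if other != c)]

population = [
    Candidate(accuracy=0.90, discrimination=0.10, model_size=120),
    Candidate(accuracy=0.88, discrimination=0.03, model_size=40),
    Candidate(accuracy=0.85, discrimination=0.03, model_size=80),  # dominated by the second
]
front = pareto_front(population)
```

The first two candidates survive because neither beats the other on all three objectives; the third is dominated by the second (lower accuracy, equal discrimination, larger model). A swarm-based optimiser would iterate variation and selection over such a population to push this front outward.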




Updated: 2024-04-03