On the Site of Predictive Justice
Noûs Pub Date: 2023-08-27, DOI: 10.1111/nous.12477
Seth Lazar, Jake Stone

Optimism about our ability to enhance societal decision-making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML-based decision-making, there can be moral grounds for the criticism of these predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrong.
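The paper's central notion, "differential model performance for systematically disadvantaged groups," can be made concrete with a small sketch. The following Python snippet is illustrative only and not from the paper: it fabricates synthetic labels and a binary group attribute, simulates a classifier that is noisier for one group, and reports per-group accuracy and false-positive rate, the kind of disparity the authors argue can be morally criticized independently of downstream effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (fabricated for illustration): binary labels and a
# binary group attribute for 1,000 individuals.
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)  # 0 = group A, 1 = group B

# Simulate a model whose predictions are noisier for group B,
# producing the disparity we want to measure.
noise_rate = np.where(group == 1, 0.30, 0.10)
flipped = rng.random(1000) < noise_rate
y_pred = np.where(flipped, 1 - y_true, y_true)

# Per-group accuracy and false-positive rate. A persistent gap on such
# metrics is one way to operationalize "differential model performance".
for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    acc = np.mean(y_true[mask] == y_pred[mask])
    negatives = y_true[mask] == 0
    fpr = np.mean(y_pred[mask][negatives] == 1)
    print(f"{name}: accuracy = {acc:.3f}, false-positive rate = {fpr:.3f}")
```

In expectation, accuracy is about 0.90 for group A and 0.70 for group B, since those are the simulated flip probabilities; which metric best captures the morally relevant disparity is precisely the sort of question the paper's theory of predictive justice addresses.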

Updated: 2023-08-28