Mitigating allocative tradeoffs and harms in an environmental justice data tool
Nature Machine Intelligence (IF 23.8), Pub Date: 2024-02-16, DOI: 10.1038/s42256-024-00793-y
Benjamin Q. Huynh, Elizabeth T. Chin, Allison Koenecke, Derek Ouyang, Daniel E. Ho, Mathew V. Kiang, David H. Rehkopf

Neighbourhood-level screening algorithms are increasingly being deployed to inform policy decisions. However, their potential for harm remains unclear: algorithmic decision-making has broadly fallen under scrutiny for disproportionate harm to marginalized groups, yet opaque methodology and proprietary data limit the generalizability of algorithmic audits. Here we leverage publicly available data to fully reproduce and audit a large-scale algorithm known as CalEnviroScreen, designed to promote environmental justice and guide public funding by identifying disadvantaged neighbourhoods. We observe the model to be both highly sensitive to subjective model specifications and financially consequential, estimating the effect of its positive designations as a 104% (62–145%) increase in funding, equivalent to US$2.08 billion (US$1.56–2.41 billion) over four years. We further observe allocative tradeoffs and susceptibility to manipulation, raising ethical concerns. We recommend incorporating technical strategies to mitigate allocative harm and accountability mechanisms to prevent misuse.
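The sensitivity finding can be made concrete with a minimal sketch. The snippet below, which is not the authors' audit code, assumes a CalEnviroScreen-style composite built from two percentile-scaled component scores and a top-quartile designation cutoff, and uses synthetic tract data; it shows how one subjective specification choice (multiplicative versus additive aggregation) can be probed by counting how many designations flip.

```python
"""Illustrative sketch (synthetic data, assumed aggregation rules and cutoff):
probing how designations shift under alternative model specifications."""
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_tracts = 8000  # roughly the number of California census tracts

# Synthetic percentile-scaled component scores (0-100) per tract.
tracts = pd.DataFrame({
    "pollution_burden": rng.uniform(0, 100, n_tracts),
    "population_characteristics": rng.uniform(0, 100, n_tracts),
})

def designate(df: pd.DataFrame, aggregation: str, cutoff: float = 0.75) -> pd.Series:
    """Return a boolean 'disadvantaged' designation under a given aggregation rule."""
    if aggregation == "multiplicative":
        score = df["pollution_burden"] * df["population_characteristics"]
    elif aggregation == "additive":
        score = df["pollution_burden"] + df["population_characteristics"]
    else:
        raise ValueError(f"unknown aggregation: {aggregation}")
    # Designate tracts above the chosen percentile of the composite score.
    return score >= score.quantile(cutoff)

mult = designate(tracts, "multiplicative")
add = designate(tracts, "additive")

# Fraction of tracts whose designation flips under the alternative specification.
flipped = (mult != add).mean()
print(f"Designations changed for {flipped:.1%} of tracts")
```

Because designations gate funding eligibility, even a modest flip rate under an equally defensible specification translates into a reallocation of the downstream dollars the study quantifies.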




Updated: 2024-02-19