Responsibility gaps and self-interest bias: People attribute moral responsibility to AI for their own but not others' transgressions
Journal of Experimental Social Psychology ( IF 3.532 ) Pub Date : 2023-12-20 , DOI: 10.1016/j.jesp.2023.104584
Mengchen Dong , Konrad Bocian

Over the last decade, the ambiguity and difficulty of attributing responsibility to AI and human stakeholders (i.e., responsibility gaps) have become increasingly relevant and are most often discussed in extreme cases (e.g., autonomous weapons). Building on related philosophical debates, the current research provides empirical evidence on the importance of bridging responsibility gaps from a psychological and motivational perspective. In three pre-registered studies (N = 1259), we examined moral judgments in hybrid moral situations, where both a human and an AI were involved as moral actors and were arguably responsible for a moral consequence. We found that people consistently showed a self-interest bias in the evaluation of hybrid transgressions, such that they judged the human actors more leniently when those actors were depicted as themselves (vs. others; Studies 1 and 2) or as ingroup (vs. outgroup; Study 3) members. Moreover, this bias did not necessarily emerge when moral actors caused positive (rather than negative) moral consequences (Study 2), and it could be accounted for by flexible responsibility attribution to AI (i.e., ascribing more responsibility to AI when judging the self rather than others; Studies 1 and 2). The findings suggest that people may dynamically exploit the "moral wiggle room" in hybrid moral situations, reasoning about AI's responsibility in ways that serve their self-interest.



Updated: 2023-12-21