Unavoidable social contagion of false memory from robots to humans.
American Psychologist (IF 16.4), Pub Date: 2023-11-20, DOI: 10.1037/amp0001230
Tsung-Ren Huang, Yu-Lan Cheng, Suparna Rajaram

Many of us interact with voice- or text-based conversational agents daily, but these agents may unintentionally retrieve misinformation from human knowledge databases, confabulate responses on their own, or purposefully spread disinformation for political purposes. Does such misinformation or disinformation become part of our memory and further misguide our decisions? If so, can we prevent humans from suffering such social contagion of false memory? Using the social contagion of memory paradigm, we precisely controlled a social robot as an example of these emerging conversational agents. In two experiments (ΣN = 120), the social robot occasionally misinformed participants prior to a recognition memory task. We found that the robot was as powerful as humans at influencing others. Although the supplied misinformation was emotion- and value-neutral, and hence not intrinsically contagious or memorable, 77% of the socially misinformed words became the participants' false memories. To mitigate such social contagion of false memory, the robot also forewarned participants of its reservations about the misinformation. However, one-time forewarnings failed to reduce false memory contagion. Even relatively frequent, item-specific forewarnings could not prevent warned items from becoming false memories, although such forewarnings did increase the participants' overall cautiousness. We therefore recommend designing conversational agents that, at best, avoid providing uncertain information or, at least, provide frequent forewarnings about potentially false information. (PsycInfo Database Record (c) 2023 APA, all rights reserved).

Updated: 2023-11-20