Human-AI joint task performance: Learning from uncertainty in autonomous driving systems
Information and Organization (IF 5.387) Pub Date: 2024-01-30, DOI: 10.1016/j.infoandorg.2024.100502
Panos Constantinides, Eric Monteiro, Lars Mathiassen

High uncertainty tasks such as making a medical diagnosis, judging a criminal justice case, and driving in a big city have a very low margin for error because of the potentially devastating consequences for human lives. In this paper, we focus on how humans learn from uncertainty while performing a high uncertainty task with AI systems. We analyze Tesla's autonomous driving systems (ADS), a type of AI system, drawing on crash investigation reports, published reports on formal simulation tests, and YouTube recordings of informal simulation tests by amateur drivers. Our empirical analysis provides insights into how varied levels of uncertainty tolerance shape how humans learn from uncertainty, both in real time and over time, to jointly perform the driving task with Tesla's ADS. Our core contribution is a theoretical model that explains human-AI joint task performance. Specifically, we show that the interdependencies between different modes of AI use, including uncontrolled automation, limited automation, expanded automation, and controlled automation, are dynamically shaped through humans' learning from uncertainty. We discuss how humans move between these modes of AI use by increasing, reducing, or reinforcing their uncertainty tolerance. We conclude by discussing implications for the design of AI systems, for policy on delegation in joint task performance, and for the use of data to improve learning from uncertainty.
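To make the four modes of AI use concrete, the minimal Python sketch below encodes them as states and treats movement between them as driven by shifts in uncertainty tolerance, as described in the abstract. The ordering of the modes and the specific transition rule are illustrative assumptions made here, not part of the authors' model; names such as AIUseMode and next_mode are hypothetical.

```python
from enum import Enum, auto

class AIUseMode(Enum):
    """The four modes of AI use named in the paper's model."""
    CONTROLLED_AUTOMATION = auto()
    LIMITED_AUTOMATION = auto()
    EXPANDED_AUTOMATION = auto()
    UNCONTROLLED_AUTOMATION = auto()

# Assumed ordering from least to most delegation to the ADS (our interpretation).
ORDER = [
    AIUseMode.CONTROLLED_AUTOMATION,
    AIUseMode.LIMITED_AUTOMATION,
    AIUseMode.EXPANDED_AUTOMATION,
    AIUseMode.UNCONTROLLED_AUTOMATION,
]

def next_mode(current: AIUseMode, tolerance_shift: str) -> AIUseMode:
    """Hypothetical transition rule: humans move between modes by
    increasing, reducing, or reinforcing their uncertainty tolerance."""
    i = ORDER.index(current)
    if tolerance_shift == "reinforce":   # reinforcing tolerance keeps the current mode
        return current
    if tolerance_shift == "increase":    # more tolerance -> more delegation to the ADS
        return ORDER[min(i + 1, len(ORDER) - 1)]
    if tolerance_shift == "reduce":      # less tolerance -> the human reclaims control
        return ORDER[max(i - 1, 0)]
    raise ValueError(f"unknown tolerance shift: {tolerance_shift}")

# Example: a driver who reduces uncertainty tolerance after a near-miss
print(next_mode(AIUseMode.EXPANDED_AUTOMATION, "reduce"))  # AIUseMode.LIMITED_AUTOMATION
```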




Updated: 2024-01-31