From task structures to world models: what do LLMs know?
Trends in Cognitive Sciences (IF 19.9), Pub Date: 2024-03-04, DOI: 10.1016/j.tics.2024.02.008
Ilker Yildirim, L.A. Paul

In what sense does a large language model (LLM) have knowledge? We answer by granting LLMs ‘instrumental knowledge’: knowledge gained by using next-word generation as an instrument. We then ask how instrumental knowledge is related to the ordinary, ‘worldly knowledge’ exhibited by humans, and explore this question in terms of the degree to which instrumental knowledge can be said to incorporate the structured world models of cognitive science. We discuss ways LLMs could recover degrees of worldly knowledge and suggest that such recovery will be governed by an implicit, resource-rational tradeoff between world models and tasks. Our answer to this question extends beyond the capabilities of a particular AI system and challenges assumptions about the nature of knowledge and intelligence.

Updated: 2024-03-04