Large language models for preventing medication direction errors in online pharmacies
Nature Medicine (IF 82.9), Pub Date: 2024-04-25, DOI: 10.1038/s41591-024-02933-8
Cristobal Pais , Jianfeng Liu , Robert Voigt , Vin Gupta , Elizabeth Wade , Mohsen Bayati

Errors in pharmacy medication directions, such as incorrect instructions for dosage or frequency, can increase patient safety risk substantially by raising the chances of adverse drug events. This study explores how integrating domain knowledge with large language models (LLMs)—capable of sophisticated text interpretation and generation—can reduce these errors. We introduce MEDIC (medication direction copilot), a system that emulates the reasoning of pharmacists by prioritizing precise communication of the core clinical components of a prescription, such as dosage and frequency. It fine-tunes a first-generation LLM on 1,000 expert-annotated and augmented directions from Amazon Pharmacy to extract the core components, then assembles them into complete directions using pharmacy logic and safety guardrails. We compared MEDIC against two LLM-based benchmarks: one leveraging 1.5 million medication directions and the other using state-of-the-art LLMs. On 1,200 expert-reviewed prescriptions, the two benchmarks respectively recorded 1.51 (confidence interval (CI) 1.03, 2.31) and 4.38 (CI 3.13, 6.64) times more near-miss events—errors caught and corrected before reaching the patient—than MEDIC. Additionally, we tested MEDIC by deploying it within the production system of an online pharmacy, and during this experimental period it reduced near-miss events by 33% (CI 26%, 40%). This study shows that LLMs, with domain expertise and safeguards, improve the accuracy and efficiency of pharmacy operations.
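The abstract describes MEDIC as a two-stage pipeline: a fine-tuned LLM first extracts the core clinical components of a direction (dosage, frequency and so on), and deterministic pharmacy logic with safety guardrails then assembles those components into the final patient-facing text. The Python sketch below only illustrates that extract-then-assemble pattern; the component schema, the regex stand-in for the fine-tuned extraction model, the guardrail rules and all function names are hypothetical illustrations, not the system published in the paper.

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class DirectionComponents:
    """Core clinical components of a direction (illustrative schema, not the paper's)."""
    verb: str                      # e.g. "take", "apply"
    quantity: float                # amount per administration
    dose_unit: str                 # e.g. "tablet", "ml"
    route: str                     # e.g. "by mouth"
    frequency: str                 # e.g. "twice daily"
    max_daily_quantity: Optional[float] = None


def extract_components(text: str) -> DirectionComponents:
    """Toy stand-in for the fine-tuned extraction LLM: a single regex that only
    handles directions shaped like 'take 1 tablet by mouth twice daily'."""
    pattern = (r"(?i)^\s*(take|apply|inject)\s+([\d.]+)\s+(\w+)\s+"
               r"(by mouth|topically|subcutaneously)\s+(.+?)\s*\.?\s*$")
    match = re.match(pattern, text)
    if match is None:
        raise ValueError("unparseable direction")
    verb, qty, unit, route, freq = match.groups()
    return DirectionComponents(verb.lower(), float(qty), unit.lower(),
                               route.lower(), freq.lower())


def passes_guardrails(c: DirectionComponents) -> bool:
    """Deterministic safety checks run before a direction is released (assumed rules)."""
    if c.quantity <= 0:
        return False
    if c.max_daily_quantity is not None and c.quantity > c.max_daily_quantity:
        return False
    return True


def assemble_direction(c: DirectionComponents) -> str:
    """Template-based assembly of the patient-facing direction from structured parts."""
    return f"{c.verb.capitalize()} {c.quantity:g} {c.dose_unit} {c.route} {c.frequency}."


def generate_direction(raw_direction: str) -> str:
    """Extract-then-assemble pipeline; anything unparseable or unsafe is escalated."""
    try:
        components = extract_components(raw_direction)
    except ValueError:
        return "ESCALATE: route to pharmacist review"
    if not passes_guardrails(components):
        return "ESCALATE: route to pharmacist review"
    return assemble_direction(components)


if __name__ == "__main__":
    print(generate_direction("Take 1 tablet by mouth twice daily"))
    # -> Take 1 tablet by mouth twice daily.
    print(generate_direction("Take 0 tablet by mouth twice daily"))
    # -> ESCALATE: route to pharmacist review
```

Escalating unparseable or guardrail-failing directions to a pharmacist, rather than emitting a best guess, mirrors the near-miss interception behavior the abstract reports, but the escalation policy shown here is an assumption for illustration.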

Updated: 2024-04-25