GPT-4 Performance on Querying Scientific Publications: Reproducibility, Accuracy, and Impact of an Instruction Sheet

Kaiming Tao, Zachary A. Osman, Philip L. Tzou, Soo-Yon Rhee, Vineet Ahluwalia, Robert W. Shafer

Crossref (2024)

Abstract

Background: Large language models (LLMs) that can efficiently screen and identify studies fulfilling specific criteria, and that can extract data from publications, would streamline literature reviews and enhance knowledge discovery by lessening the burden on human reviewers.

Methods: We created an automated pipeline using the OpenAI GPT-4 32K API (version "2023-05-15") to evaluate the accuracy of GPT-4 when responding to queries about published papers on HIV drug resistance (HIVDR), with and without an instruction sheet. The instruction sheet contained specialized knowledge designed to assist a person answering questions about an HIVDR paper. We designed 60 questions pertaining to HIVDR and created markdown versions of 60 published HIVDR papers in PubMed. We presented the 60 papers to GPT-4 in four configurations: (1) all 60 questions simultaneously; (2) all 60 questions simultaneously with the instruction sheet; (3) each of the 60 questions individually; and (4) each of the 60 questions individually with the instruction sheet.

Results: GPT-4 achieved a mean accuracy of 86.9%, which was 24.0% higher than when the answers were permuted across papers. The overall recall and precision were 72.5% and 87.4%, respectively. The standard deviation of three replicates for the 60 questions ranged from 0 to 5.3%, with a median of 1.2%. The instruction sheet did not significantly increase GPT-4's accuracy, recall, or precision. GPT-4 was more likely to provide false-positive answers when the 60 questions were submitted individually than when they were submitted together.

Conclusions: GPT-4 reproducibly answered 3600 questions about 60 papers on HIVDR with moderately high accuracy, recall, and precision. The failure of the instruction sheet to improve these metrics suggests that more sophisticated prompt-engineering approaches, or fine-tuning of an open-source model, are required to further improve an LLM's ability to answer questions about highly specialized HIV drug resistance papers.
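The four query configurations described in the abstract can be sketched as prompt-construction logic. This is a minimal illustration, not the authors' pipeline: the helper name, prompt wording, and message layout are assumptions, since the abstract specifies only the API version and the batched-vs-individual and with-vs-without-instruction-sheet conditions.

```python
def build_prompts(paper_md, questions, instruction_sheet=None, batch=True):
    """Build chat-message payloads for querying GPT-4 about one paper.

    batch=True  -> one request containing all questions (configurations 1 and 2)
    batch=False -> one request per question (configurations 3 and 4)
    instruction_sheet, if provided, is prepended as specialized guidance
    (configurations 2 and 4).
    """
    # The markdown version of the paper is supplied as context.
    system = "Answer questions about the following paper.\n\n" + paper_md
    if instruction_sheet is not None:
        system = instruction_sheet + "\n\n" + system
    if batch:
        # One user message listing all questions at once.
        user_msgs = ["\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))]
    else:
        # One user message (and hence one API call) per question.
        user_msgs = list(questions)
    return [
        [{"role": "system", "content": system},
         {"role": "user", "content": u}]
        for u in user_msgs
    ]
```

Each returned message list would then be sent as one chat-completion request, so 60 papers with individual questions yields 3600 calls, matching the 3600 question-paper pairs reported in the abstract.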