Challenging the Chatbot: An Assessment of ChatGPT's Diagnoses and Recommendations for DBP Case Studies.

Rachel Kim, Alex Margolis, Joe Barile, Kyle Han, Saia Kalash, Helen Papaioannou, Anna Krevskaya, Ruth Milanaik

Journal of Developmental and Behavioral Pediatrics: JDBP (2024)

Abstract
OBJECTIVE: Chat Generative Pretrained Transformer-3.5 (ChatGPT) is a publicly available, free artificial intelligence chatbot that logs billions of visits per day; parents may rely on such tools for developmental and behavioral medical consultations. The objective of this study was to determine how ChatGPT evaluates developmental and behavioral pediatrics (DBP) case studies and makes recommendations and diagnoses.

METHODS: ChatGPT was asked to list treatment recommendations and a diagnosis for each of 97 DBP case studies. A panel of 3 DBP physicians evaluated ChatGPT's diagnostic accuracy and scored treatment recommendations on accuracy (5-point Likert scale) and completeness (3-point Likert scale). Physicians also assessed whether ChatGPT's treatment plan correctly addressed cultural and ethical issues for relevant cases. Scores were analyzed using Python, and descriptive statistics were computed.

RESULTS: The DBP panel agreed with ChatGPT's diagnosis for 66.2% of the case reports. Physicians rated the mean accuracy of ChatGPT's treatment plans at 4.6 (between entirely correct and more correct than incorrect) and the mean completeness at 2.6 (between complete and adequate). Physicians agreed that ChatGPT addressed the relevant cultural issues in 10 of the 11 applicable cases and the ethical issues in the single ethical case.

CONCLUSION: Although ChatGPT can generate a comprehensive and adequate list of recommendations, its diagnostic accuracy rate remains low. Physicians must advise patients to exercise caution when using such online sources.
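The abstract notes that scores were analyzed in Python with descriptive statistics. A minimal sketch of that kind of analysis, using entirely hypothetical Likert scores and agreement flags (not the study's data), might look like:

```python
from statistics import mean

# Hypothetical panel scores for illustration only -- the study's
# actual per-case data are not reported in the abstract.
accuracy_scores = [5, 5, 4, 5, 4, 5, 5, 4]      # 5-point accuracy Likert scale
completeness_scores = [3, 3, 2, 3, 2, 3, 3, 2]  # 3-point completeness Likert scale
diagnoses_agreed = [True, True, False, True, True, False, True, True]

# Descriptive statistics of the kind reported in the RESULTS section
mean_accuracy = mean(accuracy_scores)
mean_completeness = mean(completeness_scores)
agreement_rate = sum(diagnoses_agreed) / len(diagnoses_agreed) * 100

print(f"Mean accuracy score: {mean_accuracy:.1f}")
print(f"Mean completeness score: {mean_completeness:.1f}")
print(f"Diagnostic agreement: {agreement_rate:.1f}%")
```

With the hypothetical lists above, this prints a mean accuracy of 4.6, a mean completeness of 2.6, and a 75.0% agreement rate; the study's reported agreement was 66.2% over 97 cases.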