Evaluating the performance of ChatGPT in answering questions related to pediatric urology

Journal of Pediatric Urology (2024)

Abstract
Introduction: Artificial intelligence is advancing in various domains, including medicine, and its progress is expected to continue.
Objective: This study aimed to assess the accuracy and consistency of ChatGPT's responses to frequently asked questions related to pediatric urology.
Materials and methods: We collected frequently asked questions about pediatric urology from urology association websites, hospitals, and social media platforms. We also derived questions from the recommendation tables of the European Association of Urology (EAU) 2022 Guidelines on Pediatric Urology, using items rated at the strong recommendation level. All questions were systematically presented to the May 23 version of ChatGPT, and two expert urologists independently scored each response from 1 to 4.
Results: One hundred thirty-seven questions about pediatric urology were included in the study. Overall, 92.0% of the answers were completely correct; for questions based on the strong recommendations of the EAU guideline, the completely correct rate was 93.6%. No question was answered completely incorrectly. The similarity rates between answers to repeated questions ranged from 93.8% to 100%.
Conclusion: ChatGPT provided satisfactory responses to questions related to pediatric urology. Despite its limitations, this continuously evolving platform can be expected to play an important role in the healthcare industry.
Keywords
Artificial intelligence, Health literacy, Patient knowledge, Pediatric urology