Both Patients and Plastic Surgeons Prefer AI-Generated Microsurgical Information

Charlotte Berry, Alex Z Fazilat, Christopher V Lavin, Hendrik Lintel, Naomi A Cole, Cybil S Stingl, Caleb Valencia, Annah G Morgan, Arash Momeni, Derrick C Wan

Journal of Reconstructive Microsurgery (2024)

Abstract
BACKGROUND: With the growing relevance of AI-based patient-facing information, microsurgery-specific online information provided by professional organizations was compared with that of ChatGPT and assessed for accuracy, comprehensiveness, clarity, and readability. METHODS: Six plastic and reconstructive surgeons blindly assessed responses to ten microsurgery-related medical questions written either by the American Society of Reconstructive Microsurgery (ASRM) or by ChatGPT, rating accuracy, comprehensiveness, and clarity. Surgeons were asked to choose which source provided the highest-quality microsurgical patient-facing information overall. Additionally, 30 individuals with no medical background (ages 18-81, μ = 49.8) were asked to indicate a preference when blindly comparing the materials. Readability scores were calculated, and all numerical scores were analyzed using the following readability formulas: Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning Fog Index, Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index, Linsear Write Formula (LWF), and Automated Readability Index. Statistical analysis of the microsurgery-specific online sources was conducted using paired t-tests. RESULTS: Statistically significant differences in comprehensiveness and clarity were seen in favor of ChatGPT. Surgeons blindly chose ChatGPT as the source that overall provided the highest-quality microsurgical patient-facing information 70.7% of the time; non-medical individuals selected the AI-generated microsurgical materials 55.9% of the time. Neither ChatGPT- nor ASRM-generated materials were found to contain inaccuracies. Readability scores for both ChatGPT and ASRM materials exceeded recommended levels for patient proficiency across the readability formulas, with the AI-based material scoring as more complex.
CONCLUSION: When blindly compared with online material provided by ASRM, AI-generated patient-facing materials were preferred by surgeons in terms of comprehensiveness and clarity, and the AI-generated material studied was not found to contain inaccuracies. Both surgeons and non-medical individuals consistently indicated an overall preference for the AI-generated material. Readability analysis suggested that materials from both ChatGPT and ASRM surpassed the recommended reading levels across the readability formulas assessed.
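The readability formulas named in the methods are standard published metrics with fixed coefficients. As a minimal sketch (not code from the paper), two of them can be computed from raw word, sentence, and syllable counts; how those counts are derived from text is an assumption left to the reader.

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)"""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


# Illustrative passage statistics (hypothetical, not from the study):
# 100 words, 5 sentences, 150 syllables.
grade = flesch_kincaid_grade(100, 5, 150)  # 9.91 -> roughly a 10th-grade level
ease = flesch_reading_ease(100, 5, 150)    # 59.635 -> "fairly difficult"
```

A grade level near 10 illustrates the abstract's point: patient-facing materials are commonly recommended to sit at about a 6th-grade reading level, and both sources scored above that threshold.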