Can artificial intelligence provide accurate and reliable answers to cancer patients’ questions about cancer pain? Comparison of chatbots based on ESMO cancer pain guideline

memo - Magazine of European Medical Oncology (2024)

Abstract
Purpose: The purpose of this study was to assess the accuracy, readability, and stability of chatbots' answers to patients' questions about cancer pain management.
Materials and methods: Twenty questions about cancer pain were constructed based on the European Society for Medical Oncology (ESMO) cancer pain patient guide. These questions were posed to five chatbots: Google Bard (Google AI, USA), ChatGPT-4 (Chat Generative Pre-trained Transformer, OpenAI, USA; premium version), ChatGPT-3.5 (free version), Perplexity (Perplexity AI, USA), and Chatsonic (WriteSonic, USA). Three medical oncologists, each with at least 10 years of experience, evaluated the chatbots' responses for accuracy, readability, and stability.
Results: ChatGPT-4 had the highest accuracy rate (96%). Perplexity had the highest readability but the lowest accuracy (86%). Google Bard and ChatGPT-4 were the most stable chatbots (100%). Both versions of ChatGPT provided extensive information, but their answers included only information available before September 2021.
Conclusion: All of the chatbots evaluated were insufficient sources of accurate information for cancer patients and their families, and they require further development.
Key words
Artificial intelligence, Accurate information, Cancer patients, Chatbots, Supportive care