Traditional Chinese Medicine Formula Classification Using Large Language Models

Zhe Wang, Keqian Li, Quanying Ren, Keyu Yao, Yan Zhu

2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (2023)

Abstract
Objective: In this study, we investigate the use of large language models (LLMs) for traditional Chinese medicine (TCM) formula classification through fine-tuning and prompt templates. Methods: We refined and cleaned data from the Coding Rules for Chinese Medicinal Formulas and Their Codes [1], the Chinese National Medical Insurance Catalog for Proprietary Chinese Medicines [2], and Textbooks of Formulas of Chinese Medicine [3] to standardize the TCM formula information, ultimately extracting 2,308 TCM formulas as the dataset for this study. We designed a prompt template for the TCM formula classification task and randomly divided the formula dataset into three subsets: a training set (2,000 formulas), a test set (208 formulas), and a validation set (100 formulas). We fine-tuned the open-source LLMs ChatGLM-6b and ChatGLM2-6b. Finally, we evaluated all LLMs selected for this study: ChatGLM-6b (original), ChatGLM2-6b (original), ChatGLM-130b, InternLM-20b, ChatGPT, ChatGLM-6b (fine-tuned), and ChatGLM2-6b (fine-tuned). Results: ChatGLM2-6b (fine-tuned) and ChatGLM-6b (fine-tuned) achieved the highest accuracy rates on the validation set, at 71% and 70%, respectively. The accuracy rates of the other models were: ChatGLM-130b, 58%; ChatGPT, 53%; InternLM-20b, 52%; ChatGLM2-6b (original), 41%; and ChatGLM-6b (original), 23%. Conclusion: Through fine-tuning and the use of prompt templates, LLMs achieved 71% accuracy on the formula classification task in our study, providing a novel option for applying LLMs in the field of TCM.
Keywords
Large Language Models, Formula classification, Traditional Chinese Medicine, Fine-tuning for TCM, Prompt template for TCM
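The abstract describes a prompt-template-plus-accuracy-evaluation pipeline but does not give the template wording or evaluation code. The following is a minimal sketch of how such an evaluation could look; the prompt phrasing, field names (`name`, `herbs`, `label`), and the `model_fn` interface are all illustrative assumptions, not the authors' actual implementation.

```python
def build_prompt(formula_name, herbs):
    """Build a classification prompt for one TCM formula.

    The template text here is a hypothetical stand-in for the
    paper's undisclosed prompt template.
    """
    return (
        f"Formula: {formula_name}\n"
        f"Ingredients: {', '.join(herbs)}\n"
        "Task: classify this traditional Chinese medicine formula "
        "into its functional category. Answer with the category name only."
    )

def evaluate(model_fn, dataset):
    """Compute classification accuracy.

    model_fn maps a prompt string to a predicted category label,
    standing in for a call to a (possibly fine-tuned) LLM.
    """
    correct = 0
    for example in dataset:
        prompt = build_prompt(example["name"], example["herbs"])
        if model_fn(prompt).strip() == example["label"]:
            correct += 1
    return correct / len(dataset)

# Usage with two well-known formulas and a stub "model" that always
# predicts the same category, so exactly one of the two is correct.
demo = [
    {"name": "Si Jun Zi Tang",
     "herbs": ["Ren Shen", "Bai Zhu", "Fu Ling", "Gan Cao"],
     "label": "Qi-tonifying"},
    {"name": "Ma Huang Tang",
     "herbs": ["Ma Huang", "Gui Zhi", "Xing Ren", "Gan Cao"],
     "label": "Exterior-releasing"},
]
stub_model = lambda prompt: "Qi-tonifying"
print(evaluate(stub_model, demo))  # → 0.5
```

In the paper's setup, `model_fn` would wrap inference against one of the seven evaluated models, and `dataset` would be the 100-formula validation set, yielding the reported accuracy figures (e.g. 0.71 for ChatGLM2-6b fine-tuned).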