CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model

Di Peng, Jianguo Li, Hao Yu, Wei Jian, Wei Cai, Yanan Cao, Zhaoyu Chen, Dajun Chen, Hongwei Chen, Chen Liang, Gang Fan, Jianya Gong, Zi Gong, Huiying Wen, Tingting Guo, Zhichao Lei, Ting Ting Li, Zheng Li, Ming Liang, Calvin C.Y. Liao, Bingchang Liu, Jiachen Liu, Zhiwei Liu, Shaojun Lu, Min Su, G.Q. Wang, Huan Wang, Zhi Wang, Zhaogui Xu, Jing Yang, Qing Ye, Gehao Zhang, Zujiang Yu, Zelin Zhao, Xianwei Zheng, Huanjiao Jenny Zhou, Linghua Zhu, Xianying Zhu

arXiv (Cornell University), 2023

Abstract
Code Large Language Models (Code LLMs) have gained significant attention in industry due to their wide-ranging applications across the full software engineering lifecycle. However, the effectiveness of existing models in understanding non-English inputs for multi-lingual code-related tasks remains far from well studied. This paper introduces CodeFuse-13B, an open-source pre-trained code LLM. It is specifically designed for code-related tasks with both English and Chinese prompts and supports over 40 programming languages. CodeFuse achieves its effectiveness through a high-quality pre-training dataset that is carefully filtered by program analyzers and optimized during the training process. Extensive experiments are conducted using real-world usage scenarios, the industry-standard benchmark HumanEval-X, and the specially designed CodeFuseEval benchmark for Chinese prompts. To assess the effectiveness of CodeFuse, we actively collected valuable human feedback from Ant Group's software development process, where CodeFuse has been successfully deployed. The results demonstrate that CodeFuse-13B achieves a HumanEval pass@1 score of 37.10%, positioning it as one of the top multi-lingual code LLMs of similar parameter size. In practical scenarios such as code generation, code translation, code commenting, and test case generation, CodeFuse outperforms other models when confronted with Chinese prompts.
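For context, HumanEval-style pass@1 scores such as the 37.10% reported above are conventionally computed with the unbiased pass@k estimator introduced with the original HumanEval benchmark (Chen et al., 2021). The snippet below is a minimal, generic sketch of that estimator; it is not taken from the CodeFuse codebase, and the sample counts in the usage example are purely illustrative.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated for a problem
    c: number of samples that pass all unit tests
    k: the k in pass@k
    """
    if n - c < k:
        # Every size-k draw contains at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative usage: 20 samples per problem, 8 of which pass the tests.
# For k=1 this reduces to the passing fraction c/n = 0.4.
print(pass_at_k(n=20, c=8, k=1))  # 0.4
```

The final benchmark score is the mean of this per-problem estimate over all problems in the benchmark.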
Keywords
large language model, pretrained, multi-lingual