CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization

Yang Zhao, Di Huang, Chongxiao Li, Pengwei Jin, Ziyuan Nan, Tianyun Ma, Lei Qi, Yansong Pan, Zhenxing Zhang, Rui Zhang, Xishan Zhang, Zidong Du, Qi Guo, Xing Hu, Yunji Chen

arXiv (2024)

Abstract
The increasing complexity and high cost of modern processor design have led to a surge in demand for processor design automation. Instruction-tuned large language models (LLMs) have demonstrated remarkable performance in automatically generating code for general-purpose programming languages like Python. However, these methods fail on hardware description languages (HDLs) like Verilog due to the scarcity of high-quality instruction-tuning data; even advanced LLMs like GPT-3.5 exhibit limited performance on Verilog generation. In response to this issue, we observe that (1) Verilog code collected from the real world is of higher quality than that generated by LLMs, and (2) LLMs like GPT-3.5 excel at summarizing Verilog code rather than generating it. Based on these observations, this paper introduces CodeV, a series of open-source instruction-tuned Verilog generation LLMs. Instead of generating descriptions first and then obtaining the corresponding code from advanced LLMs, we prompt the LLM with Verilog code and let it generate the corresponding natural-language description via multi-level summarization. Experimental results show that CodeV relatively surpasses the previous open-source SOTA by 14.4% (BetterV in VerilogEval) and 11.3% (RTLCoder in RTLLM) respectively, and also relatively outperforms the previous commercial SOTA, GPT-4, by 22.1% in VerilogEval.
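The key reversal the abstract describes is starting from real-world Verilog and asking an LLM to summarize it into a description, rather than generating code from a description. The sketch below illustrates that idea, assuming the OpenAI Python client (openai>=1.0); the prompt wording, the `ask` helper, and the two-level split are illustrative assumptions, not the paper's actual pipeline or templates.

```python
"""Illustrative sketch of multi-level summarization for building
(description, code) instruction pairs from collected Verilog."""
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment


def ask(prompt: str) -> str:
    """Single chat-completion call to GPT-3.5 (hypothetical helper)."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def multi_level_summarize(verilog_code: str) -> str:
    """Summarize real-world Verilog bottom-up: first describe each
    functional block, then condense into a requirement-style spec."""
    # Level 1: fine-grained summary of what each part of the module does.
    detailed = ask(
        "Summarize what each part of this Verilog module does:\n"
        f"```verilog\n{verilog_code}\n```"
    )
    # Level 2: condense the detailed summary into a high-level
    # natural-language specification, phrased as a generation request.
    description = ask(
        "Based on the following summary, write a concise natural-language "
        f"specification for the module:\n{detailed}"
    )
    return description


# Each (description, verilog_code) pair then becomes one instruction-tuning
# example: the description is the instruction, the code is the response.
```

Because the collected code is the ground truth and the LLM only has to summarize it, the resulting pairs sidestep the quality ceiling of LLM-generated Verilog.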