GPT-3.5 for Code Review Automation: How Do Few-Shot Learning, Prompt Design, and Model Fine-Tuning Impact Their Performance?

CoRR (2024)

Abstract
Recently, several large language models (LLMs), i.e., large pre-trained models based on the transformer architecture, have been proposed. Prior studies in the natural language processing and software engineering fields have experimented with different approaches to leveraging LLMs for downstream tasks. However, the existing literature still lacks a study of the different approaches to leveraging GPT-3.5 (e.g., prompt engineering, few-shot learning, and model fine-tuning) for the code review automation task (i.e., automatically generating improved code from submitted code). Thus, little is known about how GPT-3.5 should be leveraged for this task. To fill this knowledge gap, we set out to investigate the impact of few-shot learning, prompt design (i.e., using a persona pattern), and model fine-tuning on GPT-3.5 for the code review automation task. Through an experimental study on three code review automation datasets, we find that (1) when few-shot learning is performed, GPT-3.5 achieves at least 46.38% higher Exact Match, as well as higher CodeBLEU, than GPT-3.5 with zero-shot learning; (2) when a persona is included in the input prompts used to generate improved code, GPT-3.5 achieves at least 1.02% lower Exact Match than when the persona is not included; (3) fine-tuned GPT-3.5 achieves at least 9.74% higher Exact Match and 0.12% higher CodeBLEU than GPT-3.5 with few-shot learning; and (4) fine-tuned GPT-3.5 achieves at least 11.48% higher Exact Match than the existing code review automation approaches. Based on our experimental results, we recommend that when using GPT-3.5 for code review automation, (1) few-shot learning should be performed rather than zero-shot learning, (2) a persona should not be included when constructing prompts, and (3) GPT-3.5 should be fine-tuned using a small training dataset.
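The prompting approaches compared in the abstract (zero-shot vs. few-shot, with or without a persona) differ only in how the chat prompt is assembled before calling the model. The sketch below illustrates that prompt construction; it is not the authors' implementation, and the persona wording, example pairs, helper names, and model identifier are illustrative assumptions. Fine-tuning is not shown; a fine-tuned variant would simply be invoked via its own model id.

```python
# Illustrative sketch (not the paper's code) of the prompt variants compared in
# the abstract: zero-shot vs. few-shot prompting, with or without a persona.
# Requires the official `openai` Python package and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Hypothetical persona text (the paper uses a persona pattern; exact wording unknown).
PERSONA = "You are an expert software developer performing a code review."

# Hypothetical few-shot examples: (submitted code, improved code) pairs that would
# normally be drawn from a code review dataset; these placeholders are made up.
FEW_SHOT_EXAMPLES = [
    ("def add(a, b):\n    return a+b",
     "def add(a: int, b: int) -> int:\n    return a + b"),
]


def build_messages(submitted_code: str, use_persona: bool, few_shot: bool) -> list[dict]:
    """Assemble a chat prompt asking the model to generate improved code."""
    messages = []
    if use_persona:
        messages.append({"role": "system", "content": PERSONA})
    if few_shot:
        # Each demonstration is a user/assistant turn: submitted code -> improved code.
        for before, after in FEW_SHOT_EXAMPLES:
            messages.append({"role": "user", "content": f"Improve the following code:\n{before}"})
            messages.append({"role": "assistant", "content": after})
    messages.append({"role": "user", "content": f"Improve the following code:\n{submitted_code}"})
    return messages


def generate_improved_code(submitted_code: str, use_persona: bool, few_shot: bool) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a fine-tuned model would be referenced by its own id here
        messages=build_messages(submitted_code, use_persona, few_shot),
        temperature=0.0,  # deterministic output, convenient for Exact Match / CodeBLEU evaluation
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    code = "def square(x): return x*x"
    print(generate_improved_code(code, use_persona=False, few_shot=True))
```

The generated code would then be compared against the ground-truth improved code using Exact Match and CodeBLEU, the two metrics reported in the abstract; the sketch above covers only prompt construction and generation.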