Software Testing with Large Language Models: Survey, Landscape, and Vision
IEEE Transactions on Software Engineering (2023)
Abstract
Pre-trained large language models (LLMs) have recently emerged as a
breakthrough technology in natural language processing and artificial
intelligence, with the ability to handle large-scale datasets and exhibit
remarkable performance across a wide range of tasks. Meanwhile, software
testing is a crucial undertaking that serves as a cornerstone for ensuring the
quality and reliability of software products. As the scope and complexity of
software systems continue to grow, the need for more effective software testing
techniques becomes increasingly urgent, making it an area ripe for innovative
approaches such as the use of LLMs. This paper provides a comprehensive review
of the utilization of LLMs in software testing. It analyzes 102 relevant
studies that have used LLMs for software testing, from both the software
testing and LLMs perspectives. The paper presents a detailed discussion of the
software testing tasks for which LLMs are commonly used, among which test case
preparation and program repair are the most representative. It also analyzes
the commonly used LLMs, the types of prompt engineering employed, and the
techniques that accompany these LLMs. It further summarizes the key
challenges and potential opportunities in this direction. This work can serve
as a roadmap for future research in this area, highlighting potential avenues
for exploration, and identifying gaps in our current understanding of the use
of LLMs in software testing.
Keywords
Pre-trained Large Language Model, Software Testing, LLM, GPT