Double-Edged Sword of LLMs: Mitigating Security Risks of AI-Generated Code

Ramesh Bharadwaj, Ilya Parker

DISRUPTIVE TECHNOLOGIES IN INFORMATION SCIENCES VII (2023)

Abstract
With the increasing reliance on collaborative and cloud-based systems, attack surfaces and code vulnerabilities have grown dramatically. Automation is key to fielding and defending software systems at scale. Researchers in symbolic AI have had considerable success in finding flaws in human-written code, and run-time testing methods such as fuzzing also uncover numerous bugs. However, both approaches share a major deficiency: they cannot fix the errors they discover, they scale poorly, and they defy automation. Static analysis methods further suffer from the false-positive problem: an overwhelming share of reported flaws are not real bugs. This raises an interesting conundrum: symbolic approaches can actually harm programmer productivity and therefore do not necessarily improve code quality. What is needed is a combination of automated code generation using large language models (LLMs) with scalable defect elimination using symbolic AI, creating an environment for the automated generation of defect-free code.
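The abstract stays at the level of the idea, so the following is only a minimal sketch of one way the proposed generate-analyze-repair loop could be wired together. The callables `generate_code` and `analyze` are hypothetical placeholders standing in for an LLM client and a symbolic/static analyzer; nothing below is drawn from the paper's actual system.

```python
from typing import Callable, List

def generate_defect_free(
    spec: str,
    generate_code: Callable[[str], str],   # hypothetical LLM client: prompt -> candidate code
    analyze: Callable[[str], List[str]],   # hypothetical symbolic analyzer: code -> defect reports
    max_rounds: int = 5,
) -> str:
    """Iterate LLM generation against a symbolic analyzer until no defects remain."""
    prompt = spec
    for _ in range(max_rounds):
        candidate = generate_code(prompt)
        defects = analyze(candidate)
        if not defects:
            # The analyzer reports no flaws, so accept this candidate.
            return candidate
        # Feed the analyzer's findings back into the prompt so the
        # LLM can repair the defects it introduced.
        prompt = (
            f"{spec}\n\nPrevious attempt:\n{candidate}\n\n"
            "Fix these reported defects:\n" + "\n".join(defects)
        )
    raise RuntimeError("no defect-free candidate within the round budget")
```

Whether the output is genuinely defect-free hinges on the soundness of the analyzer and the loop's termination; the sketch only illustrates the division of labor the abstract argues for, with the LLM supplying scale and the symbolic component supplying rigor.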
Keywords
Large Language Models, Generative AI, Symbolic AI, Automatic Code Generation, Code Defect Mitigation