Multitask-based Evaluation of Open-Source LLM on Software Vulnerability
arXiv (2024)
Abstract
This paper proposes a pipeline for quantitatively evaluating interactive LLMs
using publicly available datasets. We carry out an extensive technical
evaluation of LLMs using Big-Vul covering four different common software
vulnerability tasks. We evaluate the multitask and multilingual aspects of LLMs
based on this dataset. We find that the existing state-of-the-art methods are
generally superior to LLMs in software vulnerability detection. Although
providing context information improves LLMs' accuracy, they still struggle to
predict severity ratings accurately for certain CWE types. In
addition, LLMs demonstrate some ability to locate vulnerabilities for certain
CWE types, but their performance varies among different CWE types. Finally,
LLMs show uneven performance in generating CVE descriptions for various CWE
types, with limited accuracy in a few-shot setting. Overall, though LLMs
perform well in some aspects, they still need a better understanding of the
subtle differences between code vulnerabilities and a stronger ability to
describe them before they can fully realize their potential. Our evaluation pipeline
provides valuable insights for further enhancing LLMs' software vulnerability
handling capabilities.
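
The abstract describes evaluating LLMs on four vulnerability tasks over a shared dataset. A minimal sketch of such a multitask evaluation loop is below; all names (`query_model`, the task list, the toy sample) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a multitask vulnerability-evaluation pipeline.
# The four tasks mirror those named in the abstract; query_model is a
# stand-in for a real LLM call.

TASKS = [
    "vulnerability_detection",    # binary: vulnerable or not
    "severity_rating",            # predict a severity level
    "vulnerability_localization", # identify the vulnerable lines
    "cve_description",            # generate a CVE-style description
]

def query_model(task, code_snippet, context=None):
    """Stand-in for an LLM call; returns a canned answer per task."""
    return {"task": task, "answer": "vulnerable" if task == "vulnerability_detection" else "n/a"}

def evaluate(samples):
    """Run every task over every sample and collect per-task predictions."""
    results = {task: [] for task in TASKS}
    for sample in samples:
        for task in TASKS:
            pred = query_model(task, sample["code"], sample.get("context"))
            results[task].append(pred)
    return results

samples = [{"code": "strcpy(buf, user_input);", "label": "CWE-787"}]
results = evaluate(samples)
```

Per-CWE metrics (as the paper reports) would then be computed by grouping these predictions by each sample's CWE label.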