Your Instructions Are Not Always Helpful: Assessing the Efficacy of Instruction Fine-tuning for Software Vulnerability Detection
CoRR (2024)
Abstract
Software, while beneficial, poses potential cybersecurity risks due to
inherent vulnerabilities. Detecting these vulnerabilities is crucial, and deep
learning has shown promise as an effective tool for this task because it
performs well without extensive feature engineering. However, a key
challenge in deploying deep learning for vulnerability detection is the limited
availability of training data. Recent research highlights the efficacy of deep
learning across diverse tasks, a success often attributed to instruction
fine-tuning, a technique that remains under-explored in the context of
vulnerability detection. This paper investigates the capability of models,
specifically a recent language model, to generalize beyond the programming
languages in their training data, and examines the role of natural
language instructions in enhancing this generalization. Our study evaluates
model performance on a real-world dataset for predicting vulnerable code. We
present key insights and lessons learned, contributing to an understanding of
how deep learning can be applied to software vulnerability detection.
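As a purely illustrative sketch of the instruction fine-tuning setup the abstract refers to (the exact prompt wording, field names, and label encoding below are assumptions, not taken from the paper), a vulnerability-detection sample is typically wrapped in a natural-language instruction/response pair before fine-tuning:

```python
# Hedged sketch: wrapping a labeled code snippet in an instruction-style
# training example, as commonly done for instruction fine-tuning.
# The prompt text and the {"instruction", "input", "output"} schema are
# illustrative assumptions, not the paper's actual format.

def format_instruction_example(code: str, label: int) -> dict:
    """Turn a (code, label) pair into an instruction/response record."""
    instruction = (
        "Determine whether the following function contains a security "
        "vulnerability. Answer 'vulnerable' or 'not vulnerable'."
    )
    response = "vulnerable" if label == 1 else "not vulnerable"
    return {
        "instruction": instruction,
        "input": code,
        "output": response,
    }

# Hypothetical C snippet with a classic buffer-overflow pattern (label 1):
sample = format_instruction_example(
    "void copy(char *dst, char *src) { strcpy(dst, src); }", 1
)
print(sample["output"])  # the target string the model is tuned to produce
```

In this setup, fine-tuning teaches the model to map the instruction plus code input to the expected textual response, which is what lets natural-language instructions steer the classification behavior.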