Automatic Static Vulnerability Detection for Machine Learning Libraries: Are We There Yet?

2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE)

Abstract
Automatic detection of software security vulnerabilities is critical to software quality assurance, and many static analysis tools that can help detect such vulnerabilities have been proposed. However, these tools are mainly evaluated on general software projects, which calls into question their practical effectiveness and usefulness for Machine Learning (ML) libraries. In this paper, we address this question by analyzing five popular and widely used static analysis tools, i.e., Flawfinder, RATS, Cppcheck, Facebook Infer, and the Clang Static Analyzer, on a curated dataset of software security vulnerabilities gathered from four popular ML libraries, namely Mlpack, MXNet, PyTorch, and TensorFlow, with a total of 410 known vulnerabilities. Our research categorizes these tools' capabilities to better understand their strengths and weaknesses for detecting software security vulnerabilities in ML libraries. Overall, our study shows that the static analysis tools find only a negligible share of all security vulnerabilities, accounting for 5 of the 410 unique vulnerabilities (about 1.2%), with Flawfinder and RATS being the most effective static checkers for finding software security vulnerabilities in ML libraries. We further identify and discuss opportunities to make the tools more effective and practical based on our observations.
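
For context, the following is a minimal sketch of how two of the evaluated checkers (Flawfinder and Cppcheck) can be pointed at a C/C++ library checkout; the source path and output handling are illustrative assumptions, not the paper's actual evaluation harness.

# Illustrative sketch only: invokes two of the evaluated static checkers
# (Flawfinder and Cppcheck) on a local C/C++ source tree.
# SOURCE_DIR is a placeholder, not the paper's evaluation setup.
import subprocess

SOURCE_DIR = "path/to/ml-library/src"  # hypothetical checkout of an ML library

def run_checker(cmd):
    # Checkers report findings on stdout/stderr (and may use a nonzero exit
    # code when issues are found), so capture both streams instead of
    # raising on exit status.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    # Flawfinder: lexical, pattern-based scan for risky C/C++ API usage.
    print(run_checker(["flawfinder", SOURCE_DIR]))
    # Cppcheck: --enable=all turns on optional checks beyond the default errors.
    print(run_checker(["cppcheck", "--enable=all", SOURCE_DIR]))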
Keywords
Software vulnerabilities, static detection, machine learning libraries