How Secure is AI-based Coding?: A Security Analysis Using STRIDE and Data Flow Diagrams

2023 IEEE Virtual Conference on Communications (VCC)

Abstract
The widespread adoption of Artificial Intelligence (AI)-based coding tools such as ChatGPT, Copilot, OpenAI Codex, and Tabnine necessitates a comprehensive evaluation of their security, aimed at identifying potential threats and vulnerabilities. In this paper, we present a systematic, structured approach to threat model-based security analysis of AI-based coding tools using the widely accepted STRIDE threat model and data flow diagrams (DFDs). By establishing clear system boundaries and constructing detailed DFDs, we visually represent the data flow within the system. We then apply STRIDE threat modeling, encompassing spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege, to examine potential threats thoroughly and prioritize them by impact, which allows us to develop targeted mitigation strategies. Our threat model provides organizations with a robust methodology for securing their AI-based coding tools and proactively addressing emerging risks to build a trustworthy coding environment.
Keywords
AI-based coding, ChatGPT, Copilot, Data Flow Diagram, Security, Threat Model
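Illustrative example
The abstract outlines a STRIDE-over-DFD workflow: enumerate the elements of a data flow diagram, apply the STRIDE categories applicable to each element type, and rank the resulting threats by impact to guide mitigation. Below is a minimal Python sketch of that workflow. The element-type-to-STRIDE mapping follows the common Microsoft SDL "STRIDE-per-element" convention; the element names and impact scores are hypothetical assumptions, and the paper's actual procedure and ratings may differ.

# Minimal sketch of STRIDE-per-element threat enumeration over a DFD.
# The element names and impact scores below are illustrative assumptions,
# not values taken from the paper.
from dataclasses import dataclass

# Conventional STRIDE-per-element applicability (Microsoft SDL convention):
# S=Spoofing, T=Tampering, R=Repudiation, I=Information disclosure,
# D=Denial of service, E=Elevation of privilege.
STRIDE_PER_ELEMENT = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TRID",
    "data_flow": "TID",
}

@dataclass
class Element:
    name: str
    kind: str    # one of the keys in STRIDE_PER_ELEMENT
    impact: int  # assumed rating, 1 (low) to 3 (high)

def enumerate_threats(dfd):
    """Yield (impact, STRIDE category, element name) for every applicable pair."""
    for element in dfd:
        for category in STRIDE_PER_ELEMENT[element.kind]:
            yield (element.impact, category, element.name)

# Hypothetical DFD for an AI-based coding assistant.
dfd = [
    Element("Developer IDE", "external_entity", impact=2),
    Element("Code suggestion service", "process", impact=3),
    Element("Prompt/completion traffic", "data_flow", impact=3),
    Element("Model + telemetry store", "data_store", impact=2),
]

# Rank threats by impact, highest first, to focus mitigation effort.
for impact, category, name in sorted(enumerate_threats(dfd), reverse=True):
    print(f"impact={impact}  {category}  {name}")

In practice the per-element mapping keeps the enumeration exhaustive but bounded, and the impact-sorted output gives the prioritized threat list the abstract describes as the input to targeted mitigation.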