Cordelia Ludden, Cordell Burton Jr, and Daniel Votipka, Tufts University
The use of AI assistants across industries has grown rapidly with the release of tools such as ChatGPT and GitHub Copilot. According to the 2023 Stack Overflow Developer Survey, approximately 70% of professional developers are using or planning to use AI tools in their development processes, highlighting the widespread adoption of these technologies in coding. While these tools can boost productivity, prior work has demonstrated that the underlying LLMs often generate insecure code. We examined how developers perceive these security issues and how they incorporate AI assistants into their coding practices.
To do this, we reviewed posts and comments on relevant computer science subreddits and qualitatively coded the results. The relevant discussion fell into two broad categories: descriptions of using AI to write code, and opinions on using AI assistants for coding-related tasks. The most common response we found was that participants used, or wanted to use, these assistants to write code for their projects. While few posts or comments addressed code security directly, a large volume of responses mentioned that AI assistants often generate bad code when people use them to write code. We believe the pattern of adoption we observed suggests developers treat AI as an assistant rather than a primary tool. In addition, individuals' skepticism toward AI-generated code may be beneficial compared to other developer support services such as Stack Overflow, where prior work has shown developers are often not skeptical and simply copy and paste insecure code.
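To give a concrete sense of how such a review might be operationalized, below is a minimal sketch of keyword-based pre-tagging that could precede manual qualitative coding. The category names and keyword lists are illustrative assumptions, not the study's actual codebook, which was developed from the data itself.

```python
# Hypothetical pre-tagging step for qualitative coding of Reddit comments.
# Categories and keywords below are illustrative assumptions, not the
# codebook used in the study.

CATEGORIES = {
    "usage": ["generate code", "use chatgpt", "use copilot", "wrote my code"],
    "opinion": ["bad code", "insecure", "don't trust", "skeptical"],
}

def tag_comment(text: str) -> list[str]:
    """Return every category whose keywords appear in the comment text."""
    lowered = text.lower()
    return [
        category
        for category, keywords in CATEGORIES.items()
        if any(keyword in lowered for keyword in keywords)
    ]

# Toy examples standing in for scraped Reddit comments.
comments = [
    "I use ChatGPT to generate code for most of my side projects.",
    "Copilot writes bad code whenever the problem gets non-trivial.",
]
tags = [tag_comment(c) for c in comments]
```

A pass like this only surfaces candidate comments; in a study such as this one, human coders would still read and code each item, since keyword matching misses sarcasm, context, and novel phrasing.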