“Learning Too Much About Me”: A User Study on the Security and Privacy of Generative AI Chatbots

Authors: 

Pradyumna Shome and Miuyin Marie Yong Wong, Georgia Institute of Technology

Abstract: 

Generative AI has burgeoned in the past few years, leading to highly interactive and human-like chatbots. Built on models with billions of parameters and trained on vast corpora drawn from the public Internet, tools like ChatGPT, Copilot, and Bard (which we refer to as generative AI chatbots) write code, draft emails, provide mental health counseling, teach us about the world, and act as mentors to help people advance their careers.

On the other hand, many communities have expressed reservations about widespread use of this technology. Artists and writers worry about the loss of their intellectual property rights and the potential for their work to be plagiarized. Educators are concerned that students will cheat on assignments, that automated grading platforms will exhibit bias, and that chatbots will provide incorrect information. Medical professionals fear that chatbots may misdiagnose patients, and that patients may rely on inappropriate advice [9]. There is fear of the unknown, justified concern about misuse, and worry about societal harm. As with other revolutionary advancements, there is also pressure to adopt these tools to keep up with technology and remain competitive. Before we can bridge the gap between these concerns and that pressure, we must understand the status quo.

Students are likely to be early adopters of new technology. By examining their initial experiences, we can gain insight into the concerns of young adults about to enter an AI-integrated workplace. We conducted an online survey of 86 students, faculty, and staff at our university, focused on the security and privacy concerns affecting chatbot use. We found that participants are well aware of the risks of data harvesting and inaccurate responses, and remain cautious in their use of AI in sensitive contexts, which we unpack in later sections.

Research Question
What security and privacy concerns do students at a large public US university have with adopting generative AI, and how can we overcome them?
