All sessions will be held in Salon E unless otherwise noted.
Papers are available for download below to registered attendees now. The papers and the full proceedings will be available to everyone beginning Monday, August 12, 2024. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].
Proceedings Front Matter
Proceedings Cover | Title Page, Copyright Page, and List of Organizers | Message from the Program Co-Chairs | Table of Contents
Monday, August 12
9:00 am–9:30 am
Opening Remarks, Twenty-Year Retrospective, and Awards
General Chairs: Patrick Gage Kelley, Google, and Apu Kapadia, Indiana University Bloomington
9:30 am–11:00 am
Expert Community
Session Chair: Peter Mayer, University of Southern Denmark
A Survey of Cybersecurity Professionals’ Perceptions and Experiences of Safety and Belonging in the Community
Samantha Katcher, Liana Wang, and Caroline Yang, Tufts University; Chloé Messdaghi, SustainCyber; Michelle L. Mazurek, University of Maryland; Marshini Chetty, University of Chicago; Kelsey R. Fulton, Colorado School of Mines; Daniel Votipka, Tufts University
The cybersecurity workforce lacks diversity; the field is predominantly men and White or Asian, with only 10% identifying as women, Latine, or Black. Previous studies identified access to supportive communities as a possible disparity between marginalized and non-marginalized cybersecurity professional populations and highlighted this support as a key to career success. We focus on these community experiences by conducting a survey of 342 cybersecurity professionals to identify differences in perceptions and experiences of belonging across demographic groups. Our results show a discrepancy between experiences for different gender identities, with women being more likely than men to report experiencing harassment and unsupportive environments because of their gender. Psychological safety was low across all demographic groups, meaning participants did not feel comfortable engaging with or speaking up in the community. Based on these results, we provide recommendations to community leaders.
Evaluating the Usability of Differential Privacy Tools with Data Practitioners
Ivoline C. Ngong, Brad Stenger, Joseph P. Near, and Yuanyuan Feng, University of Vermont
Differential privacy (DP) has become the gold standard in privacy-preserving data analytics, but implementing it in real-world datasets and systems remains challenging. Recently developed DP tools aim to make DP implementation easier, but limited research has investigated these DP tools' usability. Through a usability study with 24 US data practitioners with varying prior DP knowledge, we evaluated the usability of four open-source Python-based DP tools: DiffPrivLib, Tumult Analytics, PipelineDP, and OpenDP. Our study results suggest that these DP tools moderately support data practitioners' DP understanding and implementation, and that application programming interface (API) design and documentation are vital for successful DP implementation and user satisfaction. We provide evidence-based recommendations to improve DP tools' usability to broaden DP adoption.
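To make concrete what "DP implementation" involves for a practitioner, here is a minimal sketch of a differentially private query using DiffPrivLib, one of the four tools evaluated; the dataset, epsilon, and bounds are invented for illustration and do not come from the paper.

```python
# Minimal sketch of a DP query with DiffPrivLib (illustrative values only).
import numpy as np
from diffprivlib import tools as dp

ages = np.random.randint(18, 90, size=1000)  # synthetic stand-in dataset

# Clamping bounds must be chosen independently of the data; deriving
# them from the data itself would leak information.
dp_mean = dp.mean(ages, epsilon=1.0, bounds=(18, 90))
print(f"Differentially private mean age: {dp_mean:.2f}")
```

Even in this toy example, the practitioner must reason about the privacy budget (epsilon) and clamping bounds, which is the kind of decision-making whose usability the study probes.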
Navigating Autonomy: Unveiling Security Experts' Perspectives on Augmented Intelligence in Cybersecurity
Neele Roch, Hannah Sievers, Lorin Schöni, and Verena Zimmermann, ETH Zurich
The rapidly evolving cybersecurity threat landscape and shortage of skilled professionals are amplifying the need for technical support. AI tools offer great opportunities to support security experts by augmenting their intelligence and allowing them to focus on their unique human skills and expertise. For the successful design of AI tools and expert-AI interfaces, however, it is essential to understand the specialised security-critical context and the experts' requirements. To this end, 27 in-depth interviews with security experts, mostly in high-level managerial roles, were conducted and analysed using a grounded theory approach. The interviews showed that experts assigned tasks to AI, humans, or the human-AI team according to the skills they attributed to them. However, deciding how autonomously an AI tool should be able to perform tasks is a challenge that requires experts to weigh up factors such as trust, type of task, benefits, and risks. The resulting decision framework enhances understanding of the interplay between trust in AI, especially influenced by its transparency, and different levels of autonomy. As these factors affect the adoption of AI and the success of expert-AI collaboration in cybersecurity, it is important to further investigate them in the context of experts' AI-related decision-making processes.
Comparing Malware Evasion Theory with Practice: Results from Interviews with Expert Analysts
Miuyin Yong Wong, Matthew Landen, Frank Li, Fabian Monrose, and Mustaque Ahamad, Georgia Institute of Technology
Malware analysis is the process of identifying whether certain software is malicious and determining its capabilities. Unfortunately, malware authors have developed increasingly sophisticated ways to evade such analysis. While a significant amount of research has been aimed at countering a spectrum of evasive techniques, recent work has shown that analyzing malware that employs evasive behaviors remains a daunting challenge. To determine whether gaps exist between evasion techniques addressed by research and challenges faced by practitioners, we conduct a systematic mapping of evasion countermeasures published in research and juxtapose it with a user study on the analysis of evasive malware with 24 expert malware analysts from 15 companies as participants. More specifically, we aim to understand (i) what malware evasion techniques are being addressed by research, (ii) what are the most challenging evasion techniques malware analysts face in practice, (iii) what are common methods analysts use to counter such techniques, and (iv) whether evasion countermeasures explored by research align with challenges faced by analysts in practice. Our study shows that there are challenging evasion techniques highlighted by study participants that warrant further study by researchers. Additionally, our findings highlight the need for investigations into the barriers hindering the transition of extensively researched countermeasures into practice. Lastly, our study enhances the understanding of the limitations of current automated systems from the perspective of expert malware analysts. These contributions suggest new research directions that could help address the challenges posed by evasive malware.
Write, Read, or Fix? Exploring Alternative Methods for Secure Development Studies
Kelsey R. Fulton, Colorado School of Mines; Joseph Lewis, University of Maryland; Nathan Malkin, New Jersey Institute of Technology; Michelle L. Mazurek, University of Maryland
When studying how software developers perform security tasks, researchers often ask participants to write code. These studies can be challenging because programming can be time-consuming and frustrating. This paper explores whether alternatives to code-writing can yield scientifically valid results while reducing participant stress. We conducted a remote study in which Python programmers completed two encryption tasks using an assigned library by either writing code from scratch, reading existing code and identifying issues, or fixing issues in existing code. We found that the read and fix conditions were less effective than the write condition in revealing security problems with APIs and their documentation, but still provided useful insights. Meanwhile, the read and especially fix conditions generally resulted in more positive participant experiences. Based on these findings, we make preliminary recommendations for how and when researchers might best use all three study design methods; we also recommend future work to further explore the uses and trade-offs of these approaches.
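As a hedged illustration of what a "fix" condition task might look like (the paper's actual tasks and assigned libraries are not reproduced here), consider repairing an insecure use of Python's cryptography library:

```python
# Hypothetical fix-condition task: the participant receives working but
# insecure code with a hard-coded secret key and must repair it.
from cryptography.fernet import Fernet

# Before the fix, the key was embedded in source, so anyone with the
# code could decrypt every message:
#   KEY = b"xPCq7dcbJYYVZ_IyGYpaW0bWXHyBqKYUBkxFnFNPHOg="

def encrypt_note(note: bytes, key: bytes) -> bytes:
    """After the fix: the caller supplies a key loaded from a secret store."""
    return Fernet(key).encrypt(note)

# Keys should be generated once with Fernet.generate_key() and kept
# outside the codebase (e.g., an environment variable or key vault).
```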
Evaluating Privacy Perceptions, Experience, and Behavior of Software Development Teams
Maxwell Prybylo and Sara Haghighi, University of Maine; Sai Teja Peddinti, Google; Sepideh Ghanavati, University of Maine
With the increase in the number of privacy regulations, small development teams are forced to make privacy decisions on their own. In this paper, we conduct a mixed-method survey study, including statistical and qualitative analysis, to evaluate the privacy perceptions, practices, and knowledge of members involved in various phases of the Software Development Life Cycle (SDLC). Our survey includes 362 participants from 23 countries, encompassing roles such as product managers, developers, and testers. Our results show diverse definitions of privacy across SDLC roles, emphasizing the need for a holistic privacy approach throughout the SDLC. We find that software teams, regardless of their region, are less familiar with privacy concepts (such as anonymization), relying on self-teaching and forums. Most participants are more familiar with GDPR and HIPAA than other regulations, with multi-jurisdictional compliance being their primary concern. Our results advocate the need for role-dependent solutions to address the privacy challenges, and we highlight research directions and educational takeaways to help improve privacy-aware SDLC.
11:00 am–11:30 am
Coffee and Tea Break
Salon E Foyer
11:30 am–12:30 pm
IoT and Privacy
Session Chair: Maximilian Golla, CISPA Helmholtz Center for Information Security
Privacy Communication Patterns for Domestic Robots
Maximiliane Windl, LMU Munich and Munich Center for Machine Learning (MCML); Jan Leusmann, LMU Munich; Albrecht Schmidt, LMU Munich and Munich Center for Machine Learning (MCML); Sebastian S. Feger, LMU Munich and Rosenheim Technical University of Applied Sciences; Sven Mayer, LMU Munich and Munich Center for Machine Learning (MCML)
IAPP SOUPS Privacy Award
Future domestic robots will become integral parts of our homes. They will have various sensors that continuously collect data and varying locomotion and interaction capabilities, enabling them to access all rooms and physically manipulate the environment. This raises many privacy concerns. We investigate how such concerns can be mitigated, using all possibilities enabled by the robot’s novel locomotion and interaction abilities. First, we found that privacy concerns increase with advanced locomotion and interaction capabilities through an online survey (N = 90). Second, we conducted three focus groups (N = 22) to construct 86 patterns to communicate the states of microphones, cameras, and the internet connectivity of domestic robots. Lastly, we conducted a large-scale online survey (N = 1720) to understand which patterns perform best regarding trust, privacy, understandability, notification qualities, and user preference. Our final set of communication patterns will guide developers and researchers to ensure a privacy-preserving future with domestic robots.
Exploring Expandable-Grid Designs to Make iOS App Privacy Labels More Usable
Shikun Zhang and Lily Klucinec, Carnegie Mellon University; Kyerra Norton, Washington University in St. Louis; Norman Sadeh and Lorrie Faith Cranor, Carnegie Mellon University
People value their privacy but often lack the time to read privacy policies. This issue is exacerbated in the context of mobile apps, given the variety of data they collect and limited screen space for disclosures. Privacy nutrition labels have been proposed to convey data practices to users succinctly, obviating the need for them to read a full privacy policy. In fall 2020, Apple introduced privacy labels for mobile apps, but research has shown that these labels are ineffective, partly due to their complexity, confusing terminology, and suboptimal information structure. We propose a new design for mobile app privacy labels that addresses information layout challenges by representing data collection and use in a color-coded, expandable grid format. We conducted a between-subjects user study with 200 Prolific participants to compare user performance when viewing our new label against the current iOS label. Our findings suggest that our design significantly improves users' ability to answer key privacy questions and reduces the time required for them to do so.
Privacy Requirements and Realities of Digital Public Goods
Geetika Gopi and Aadyaa Maddi, Carnegie Mellon University; Omkhar Arasaratnam, OpenSSF; Giulia Fanti, Carnegie Mellon University
In the international development community, the term “digital public goods” (DPGs) is used to describe open-source digital products (e.g., software, datasets) that aim to address the United Nations (UN) Sustainable Development Goals. DPGs are increasingly being used to deliver government services around the world (e.g., ID management, healthcare registration). Because DPGs may handle sensitive data, the UN has established user privacy as a first-order requirement for DPGs. The privacy risks of DPGs are currently managed in part by the DPG standard, which includes a prerequisite questionnaire with questions designed to evaluate a DPG’s privacy posture.
This study examines the effectiveness of the current DPG standard for ensuring adequate privacy protections. We present a systematic assessment of responses from DPGs regarding their protections of users’ privacy. We also present in-depth case studies from three widely used DPGs to identify privacy threats and compare these threats to their responses to the DPG standard. Our findings reveal serious limitations in the current DPG standard’s evaluation approach. We conclude by presenting preliminary recommendations and suggestions for strengthening the DPG standard as it relates to privacy. Additionally, we hope this study encourages more usable privacy research on communicating privacy, not only to end users but also to third-party adopters of user-facing technologies.
Well-intended but half-hearted: Hosts’ consideration of guests’ privacy using smart devices on rental properties
Sunyup Park, University of Maryland, College Park; Weijia He, Dartmouth College; Elmira Deldari, University of Maryland, Baltimore County; Pardis Emami-Naeini, Duke University; Danny Yuxing Huang, New York University; Jessica Vitak, University of Maryland, College Park; Yaxing Yao, Virginia Tech; Michael Zimmer, Marquette University
The increased use of smart home devices (SHDs) on short-term rental (STR) properties raises privacy concerns for guests. While previous literature identifies guests' privacy concerns and the need to negotiate guests' privacy preferences with hosts, there is a lack of research from the hosts' perspectives. This paper investigates if and how hosts consider guests' privacy when using their SHDs on their STRs, to understand hosts' willingness to accommodate guests' privacy concerns, a starting point for negotiation. We conducted online interviews with 15 STR hosts (e.g., Airbnb/Vrbo), finding that they generally use, manage, and disclose their SHDs in ways that protect guests' privacy. However, hosts' practices fell short of their intentions because of competing needs and goals (i.e., protecting their property versus protecting guests' privacy). Findings also highlight that hosts do not have proper support from the platforms on how to navigate these competing goals. Therefore, we discuss how to improve platforms' guidelines/policies to prevent and resolve conflicts with guests, as well as measures to increase engagement from both sides to set the ground for negotiation.
12:30 pm–2:00 pm
Monday Luncheon and Mentoring Tables
Salon F
2:00 pm–3:00 pm
Authentication and Authorization
Session Chair: Kent Seamons, Brigham Young University
Batman Hacked My Password: A Subtitle-Based Analysis of Password Depiction in Movies
Maike M. Raphael, Leibniz University Hannover; Aikaterini Kanta, University of Portsmouth; Rico Seebonn and Markus Dürmuth, Leibniz University Hannover; Camille Cobb, University of Illinois Urbana-Champaign
Password security is and will likely remain an issue that non-experts have to deal with. It is therefore important that they understand the criteria for secure passwords and the characteristics of good password behavior. Related literature indicates that people often acquire knowledge from media such as movies, which influences their perceptions about cybersecurity including their mindset about passwords. We contribute a novel approach based on subtitles and an analysis of the depiction of passwords and password behavior in movies. We scanned subtitles of 97,709 movies from 1960 to 2022 for password appearance and analyzed resulting scenes from 2,851 movies using mixed methods to show what people could learn from watching movies. Selected films were viewed for an in-depth analysis.
Among other things, we find that passwords are often portrayed as weak and easy to guess, but there are different contexts of use with very strong passwords. Password hacking is frequently depicted as unrealistically powerful, potentially leading to a sense of helplessness and futility of security efforts. In contrast, password guessing is shown as quite realistic and with a lower (but still overestimated) success rate. There appears to be a lack of best practices, as password managers and multi-factor authentication are practically non-existent.
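The subtitle-scanning approach is straightforward to picture. A hypothetical sketch of the first filtering pass might look like the following; the file layout and keyword list are assumptions for illustration, not the authors' pipeline:

```python
# Hypothetical first pass: flag subtitle files that mention passwords.
import re
from pathlib import Path

KEYWORDS = re.compile(r"\b(password|passcode|passphrase)\b", re.IGNORECASE)

def movies_mentioning_passwords(subtitle_dir: str) -> list[str]:
    """Return the names of .srt files whose text matches a keyword."""
    hits = []
    for srt in Path(subtitle_dir).glob("*.srt"):
        text = srt.read_text(encoding="utf-8", errors="ignore")
        if KEYWORDS.search(text):
            hits.append(srt.stem)
    return hits
```

Scenes surfaced by such a keyword pass would still need manual coding, which is where the paper's mixed-methods analysis comes in.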
Understanding How People Share Passwords
Phoebe Moh and Andrew Yang, University of Maryland; Nathan Malkin, New Jersey Institute of Technology; Michelle L. Mazurek, University of Maryland
Many systems are built around the assumption that one account corresponds to one user. Likewise, password creation and management is often studied in the context of single-user accounts. However, account and credential sharing is commonplace, and password generation has not been thoroughly investigated in accounts shared among multiple users. We examine account sharing behaviors, as well as strategies and motivations for creating shared passwords, through a census-representative survey of U.S. users (n = 300). We found that password creation for shared accounts tends to be an individual, rather than collaborative, process. While users tend to have broadly similar password creation strategies and goals for both their personal and shared accounts, they sometimes make security concessions in order to improve password usability and account accessibility in shared accounts. Password reuse is common among accounts collectively shared within a group, and almost a third of our participants either directly reuse or reuse a variant of a personal account password on a shared account. Based on our findings, we make recommendations for developers to facilitate safe sharing practices.
Digital Nudges for Access Reviews: Guiding Deciders to Revoke Excessive Authorizations
Thomas Baumer, Nexis GmbH; Tobias Reittinger, Universität Regensburg; Sascha Kern, Nexis GmbH; Günther Pernul, Universität Regensburg
Organizations tend to over-authorize their members, ensuring smooth operations. However, these excessive authorizations offer a substantial attack surface and are the reason regulative authorities demand periodic checks of authorizations. Thus, organizations conduct time-consuming and costly access reviews in which human decision-makers verify these authorizations. Still, these deciders only marginally revoke authorizations due to the poor usability of access reviews. In this work, we apply digital nudges to guide human deciders during access reviews to tackle this issue and improve security. In detail, we formalize the access review problem, interview experts (n=10) to identify several nudges helpful for access reviews, and conduct a user study (n=102) for the Choice Defaults Nudge. We show significant behavior changes in revoking authorizations, along with time savings and reduced stress. However, we also found that improving the overall quality requires more advanced means. Finally, we discuss design implications for access reviews with digital nudges.
Can Johnny be a whistleblower? A qualitative user study of a social authentication Signal extension in an adversarial scenario
Maximilian Häring and Julia Angelika Grohs, University of Bonn; Eva Tiefenau, Fraunhofer FKIE; Matthew Smith, University of Bonn and Fraunhofer FKIE; Christian Tiefenau, University of Bonn
To achieve a higher level of protection against person-in-the-middle attacks when using common chat apps with end-to-end encryption, each chat partner can verify the other party's key material via an out-of-band channel. This procedure of verifying the key material is called an authentication ceremony (AC) and can consist of, e.g., comparing textual representations, scanning QR codes, or using third-party social accounts. In the latter, a user can establish trust by proving that they have access to a particular social media account. A previous study has shown that the usability of such social authentication can be very good; however, that study focused exclusively on secure cases, i.e., the authentication ceremonies were never attacked. To evaluate whether social authentication remains usable and secure when attacked, we implemented an interface for a recently published social authentication protocol called SOAP. We developed a study design to compare authentication ceremonies, conducted a qualitative user study with an attack scenario, and compared social authentication to textual and QR code authentication ceremonies. The participants took on the role of whistleblowers and were tasked with verifying the identities of journalists. In a pilot study, three out of nine participants were caught by the government due to SOAP, but with an improved interface, this number was reduced to one out of 18 participants. Our results indicate that social authentication can lead to more secure behavior compared to more traditional authentication ceremonies and that the scenario motivated participants to reason about their decisions.
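For readers unfamiliar with authentication ceremonies, the textual variant boils down to both parties deriving the same short fingerprint from the conversation's key material and comparing it out-of-band. Below is a minimal sketch of that idea, modeled loosely on safety-number-style fingerprints; it is not the SOAP protocol itself, and the chunking scheme is an illustrative assumption:

```python
# Illustrative textual authentication ceremony: both chat partners
# compute this fingerprint and compare it over another channel.
import hashlib

def fingerprint(key_a: bytes, key_b: bytes) -> str:
    """Order-independent, so both parties derive the identical string."""
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).hexdigest()
    # Chunk the first 30 hex characters for easier verbal comparison.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))
```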
3:00 pm–3:30 pm
Lightning Talks
3:30 pm–4:00 pm
Coffee and Tea Break
Salon E Foyer
4:00 pm–5:00 pm
SOUPS Retrospective Panel
Reflecting on Twenty Years of Usable Privacy and Security
Moderator: Patrick Gage Kelley, Google
Panelists: Lorrie Faith Cranor, Carnegie Mellon University; Simson Garfinkel, BasisTech, LLC and Harvard University; Robert Biddle, Carleton University; Mary Ellen Zurko, MIT Lincoln Laboratory; Katharina Krombholz, CISPA Helmholtz Center for Information Security
Lorrie Faith Cranor, Carnegie Mellon University
Lorrie Faith Cranor (lorrie.cranor.org) is the Director and Bosch Distinguished Professor in Security and Privacy Technologies of CyLab and the FORE Systems University Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University. She directs the CyLab Usable Privacy and Security Laboratory (CUPS) and co-directs the Privacy Engineering master's program. In 2016 she served as Chief Technologist at the US Federal Trade Commission. She is also a co-founder of Wombat Security Technologies, Inc., a security awareness training company that was acquired by Proofpoint. She founded the Symposium On Usable Privacy and Security (SOUPS) and co-founded the Conference on Privacy Engineering Practice and Respect (PEPR). She has served on a number of boards, including the Electronic Frontier Foundation Board of Directors, the Electronic Privacy Information Center Advisory Board, the Computing Research Association Board of Directors, and the Aspen Institute Cybersecurity Group. She was elected to the ACM CHI Academy and named a Fellow of IEEE, ACM, and AAAS. She was previously a researcher at AT&T Labs Research. She holds a doctorate in Engineering and Policy from Washington University in St. Louis. In 2012–2013 she spent her sabbatical as a fellow in the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University, where she worked on fiber arts projects, including a quilted visualization of bad passwords that was featured in Science Magazine as well as a bad passwords dress that she frequently wears when talking about her research. She plays soccer, walks to work, sews her own clothing with pockets, and tries not to embarrass her three young adult children.
Simson Garfinkel, BasisTech, LLC and Harvard University
Dr. Simson Garfinkel researches and writes at the intersection of AI, privacy, and digital forensics. He is a fellow of the AAAS, the ACM, and the IEEE. He earned his PhD in Computer Science at MIT and a Master of Science in Journalism at Columbia University.
Robert Biddle, Carleton University
Robert Biddle is Professor of Computer Science and Cognitive Science at Carleton University in Ottawa, Canada. His research has always concerned human factors in Computer Science, drawing on principles and methods from cognitive and social sciences. The topics addressed have ranged from programming language design, to software development, and especially cybersecurity. His undergraduate studies were in Mathematics, Computer Science, and Education, and his Masters and Doctoral studies were in Computer Science. He is a dual citizen of Canada and New Zealand, and his education and academic career have been in both countries. He has awards for research, teaching, and graduate mentorship. Robert is a Fellow of the New Zealand Computer Society, and a British Commonwealth Scholar.
Mary Ellen Zurko, MIT Lincoln Laboratory
Mary Ellen Zurko is a technical staff member at the Massachusetts Institute of Technology (MIT) Lincoln Laboratory. She has worked in research, product prototyping and development, and has more than 20 patents. She defined the field of user-centered security in 1996, and has worked in cybersecurity for over 35 years. She was the security architect of one of IBM’s earliest clouds, and a founding member of NASEM’s Forum on Cyber Resilience. She serves as a Distinguished Expert for NSA’s Best Scientific Cybersecurity Research Paper competition, and is on the NASEM committee identifying the key Cyber Hard Problems for our nation. Her research interests include unusable security for attackers, Zero Trust architectures for government systems, security development and code security, authorization policies, high-assurance virtual machine monitors, the web, and PKI. Zurko received a S.B. and S.M. in computer science from MIT. She has been the only “Mary Ellen Zurko” on the web for over 25 years.
5:15 pm–6:30 pm
Poster Session and Reception
Salon ABF
Check out the cool new ideas and the latest preliminary research on display at the SOUPS Poster Session and Reception. View the list of accepted posters.
Tuesday, August 13
9:00 am–10:30 am
Online Community
Session Chair: Sauvik Das, Carnegie Mellon University
How Entertainment Journalists Manage Online Hate and Harassment
Noel Warford, Oberlin College; Nicholas Farber and Michelle L. Mazurek, University of Maryland
While most prior literature on journalists and digital safety focuses on political journalists, entertainment journalists (who cover video games, TV, movies, etc.) also experience severe digital-safety threats in the form of persistent harassment. In the decade since the #GamerGate harassment campaign against video games journalists and developers, entertainment journalists have, by necessity, developed strategies to manage this harassment. However, the impact of harassment and the efficacy of these strategies are understudied. In this work, we interviewed nine entertainment journalists to understand their experiences with online hate and harassment and their strategies for managing it. These journalists see harassment as a difficult and inevitable part of their job; they rely primarily on external support rather than technical solutions or platform affordances. These findings suggest much more support is needed to reduce the individual burden of managing harassment.
'Custodian of Online Communities': How Moderator Mutual Support in Communities Help Fight Hate and Harassment Online
Madiha Tabassum, Northeastern University; Alana Mackey, Wellesley College; Ada Lerner, Northeastern University
Volunteer moderators play a crucial role in safeguarding online communities, actively combating hate, harassment, and inappropriate content while enforcing community standards. Prior studies have examined moderation tools and practices, moderation challenges, and the emotional labor and burnout of volunteer moderators. However, researchers have yet to delve into the ways moderators support one another in combating hate and harassment within the communities they moderate through participation in meta-communities of moderators. To address this gap, we conducted a qualitative content analysis of 115 hate and harassment-related threads from r/ModSupport and r/modhelp, two major subreddit forums where moderators seek this type of mutual support. Our study reveals that moderators seek assistance on topics ranging from fighting attacks to understanding Reddit policies and rules to just venting their frustration. Other moderators respond to these requests by validating their frustration and challenges, showing emotional support, and providing information and tangible resources to help with their situation. Based on these findings, we share the implications of our work in facilitating platform and peer support for online volunteer moderators on Reddit and similar platforms.
Designing the Informing Process with Streamers and Bystanders in Live Streaming
Yanlai Wu, University of Central Florida; Xinning Gui, The Pennsylvania State University; Yuhan Luo, City University of Hong Kong; Yao Li, University of Central Florida
The ubiquity of synchronous information disclosure technologies (e.g., live streaming) has heightened the risk of bystanders being unknowingly captured. While prior work has largely focused on solutions aimed only at informing the key stakeholder (bystanders), there remains a gap in understanding what device owners and bystanders mutually expect of the informing process, which is critical to ensure successful informing. To address this gap, we utilized live streaming as a case study and conducted a design ideation study with 21 participants, including both streamers and bystanders. Our focus was to understand streamers' and bystanders' needs for informing regarding bystander privacy at the ideation stage and derive design principles. Participants' design ideas reflected varied and nuanced privacy concerns, from which we identified key design principles for future design.
"It was honestly just gambling": Investigating the Experiences of Teenage Cryptocurrency Users on Reddit
Elijah Bouma-Sims, Hiba Hassan, Alexandra Nisenoff, Lorrie Faith Cranor, and Nicolas Christin, Carnegie Mellon University
Despite fears that minors may use unregulated cryptocurrency exchanges to gain access to risky investments, little is known about the experience of underage cryptocurrency users. To learn how teenagers access digital assets and the risks they encounter while using them, we conducted a multi-stage, inductive content analysis of 1,676 posts made to teenage communities on Reddit containing keywords related to cryptocurrency. We identified 1,409 (84.0%) posts that meaningfully discussed cryptocurrency, finding that teenagers most often use accounts in their parents' names to purchase cryptocurrencies, presumably to avoid age restrictions. Teenagers appear motivated to invest by the potential for relatively large, short-term profits, but some discussed a sense of entertainment, ideological motivation, or an interest in technology. We identified many of the same harms adult users of digital assets encountered, including investment loss, victimization by fraud, and loss of keys. We discuss the implications of our results in the context of the ongoing debates over cryptocurrency regulation.
"I can say I'm John Travolta...but I'm not John Travolta": Investigating the Impact of Changes to Social Media Verification Policies on User Perceptions of Verified Accounts
Carson Powers, Nickolas Gravel, and Christopher Pellegrini, Tufts University; Micah Sherr, Georgetown University; Michelle L. Mazurek, University of Maryland; Daniel Votipka, Tufts University
Until recently, almost all social media platforms verified the identities behind notable accounts. Prior work showed users understood this process. However, Twitter/X's switch to an open, less rigorous verification process represented a significant policy shift. We conduct a U.S. Census-representative survey to investigate how this and subsequent verification changes across social media impact users' verification perceptions. We find most users generally recognize the changes to Twitter/X's policy, though many still believe Twitter/X verifies account holders' true identities. However, users are less aware of subsequent Facebook verification changes. We also find platforms' verification differences do not impact user perceptions of posted content credibility.
Finally, we investigate hypothetical verification policies. We find participants are more likely to perceive posts from verified accounts as credible when only notable accounts are eligible and government document review is required. Payment did not affect credibility decisions, but participants felt strongly that payment for verification was unacceptable.
"Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery
Natalie Grace Brigham, Miranda Wei, and Tadayoshi Kohno, University of Washington; Elissa M. Redmiles, Georgetown University
AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media. We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them, including deepfakes portraying sexual acts. Respondents indicated strong opposition to creating and, even more so, sharing non-consensually created synthetic content, especially if that content depicts a sexual act. However, seeking out such content appeared more acceptable to some respondents. Attitudes around acceptability varied further based on the hypothetical creator’s relationship to the respondent, the respondent’s gender, and their attitudes towards sexual consent. This study provides initial insight into public perspectives of a growing threat and highlights the need for further research to inform social norms as well as ongoing policy conversations and technical developments in generative AI.
10:30 am–11:00 am
Lightning Talks
11:00 am–11:30 am
Coffee and Tea Break
Salon E Foyer
11:30 am–12:30 pm
Mobile Security
Session Chair: Yaxing Yao, Virginia Tech
What Drives SMiShing Susceptibility? A U.S. Interview Study of How and Why Mobile Phone Users Judge Text Messages to be Real or Fake
Sarah Tabassum, Cori Faklaris, and Heather Richter Lipford, University of North Carolina at Charlotte
In today's digital world, SMS phishing, also known as SMiShing, poses a serious threat to mobile users. However, it is unclear whether existing research on phishing can be applied to SMiShing. Our study aims to fill this gap by conducting interviews with 29 mobile phone users in a major southeastern U.S. city. We collected data on participants' experiences with suspicious SMS messages: the cues they pay attention to, how they verify and report such messages, and the role of prior training in distinguishing real messages from scams. We also collected data on how specific details and context make a legitimate SMS seem genuine. Our findings indicate that participants focus more on the content, format, and links in SMS messages than on the sender's short code, phone number, or email address. We suggest design changes to enhance user awareness and resilience against SMS phishing. This research provides practical knowledge to mitigate cyber threats linked to SMiShing. To the best of our knowledge, this is the first interview study on SMiShing susceptibility.
"I would not install an app with this label": Privacy Label Impact on Risk Perception and Willingness to Install iOS Apps
David G. Balash, University of Richmond; Mir Masood Ali and Chris Kanich, University of Illinois Chicago; Adam J. Aviv, The George Washington University
Starting December 2020, all new and updated iOS apps must display app-based privacy labels. As the first large-scale implementation of privacy nutrition labels in a real-world setting, we aim to understand how these labels affect perceptions of app behavior. Replicating the methodology of Emami-Naeini et al. [IEEE S&P '21] in the space of IoT privacy nutrition labels, we conducted an online study in January 2023 on Prolific with n=1,505 participants to investigate the impact of privacy labels on users' risk perception and willingness to install apps. We found that many privacy label attributes raise participants' risk perception and lower their willingness to install an app. For example, when the app privacy label indicates that financial info will be collected and linked to their identities, participants were 15 times more likely to report increased privacy and security risks associated with the app. Likewise, when a label shows that sensitive info will be collected and used for cross-app/website tracking, participants were 304 times more likely to report a decrease in their willingness to install. However, participants had difficulty understanding privacy label jargon such as "diagnostics," "identifiers," "track," and "linked." We provide recommendations for enhancing privacy label transparency, underscore the importance of label clarity and accuracy, and discuss how labels can impact consumer choice when suitable alternative apps are available.
"Say I'm in public...I don't want my nudes to pop up." User Threat Models for Using Vault Applications
Chris Geeng, New York University; Natalie Chen, Northeastern University; Kieron Ivy Turk, University of Cambridge; Jevan Hutson, University of Washington School of Law; Damon McCoy, New York University
Vault apps and hidden albums are tools used to encrypt and hide sensitive photos, videos, and other files. While security researchers have analyzed how technically secure they are, there is little research to understand how and why users use vault apps, and whether these tools meet their needs. To understand user threat models for vault apps, we conducted semi-structured interviews (N = 18) with U.S. adult vault app users. We find our participants store intimate media, non-sexual body images, photos of partying and drinking, identification documents, and other sensitive files. Participants primarily used vault apps to prevent accidental content exposure from shoulder surfing or phone sharing, whether in public or with and around close ties. Vault apps were not used to prevent a technically proficient adversary from accessing their files. We find that vault apps prevent context collapse when sharing devices, similar to how privacy settings prevent context collapse on social media. We conclude with recommendations for research aligning with user threat models, and design recommendations for vault apps.
“I do (not) need that Feature!” – Understanding Users’ Awareness and Control of Privacy Permissions on Android Smartphones
Sarah Prange, University of the Bundeswehr Munich; Pascal Knierim, University of Innsbruck; Gabriel Knoll, LMU Munich; Felix Dietz, University of the Bundeswehr Munich; Alexander De Luca, Google Munich; Florian Alt, University of the Bundeswehr Munich
We present the results of the first field study (N = 132) investigating users’ (1) awareness of Android privacy permissions granted to installed apps and (2) control behavior over these permissions. Our research is motivated by many smartphone features and apps requiring access to personal data. While Android provides privacy permission management mechanisms to control access to this data, its usage is not yet well understood. To this end, we built and deployed an Android application on participants’ smartphones, acquiring data on actual privacy permission states of installed apps, monitoring permission changes, and assessing reasons for changes using experience sampling. The results of our study show that users often conduct multiple revocations in short time frames, and revocations primarily affect rarely used apps or permissions non-essential for apps’ core functionality. Our findings can inform future (proactive) privacy control mechanisms and help target opportune moments for supporting privacy control.
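For readers curious what "privacy permission states" look like in practice, runtime grants can also be inspected off-device over adb. The study used an on-device app instead, so the sketch below is only a hypothetical approximation, and the dumpsys output format it parses is an assumption that varies across Android versions:

```python
# Hypothetical adb-based inspection of an app's runtime permission grants.
import re
import subprocess

def runtime_permissions(package: str) -> dict[str, bool]:
    """Parse `adb shell dumpsys package <pkg>` for granted=... entries."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    matches = re.findall(
        r"(android\.permission\.\w+): granted=(true|false)", out
    )
    return {perm: state == "true" for perm, state in matches}
```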
12:30 pm–2:00 pm
Tuesday Luncheon and Speed Mentoring Tables
2:00 pm–3:00 pm
Security in the Workplace
Session Chair: Daniel Zappala, Brigham Young University
Threat modeling state of practice in Dutch organizations
Stef Verreydt, Koen Yskout, Laurens Sion, and Wouter Joosen, DistriNet, KU Leuven
Threat modeling is a key technique to apply a security by design mindset, allowing the systematic identification of security and privacy threats based on design-level abstractions of a system. Despite threat modeling being a best practice, there are few studies analyzing its application in practice. This paper investigates the state of practice of threat modeling in large Dutch organizations through semi-structured interviews.
Compared to related work, which mainly addresses the execution of threat modeling activities, our findings reveal multiple human and organizational factors that significantly impact the embedding of threat modeling within organizations. First, while threat modeling is appreciated for its ability to uncover threats, it is also recognized as an important activity for raising security awareness among developers. Second, leveraging developers' intrinsic motivation is considered more important than enforcing threat modeling as a compliance requirement. Third, organizations face numerous challenges related to threat modeling, such as managing the scope, obtaining relevant architectural documentation, scaling, and systematically following up on the results. Organizations can use these findings to assess their current threat modeling activities, and help inform decisions to start, extend, or reorient them. Furthermore, threat modeling facilitators and researchers may base future efforts on the challenges identified in this study.
What Motivates and Discourages Employees in Phishing Interventions: An Exploration of Expectancy-Value Theory
Xiaowei Chen, Sophie Doublet, Anastasia Sergeeva, Gabriele Lenzini, and Vincent Koenig, University of Luxembourg; Verena Distler, University of the Bundeswehr Munich
Organizations adopt a combination of measures to defend against phishing attacks that pass through technical filters. However, employees’ engagement with these countermeasures often does not meet security experts’ expectations. To explore what motivates and discourages employees from engaging with user-oriented phishing interventions, we conducted seven focus groups with 34 employees at a European university, applying the Expectancy-Value Theory. Our study revealed a spectrum of factors influencing employees’ engagement. The perceived value of phishing interventions influences employees’ participation. Although the expectation of mitigation and fear of consequences can motivate employees, lack of feedback and communication, worries, and privacy concerns discourage them from reporting phishing emails. We found that the expectancy-value framework provides a unique lens for explaining how organizational culture, social roles, and the influence of colleagues and supervisors foster proactive responses to phishing attacks. We documented a range of improvements proposed by employees to phishing interventions. Our findings underscore the importance of enhancing utility value, prioritizing positive user experiences, and nurturing employees’ motivations to engage them with phishing interventions.
Beyond the Office Walls: Understanding Security and Shadow Security Behaviours in a Remote Work Context
Sarah Alromaih, University of Oxford and King Abdulaziz City for Science and Technology; Ivan Flechais, University of Oxford; George Chalhoub, University of Oxford and University College London
Organisational security research has primarily focused on user security behaviour within workplace boundaries, examining behaviour that complies with security policies and behaviour that does not. Here, researchers identified shadow security behaviour, where security-conscious users apply their own security practices that are not in compliance with official security policy. Driven by the growth in remote work and the increasing diversity of remote working arrangements, our qualitative research study aims to investigate the nature of security behaviours within remote work settings. Using Grounded Theory, we interviewed 20 remote workers to explore security-related practices within remote work. Our findings describe a model of personal security and how this interacts with an organisational security model in remote settings. We model how remote workers use an appraisal process to relate the personal and organisational security models, driving their security-related behaviours. Our model explains how different levels of alignment between the personal and organisational models can drive compliance, non-compliance, and shadow security behaviour in remote work settings. We discuss the implications of our findings for remote work security and highlight the importance of maintaining informal security communications for remote workers, homogenising security interactions, and adopting user experience design for remote work solutions.
Who is the IT Department Anyway: An Evaluative Case Study of Shadow IT Mindsets Among Corporate Employees
Jan-Philip van Acken and Floris Jansen, Utrecht University; Slinger Jansen, Utrecht University and LUT University; Katsiaryna Labunets, Utrecht University
This study aimed to explore the factors influencing employees to deploy what can be classified as shadow IT in a corporate context. Shadow IT denotes unofficial, unsanctioned forms of IT. We employed a mixed-methods approach, consisting of a survey and follow-up interviews with employees from a large professional services company. The survey yielded 450 responses, uncovering different types of shadow IT within the company. The follow-up interviews with 32 employees aimed to uncover their perceptions of shadow IT, related risks, and their attitudes towards shadow IT usage. The survey and interviews revealed various types of shadow IT and showed a dichotomy of risk-averse and risk-tolerant mindsets. We found that participants employed a combination of these mindsets. Despite being aware of significant risks, employees often fail to act upon this awareness, leading to an awareness-action gap. Closing this gap can be facilitated through factors that change these mindsets, such as the consequences of previous shadow IT choices, risk discussions, or training.
3:00 pm–3:30 pm
Coffee and Tea Break
Salon E Foyer
3:30 pm–4:45 pm
Social Aspects of Security
Session Chair: Cori Faklaris, University of North Carolina at Charlotte
Of Mothers and Managers – The Effect of Videos Depicting Gender Stereotypes on Women and Men in the Security and Privacy Field
Nina Gerber and Alina Stöver, Technical University of Darmstadt; Peter Mayer, University of Southern Denmark
Gender imbalances are prevalent in computer science and the security and privacy (S&P) field in particular, giving rise to gender stereotypes. The existence of such stereotypes might elicit the stereotype threat effect well known from research in math settings: mere exposure to stereotypes can decrease performance in and attitudes towards specific fields. In this work, we investigate whether the stereotype threat effect influences women and men in the S&P field. We conducted an online experiment with multiple groups to explore whether videos that depict and counteract gender stereotypes influence S&P attitudes and intentions (RQ1), and (self-assessed) S&P knowledge (RQ2). Overall, we find little evidence for the stereotype threat effect, but our results show that women in the condition that actively counteracted gender stereotypes report a higher interest in preventing hacker access to their devices than women in the stereotype conditions. In addition, we find that men score higher than women in a variety of self-report measures, except for security and privacy concerns. These results indicate that stereotypes might need to be addressed early on to prevent stereotypes from becoming social norms and a self-fulfilling prophecy of gender imbalance in the S&P field.
Towards Bridging the Research-Practice Gap: Understanding Researcher-Practitioner Interactions and Challenges in Human-Centered Cybersecurity
Julie M. Haney, Clyburn Cunningham IV, and Susanne M. Furman, National Institute of Standards and Technology
Human-centered cybersecurity (HCC) researchers seek to improve people's experiences with cybersecurity. However, a disconnect between researchers and practitioners, the research-practice gap, can prevent research from being applied in practice. While this gap has been studied in multiple fields, it is unclear if findings apply to HCC, which may have unique challenges due to the nature of cybersecurity. Additionally, most gap research has focused on research outputs, largely ignoring potential benefits of research-practice engagement throughout the entire research life cycle. To address these gaps, we conducted a survey of 133 HCC researchers. We found that participants most often engage with practitioners during activities at the beginning and end of the research life cycle, even though they may see the importance of engagement throughout. This inconsistency may be attributed to various challenges, including practitioner and researcher constraints and motivations. We provide suggestions on how to facilitate meaningful researcher-practitioner interactions towards ensuring HCC research evidence is relevant, available, and actionable in practice.
Comparing Teacher and Creator Perspectives on the Design of Cybersecurity and Privacy Educational Resources
Joy McLeod, Carleton University; Leah Zhang-Kennedy, University of Waterloo; Elizabeth Stobert, Carleton University
Various educational resources have been developed to teach children about cybersecurity and privacy. Our qualitative interview study with 15 middle school teachers and 8 creators of cybersecurity educational resources compares and analyzes the design considerations of cybersecurity resource creators with the resource selection strategies and classroom practices of teachers in their delivery of cybersecurity lessons to middle school students. Our thematic analysis showed that teachers predominantly used free, low-tech, modular, and modifiable resources such as lesson plans, short educational videos, and segmented learning modules to fit their classroom teaching needs. The topics focus on helping students develop critical thinking skills rather than technical knowledge. Creators, on the other hand, focused their resource design considerations primarily on cybersecurity trends and students' media learning preferences, such as developing games and other types of interactive content to increase engagement. We highlight areas of misalignment between creators' design considerations and the ways teachers access and deliver cybersecurity and privacy lessons to students.
Negative Effects of Social Triggers on User Security and Privacy Behaviors
Lachlan Moore, Waseda University and NICT; Tatsuya Mori, Waseda University, NICT, and RIKEN AIP; Ayako A. Hasegawa, NICT
People make decisions while being influenced by those around them. Previous studies have shown that users often adopt security practices on the basis of advice from others and have proposed collaborative and community-based approaches to enhance user security behaviors. In this paper, we focused on the negative effects of social triggers and investigated whether risky user behaviors are socially triggered. We conducted an online survey to understand the triggers for risky user behaviors and the practices of sharing the behaviors. We found that a non-negligible percentage of participants experienced social triggers before engaging in risky behaviors. We also show that socially triggered risky behaviors are more likely to be socially shared, i.e., there are negative chains of risky behaviors. Our findings suggest that more efforts are needed to reduce negative social effects, and we propose specific approaches to accomplish this.
Beyond Fear and Frustration - Towards a Holistic Understanding of Emotions in Cybersecurity
Alexandra von Preuschen and Monika C. Schuhmacher, Justus-Liebig-University Gießen; Verena Zimmermann, ETH Zurich
Awarded Distinguished Paper!
Employees play a pivotal role in organizational cybersecurity, making it critical to understand the human factor in this context. While much is known about cognitive factors, less is known about the role of emotions. Through a qualitative survey (N = 112) and in-depth interviews (N = 26), we holistically investigate the causes, types, and consequences of emotions in the context of cybersecurity. We demonstrate the existence of diverse, even conflicting emotions at the same time and classify these emotions based on the circumplex model of affect. Furthermore, our findings reveal that essential causes for cybersecurity-related emotions include individual, interpersonal, and organizational factors. We also discover various cybersecurity-relevant consequences across behavioral, cognitive, and social dimensions. Based on our findings, we provide a framework that unravels the complexity, impact, and spill-over effects of cybersecurity-related emotions. Finally, we provide recommendations for promoting secure behavior with a human-centered lens, mitigating negative tendencies, and safeguarding users from unfavorable spill-over effects.
4:45 pm–5:00 pm
Closing Remarks and Poster Awards
General Chairs: Patrick Gage Kelley, Google, and Apu Kapadia, Indiana University Bloomington