SOUPS 2024 Poster Session

The following posters will be presented during the Poster Session and Reception on Monday, August 12, from 5:15 pm–6:30 pm. Posters and their abstracts are available for download below to registered attendees now and to everyone beginning Monday, August 12. Copyright to the individual works is retained by the author[s].

Unpublished Work

Posters of unpublished research.

Smart Tools, Smarter Concerns: Navigating Privacy Perceptions in Academic Settings

Yimeng Ma, Weihan Xu, Hongyi Yin, Yuxuan Zhang, and Pardis Emami-Naeini, Duke University

Available Media

This study examines the use of IoT-enabled and AI-enabled smart tools in academic settings and evaluates the privacy concerns of students, faculty, and staff toward these technologies. Through a comprehensive survey of 22 college students, faculty, and staff members, the research identifies significant usage patterns and preferences, revealing that learning management systems are most commonly used, followed by online assessment platforms and personalized learning apps. The study highlights a general comfort with privacy measures, though notable differences exist between faculty and students, with faculty expressing greater concerns, especially regarding the integration of AI technologies. By comparing attitudes in scenarios with and without AI integration, the findings suggest that while all users value robust privacy protections, specific concerns vary significantly across academic roles. The study's insights into privacy perceptions and technological acceptance contribute to the discourse on digital ethics and are essential for developing policies that balance technological benefits with privacy protections.

Can You See It? - NOP! A Practitioners Study

Diego Soi, Leonardo Regano, Davide Maiorca, and Giorgio Giacinto, University of Cagliari - Dept. of Electrical and Electronic Engineering; Harel Berger, Georgetown University

Available Media

This study delves into how human intuition detects evasion attacks. Through a proposed online survey of industry and academic practitioners, we will analyze how humans detect evasive malware samples, from simple to complex tactics. We aim to emphasize the need for improved malware detection and training for future cybersecurity experts, to enhance malware detection in human-computer defense systems.

The Onion Unpeeled: User Perceptions vs. Realities of Tor's Security and Privacy Properties

Harel Berger and Tianjian Hu, Georgetown University; Adam J. Aviv, The George Washington University; Micah Sherr, Georgetown University

Available Media

In the face of growing concerns about online privacy and security, this research delves into the perceptions versus the realities of the Tor network's privacy and security features, as experienced by its users. Through an online survey hosted on an onion site (making it accessible only to Tor users), we seek to uncover the nuances of how users engage with Tor, their understanding of its privacy protections, and their general awareness of online security principles. This exploration not only aims to bridge the knowledge gap regarding user expectations and the technical functionalities of Tor but also to shed light on potential areas for improvement in user education regarding Tor. The insights gained from this study are intended to contribute to the enhancement of Tor's utility as a tool for privacy and security, fostering a more informed and empowered user community.

Mapping Cybersecurity Practices and Mental Models in Danish SMEs: A Comprehensive Study using Focus Groups

Judith Kankam-Boateng, Marco Peressotti, and Peter Mayer, University of Southern Denmark

Available Media

In response to rising cyber threats, this paper evaluates the cybersecurity landscape in Danish SMEs within the defense and IT sectors, mapping perspectives from policymakers, business associations, and SMEs. By integrating quantitative surveys, qualitative focus groups, in-depth interviews, and game-based simulations, we aim to understand and address the vulnerabilities in these critical sectors. The objective is to evaluate both national and international cybersecurity policies and practices, proposing actionable strategies that make Danish SMEs resilient and contribute significantly to national security. The findings from this study are intended to contribute to the discourse on usable security practices and their implications for business continuity and policy formulation within the SME context.

A First Look into the Profile Lock Feature on Facebook

Mashiyat Mahjabin Eshita, Ishmam Bin Rofi, Shammi Akhter Shiba, and S M Taiabul Haque, Brac University

Available Media

The profile lock feature on Facebook, which gives users an efficient way to restrict access to their content on the platform, is a region-specific, optional feature available only in a few countries, mostly in the Global South. In this work, we conducted semi-structured interviews with 21 users from Bangladesh to understand their motivations, opinions, and practices regarding activating the profile lock feature on Facebook. We found that this feature gives our participants an inflated sense of security and protection, and that their adoption decision is entangled with religious and cultural arguments. While prior negative experiences motivate them to lock their profiles, they also utilize this feature to develop a nuanced mechanism for managing multiple social media platforms. This work generates novel insights regarding the privacy protection mechanisms of an understudied population in the Global South.

Developing Textual Descriptions of PETs for Ad Tracking and Analytics

Lu Xian, Song Mi Lee-Kan, Jane Im, and Florian Schaub, University of Michigan

Available Media

Describing Privacy Enhancing Technologies (PETs) to the general public is challenging but essential to convey the privacy protections they provide. Existing research has explored the explanation of differential privacy in health contexts. Our study adapts well-performing textual descriptions of local differential privacy from prior work to a new context and broadens the investigation to the descriptions of additional PETs. Specifically, we develop user-centric textual descriptions for popular PETs in ad tracking and analytics, including local differential privacy, federated learning with and without local differential privacy, and Google's Topics. We examine the applicability of previous findings to these expanded contexts, and evaluate the PET descriptions with quantitative and qualitative survey data (n=306). We find that adapting the process- and implications-focused approach to the ad tracking and analytics context achieved similar effects in facilitating user understanding compared to health contexts, and that our descriptions developed with this process+implications approach for the additional, understudied PETs help users understand PETs' processes. We also find that incorporating an implications statement into PET descriptions did not hurt user comprehension but also did not achieve a significant positive effect, which contrasts with prior findings in health contexts. We note that the use of technical terms, as well as the machine learning aspect of PETs, even without delving into specifics, led to confusion for some respondents. Based on our findings, we offer recommendations and insights for crafting effective user-centric descriptions of PETs.

Understanding De-identification Guidance and Practices for Research Data

Wentao Guo and Aditya Kishore, University of Maryland; Paige Pepitone, NORC at the University of Chicago; Adam Aviv, The George Washington University; Michelle Mazurek, University of Maryland

Available Media

Publishing de-identified research data is beneficial for transparency and the advancement of knowledge, but it creates the risk that research subjects could be re-identified, exposing private information. De-identifying data is difficult, with evolving techniques and mixed incentives. We conducted a thematic analysis of 38 recent online de-identification guides, characterizing the content of these guides and identifying concerning patterns, including inconsistent definitions of key terms, gaps in coverage of threats, and areas for improvement in usability. We also interviewed 26 researchers with experience de-identifying and reviewing data for publication, analyzing how and why most of these researchers may fall short of protecting against state-of-the-art re-identification attacks.

From Immersion to Manipulation: Exploring the Prevalence of Dark Patterns in Mixed Reality

Angela Todhri and Pascal Knierim, University of Innsbruck

Available Media

The continuing advances in Mixed Reality (MR) technology have finally brought MR experiences to consumers. However, the growing number of experiences merging the physical and virtual worlds has also prompted a rise in the use of Dark Patterns and manipulative design tactics intended to deceive users into actions they might not otherwise take. This preliminary research investigates the mechanisms and prevalence of Dark Patterns in MR environments, providing a first glimpse into manipulative practices. Analyzing 80 MR applications across various MR platforms, we identified five primary Dark Patterns: Hidden Costs, Misinformation, Button Camouflage, Forced Continuity, and Disguised Ads. Our ongoing analysis highlights the impact of these patterns on user trust and decision-making.

Privacy Threat Modeling for Everyone: MITRE PANOPTIC

Samantha Katcher, Tufts University, MITRE; Stuart Shapiro, Ben Ballard, Katie Isaacson, Julie McEwen, and Shelby Slotter, MITRE

Available Media

Threat modeling is a process that can be used to understand potential attacks or adversaries and is essential for holistic risk modeling. As privacy moves from a compliance-based to a risk-based orientation, threat-informed defense will become as crucial for organizations' privacy management as it has already become for their cybersecurity management. Yet privacy lacks a shared threat language and a commonly used threat model. This paper describes one effort to address this gap: the development of the Pattern and Action Nomenclature Of Privacy Threats In Context (PANOPTIC™). The model's scope is necessarily broader than that of a cybersecurity threat model: it includes both actions and inactions and benign as well as malicious intent, and it recognizes the system of concern as a potential threat agent in addition to adversaries outside the system itself. This paper defines a privacy attack – the foundation of the PANOPTIC Privacy Threat Model – and describes the model itself; how it was developed; use cases for the model, such as privacy threat assessments, privacy risk modeling, and privacy red teaming; and future work expanding and enhancing the model.

Privacy and Utility Analysis of the Topics API for the Web

Yohan Beugin and Patrick McDaniel, University of Wisconsin-Madison

Available Media

Today, targeted online advertising relies on unique identifiers assigned to users through third-party cookies, a practice at odds with user privacy. While the web and advertising communities have proposed solutions that we refer to as interest-disclosing mechanisms, including Google's Topics API, an independent analysis of these proposals in realistic scenarios has yet to be performed. In this paper, we attempt to validate the privacy (i.e., preventing unique identification) and utility (i.e., enabling ad targeting) claims of Google's Topics proposal in the context of realistic user behavior. Through new statistical models of the distribution of user behaviors and resulting targeting topics, we analyze the capabilities of malicious advertisers observing users over time and colluding with other third parties. Our analysis shows that even in the best case, individual users' identification across sites is possible, as 0.4% of the 250k users we simulate are re-identified. These guarantees weaken further over time and when advertisers collude: 57% of users with stable interests are uniquely re-identified when their browsing activity has been observed for 15 epochs, increasing to 75% after 30 epochs. While we measure that the Topics API provides moderate utility, we also find that advertisers and publishers can abuse the Topics API to potentially assign unique identifiers to users, defeating the desired privacy guarantees. As a result, the inherent diversity of users' interests on the web is directly at odds with the privacy objectives of interest-disclosing mechanisms; we discuss how any replacement of third-party cookies may have to seek other avenues to achieve privacy for the web.

[1] Interest-disclosing Mechanisms for Advertising are Privacy-Exposing (not Preserving) - Yohan Beugin, Patrick McDaniel - Proceedings of the Privacy Enhancing Technologies Symposium (PETS), 2024 - https://doi.org/10.56553/popets-2024-0004

[2] A Public and Reproducible Assessment of the Topics API on Real Data - Yohan Beugin, Patrick McDaniel - Proceedings of the IEEE Security and Privacy Workshops (SPW - SecWeb), 2024 - https://doi.org/10.48550/arXiv.2403.19577
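
To make the re-identification argument above concrete, the following is a minimal simulation sketch, not the authors' code: the taxonomy size, interests per user, population size, and the one-topic-per-epoch observation model are illustrative assumptions (see [1] and [2] for the actual methodology). The idea is that the combination of topics a persistent observer collects about a user across epochs can become a unique fingerprint.

    import random
    from collections import Counter

    random.seed(0)

    NUM_TOPICS = 349        # size of the topics taxonomy (illustrative assumption)
    INTERESTS_PER_USER = 5  # stable interests per simulated user (assumption)
    NUM_USERS = 10_000      # toy population; the paper simulates 250k users
    EPOCHS = 15

    # Each simulated user has a stable interest set; each epoch, a persistent
    # observer learns one topic drawn from that set (a deliberate simplification
    # of the real API behavior).
    users = [frozenset(random.sample(range(NUM_TOPICS), k=INTERESTS_PER_USER))
             for _ in range(NUM_USERS)]

    def observed_fingerprint(interests, epochs):
        """Topics collected about one user over several epochs."""
        return frozenset(random.choice(sorted(interests)) for _ in range(epochs))

    fingerprints = Counter(observed_fingerprint(u, EPOCHS) for u in users)
    unique = sum(1 for count in fingerprints.values() if count == 1)
    print(f"{unique / NUM_USERS:.1%} of simulated users end up with a unique "
          f"topic fingerprint after {EPOCHS} epochs")

In this toy model, users whose observed topic set is shared with no one else in the population are counted as re-identifiable; the paper's analysis additionally accounts for the API's noise and per-caller filtering.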

Design and Evaluation of the UsersFirst Privacy Notice and Choice Threat Analysis Taxonomy

Xinran Alexandra Li, Yu-Ju Yang, Yash Maurya, Tian Wang, Hana Habib, Norman Sadeh, and Lorrie Faith Cranor, Carnegie Mellon University

Available Media

UsersFirst is a privacy threat modeling framework under development at Carnegie Mellon University (CMU) designed to help identify and mitigate user-oriented privacy threats associated with notice and choice (N & C) interfaces. In this poster, we report on the first user study designed to evaluate the usefulness of an initial version of the privacy N & C threat taxonomy that is part of UsersFirst and to compare its efficacy with an existing taxonomy (LINDDUN PRO's unawareness threat category). This initial version of the UsersFirst Taxonomy is organized around three major categories of threats (delivery, language & content, and presentation & design) and comprises a total of 27 different threat types. We selected privacy N & C interfaces from a well-known e-commerce platform and conducted semi-structured in-person interview sessions with 14 participants who had prior privacy experience. We found that privacy practitioners who used the UsersFirst Taxonomy identified more user-oriented threats associated with privacy N & C than they did without a taxonomy, and that all UsersFirst Taxonomy users identified more threats than LINDDUN PRO users. Participants assigned to use the UsersFirst Taxonomy also commented that it was easy to use and helped them better organize their thoughts during threat identification.

From Laughter to Concern: Exploring Conversations about Deepfakes on Reddit - Trends and Sentiments

Harshitha Benakanahalli Nagaraj and Rahul Gowda Kengeri Kiran, Rochester Institute of Technology

Available Media

The escalating prevalence of deepfake content on online platforms raises concerns about its potential threats to individual privacy, national security, and democracy. This phenomenon is closely tied to rapid advancements in deep learning technologies, enabling highly realistic manipulation and generation of synthetic content. With rapidly evolving tools for deepfake creation and changing public perceptions, there is a pressing need to keep pace with these developments to enable social media platforms, law enforcement agencies, and researchers to develop better deepfake detection capabilities. To contribute to this evolving field, we compiled and analyzed a dataset of deepfake-related SFW discussions on Reddit. Our systematic analysis revealed several key findings: the emergence of new creation tools each year, spikes in negative sentiment towards deepfakes following high-profile misuse incidents, and a diverse range of discussions including topics such as deepfake creation involving famous personalities, challenges in regulation, detection techniques, and instances of deepfake-related scams. These insights provide valuable information for understanding the evolving landscape of deepfake technology and its societal impact, potentially informing future strategies for detection, regulation, and public awareness campaigns.

WiP: A Qualitative Study of Service-Learning Oriented Cybersecurity Clinics' Processes and Challenges

Matthew Ung, Carnegie Mellon University; Andrew Lin and Daniel Votipka, Tufts University

Available Media

Cybersecurity clinics, inspired by long-standing university clinics in fields like law and medicine, provide essential cybersecurity services to at-risk groups such as municipal governments and LGBTQ+ support organizations. These clinics aim to bridge the gap in cybersecurity expertise, which is often inaccessible to these groups. However, organizing and running these clinics can be challenging both technically and socially, as clinics have to balance student access to provide services with client privacy and autonomy, while also ensuring students are given practical learning experiences and clients are served in a way that is respectful and empowering.

In this paper, we present the results of initial interviews with clinic stakeholder groups (i.e., clinic leadership, clients, and student clinicians) from a single institution, the Tufts University Cybersecurity Clinic. In these interviews, we investigate each group's experiences with and perceptions of the clinic, as well as their relationships with other stakeholder groups. We apply a critical service-learning lens when developing the interviews and analyzing the results. We found that while clients recognize their need for cybersecurity, logistical challenges, rather than technical ones, often limit the clinic's effectiveness. We also found that the concepts of critical service learning, though emphasized in the clinic's mission, are not always fully instilled in students in practice. This study's findings aim to inform the development of future cybersecurity clinics, ensuring they address both immediate cybersecurity needs and broader social justice issues.

Account Password Sharing in Ordinary Situations and Emergencies: A Comparison Between Young and Older Adults

Lirong Yuan, Yanxin Chen, Jenny Tang, and Lorrie Faith Cranor, Carnegie Mellon University

Available Media

Sharing account passwords with others is a prevalent yet risky practice. We explore password-sharing behaviors in ordinary situations and hypothetical emergencies, perceived security risks, and user interest in password manager features that could facilitate secure sharing. We surveyed young adults (18-24) and older adults (65+) (n=208) to see how their sharing habits differed. Our findings suggest that younger adults are more likely to share passwords in ordinary situations, but older adults are more likely to share in emergencies. Both groups expressed security concerns in ordinary situations, but less so in emergencies. The majority of participants (>50%) were interested in a password manager feature that facilitates secure account sharing, but both young and older adults were reluctant to pay a premium for it.

An Investigation of Online Developer Discussions About Generative AI Programming Assistants

Cordelia Ludden, Cordell Burton Jr, and Daniel Votipka, Tufts University

Available Media

The use of AI assistants in various industries has increased in recent years with the development of tools such as ChatGPT and Copilot. According to the 2023 Stack Overflow Developer Survey, approximately 70% of professional developers are using or planning to use AI tools within their development processes, highlighting the widespread adoption of these technologies in coding. While these tools can improve productivity, it has been demonstrated that the underlying LLMs often generate insecure code. We aimed to examine how developers view these security issues and their practices in using AI assistants for coding.

To do this, we reviewed posts and comments on relevant computer science subreddits and qualitatively coded the results. Overall, the relevant discussion fell into two broad categories: experiences using AI to write code and opinions on using AI assistants for coding-related tasks. The most common response we found was that participants used, or wanted to use, these assistants to write code for their projects. While there were not many posts or comments related to the security of code, a large volume of responses mentioned that AI assistants often generate bad code when used to write it. We believe the widespread adoption of AI by developers emphasizes its role as an assistant rather than a primary tool. In addition, individuals' skepticism toward AI-generated code is potentially beneficial when compared to other developer support services, such as Stack Overflow, where prior work has shown developers are often not skeptical and simply copy and paste insecure code.

Memorial of Fallen Soldiers vs. Post-Mortem Privacy: Perception Study

Harel Berger, Georgetown University

Available Media

This research explores the tension between preserving the privacy of fallen soldiers and the need for their families and State authorities to access digital legacies for memorialization and truth seeking. Through a novel experimental approach of perception surveys of soldiers, relatives of fallen soldiers, and State memorial officials, the research aims to understand privacy preferences, access challenges, and the implications of digital legacies for memorial practices. This study contributes to the discourse on digital ethics in armed conflict, stressing the need for a balanced approach to handling soldiers' digital legacies.

Will our Colleagues Detect the AirTag? Let's Check (Consensually).

Dañiel Gerhardt, Matthias Fassl, and Katharina Krombholz, CISPA Helmholtz Center for Information Security

Available Media

Since their release, AirTags have been misused for stalking and other malicious purposes. Their small size, affordability, availability, and precise tracking functionality facilitate the invasion of peoples' privacy. To combat misuse, Apple implemented multiple anti-stalking features that inform potential victims and help them find and disable the location tracker. One of the primary anti-stalking features is unwanted tracking alerts. These smartphone notifications alert users that they have been followed by an AirTag or other Find-My device for some time.

This preliminary work evaluates the reliability of unwanted tracking alerts across platforms. We performed an experiment with N=50 employees at our institution. We found that tracking alerts are very reliable on iOS devices and not as reliable on Android devices, indicating a different implementation of unwanted tracking alerts or hardware limitations.

Poster: Future Work Statements at SOUPS

Jacques Suray, Leibniz University Hannover; Jan H. Klemmer, Juliane Schmüser, and Sascha Fahl, CISPA Helmholtz Center for Information Security

Available Media

Extending knowledge by identifying and investigating valuable research questions and problems is a core function of research. Research publications often suggest avenues for future work to extend their results, and usable privacy and security (UPS) researchers commonly add future work statements (FWS) to their publications. We define FWS as a passage in a research article that suggests future work ideas that the research community could address. Considering these suggestions can help with developing research ideas that efficiently utilize prior research resources and produce results that tie into existing knowledge. However, our community lacks an in-depth understanding of FWS’ prevalence, quality, and impact on future research in the UPS field. Our work aims to address this gap by reviewing all 27 papers from the 2019 Symposium on Usable Privacy and Security (SOUPS) proceedings and analyzing their FWS. Additionally, we analyzed 978 publications that cite any paper from SOUPS 2019 proceedings to assess their FWS’ impact. We answer the following research questions:

RQ1: How do SOUPS research articles include future work statements?

RQ2: To what extent do researchers address future work statements from SOUPS research articles?

We find that most papers include FWS, which are often unspecific or ambiguous. Accordingly, citing publications often matched the future work statements' content thematically but rarely acknowledged them explicitly, indicating a limited impact. We conclude with recommendations for the usable privacy and security community to improve the utility of FWS by making them more tangible and actionable, and with avenues for future work.

How to Explain Trusted Execution Environments (TEEs)?

Carolina Carreira, McKenna McCall, and Lorrie Faith Cranor, Carnegie Mellon University

Available Media

Trusted Execution Environments (TEEs) are isolated environments for executing code that guarantee the authenticity of the executed code, the integrity of the runtime states, and the confidentiality of its code and data. Previous work investigates how the presence of TEEs affects privacy norms for smart home technology, especially when people understand what a TEE is. While TEEs can fill an important gap in system security, without clear and accessible explanations of TEEs and what guarantees they offer, they may do little to address users' perception of safety.

In this work-in-progress study, we investigate potential TEE explanations to enhance both users' understanding of the capabilities a TEE does (and does not) have and their trust in TEE-enhanced technologies in the context of specific scenarios.

Just Make it Invisible to the User? A Case Study of Invisible Flash Drive Encryption

Jens Christian Opdenbusch, Konstantin Fischer, Jan Magnus Nold, and M. Angela Sasse, Ruhr University Bochum

Available Media

USB flash drives are used in high-security contexts where networks are strongly separated. We conduct a task observation and interview study (n=14) to investigate problems users might face when a company deploys flash drive encryption software that is almost completely invisible to the user. We find a strong disparity between the knowledge of the flash drive encryption that participants expressed during the interviews and what we observed as they interacted with it.

Case Study: Exploring Employees’ Security Friction & Loss of Productivity

Jonas Hielscher, Jennifer Friedauer, Markus Schöps, and Angela Sasse, Ruhr University Bochum

Available Media

In organizations, poorly designed security policies and non-usable security mechanisms can cause security friction that leads to a drop in productivity among employees. This friction can occur through different mechanisms, such as loss of concentration, frustration, stress, or unwillingness to innovate. We conducted an online survey case study with n=182 employees at a German automotive supplier's site to understand how employees perceive friction. Here, we provide the survey and derive learnings, e.g., that employees can spot different forms of friction and that open-ended questions are suitable for uncovering its root causes.

Who’s Listening? Analyzing Privacy Preferences in Multi-User Smart Personal Assistants Settings

Carolina Carreira, Carnegie Mellon University, IST University of Lisbon, INESC-ID; Cody Berger, Khushi Shah, Samridhi Agarwal, Yashasvi Thakur, McKenna McCall, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University

Available Media

Smart personal assistants (SPAs) are voice-activated devices that help users with daily tasks such as setting alarms and controlling other smart devices. SPAs' voice recognition capabilities allow them to respond differently to different users. Despite this, privacy controls for these devices are typically coarse-grained, offering little flexibility for individualized preferences. This means devices shared by several users may not be able to meet all of their privacy needs simultaneously. To understand whether privacy settings available for today's SPAs meet different user groups' privacy preferences, we conducted a 90-participant survey and an expert evaluation of the privacy settings for two popular SPAs. Primary and secondary users of SPAs seem more accepting of closer relationships, like partners and children, accessing data such as voice recordings and activity history than of more distant actors, like neighbors or advertising agencies. However, even within closer circles, users prefer retaining control over changing certain privacy settings rather than fully delegating that control. Our qualitative results reinforce these findings, with common concerns around unauthorized information sharing, audio monitoring, and data breaches from outside agents. Our results highlight the need for flexible, granular privacy controls to adapt to users' diverse preferences across different relationships and contexts.

Digital Fitness for Citizens: Design and Acceptance of a Smartphone Based Behaviour Change Support System for Personal Cyber Security

Jan Magnus Nold, Jens Opdenbusch, and Angela Sasse, Ruhr University Bochum

Available Media

Cyber security is an increasingly important topic for private citizens. While organisations could provide a supportive environment for secure behaviour, private citizens lack this support. One promising and scalable method to change behaviour is persuasive technology.

This poster presents a Master's thesis investigating whether smartphone-based behaviour change can be applied to personal cyber security. To this end, a persuasive app interface was designed after analysing the context of use. This interface was presented to participants (n=73) via an online survey to assess persuasiveness factors and usage intention.

Participants perceived the app as persuasive and effective in supporting them with security-related behaviour, which in turn increased their intention to use it. However, social comparison led to social pressure and a decrease in intention to use. Persuasive technology seems to be a promising direction for behaviour change in cyber security.

Exposing Local Sources: The (Non)Use of Secure Tip Communication Methods by Local US News Organizations

Christine Lam, Barnard College; Murat Gulcelik, Columbia University; Olga Rios, Barnard College; Martin Shelton, Freedom of the Press Foundation; Jennifer R. Henrichsen, Washington State University; Susan E. McGregor, Columbia University

Available Media

Research shows that high-quality local journalism is essential to good governance, helping moderate party-line politics and corruption. Yet our analysis of more than 300 local, national, and online news websites shows that local news outlets lack secure methods by which the public can contact them and share newsworthy information. We discuss the implications of these findings and directions for future research.

Nudging Adoption: Creating Awareness in Antivirus Software

Jacqueline White and Heather Lipford, UNC Charlotte

Available Media

Users often lack awareness of potential security risks on their smartphones and of the protective security mechanisms available for securing their devices and information. One way of raising user awareness is through notifications and nudges. To that end, we designed two notifications for antivirus software, encouraging users to install antivirus software on another platform, namely a smartphone. We first developed the designs through feedback from semi-structured interviews conducted with 12 participants, then further evaluated the designs through a user study with 36 participants. Our preliminary results indicate that notifications on one device, such as a laptop, may be effective in raising awareness of security tools on other device platforms. Our results also highlight the motivators influencing adoption of antivirus software on another device platform, and suggest design guidelines for drawing user attention to, and implementing, notifications that suggest security behaviors.

A Case Study on Legal Evidence of Technology-Facilitated Abuse in Wisconsin

Sophie Stephenson and Naman Gupta, University of Wisconsin-Madison; Akhil Polamarasetty, University College London; Kyle Huang, David Youssef, and Rose Ceccio, University of Wisconsin-Madison; Kayleigh Cowan, Disability Rights Wisconsin; Maximilian Zinkus, Johns Hopkins University; Rahul Chatterjee, University of Wisconsin-Madison

Available Media

Abusers use technology to spy on and harass their targets. This pattern is known as technology-facilitated abuse (TFA). Victim-survivors of TFA may turn to the legal system to protect themselves, and to do so, they need evidence of TFA. However, prior work indicates challenges to collecting evidence of TFA or using it in legal proceedings. We performed a qualitative case study of legal evidence of TFA in Wisconsin. Through interviews and focus groups with 19 legal professionals, we surface current practices for evidence of TFA in Wisconsin and elucidate several challenges to preparing and presenting evidence of TFA.

Mobile Apps vs. Web Browsers: A User Perception Study with Android Apps and Google Chrome

Harel Berger, Georgetown University

Available Media

This study examines user perceptions of mobile applications (apps) versus web browsers for accessing online services, with an emphasis on security, privacy, and usability aspects. Through a combination of an experiment and a survey with Android smartphone users, the research seeks to identify the key concerns and preferences that influence their choice between mobile apps and web browsers. The findings will offer valuable insights for developers to improve the security, privacy and usability of both platforms by addressing user concerns and misconceptions.

Vulnerability Perceptions and Practices in Software Development Teams

Arpita Ghosh, Lipsarani Sahoo, and Heather Richter Lipford, University of North Carolina at Charlotte

Available Media

In today's software development landscape, ensuring robust security practices is crucial due to the high risk of security incidents resulting from software vulnerabilities. Researchers and industry practitioners have recommended a greater organizational focus on security, regular security testing, and other vulnerability mitigation practices. Many organizations now have a robust secure software development life cycle. We seek to extend prior research by examining the practices and perceptions of teams in organizations with standard vulnerability management practices. We seek to identify common perceptions, practices, and challenges of teams where security is already considered an important component of software development, as well as where and how teams vary in their practices. Our results will provide evidence of where teams still struggle with vulnerability prevention and mitigation, informing recommendations to further reduce security risks.

UsersFirst: A User-Centric Privacy Threat Modeling Framework for Notice and Choice

Tian Wang, Xinran Alexandra Li, Miguel Rivera-Lanas, Yash Maurya, Hana Habib, Lorrie Faith Cranor, and Norman Sadeh, Carnegie Mellon University

Available Media

In today’s data-driven economy, the rapid adoption of AI amplifies our dependence on personal data across complex dataflows. In response, emerging data privacy regulations demand usable privacy notice and choice mechanisms, in addition to more stringent data collection and usage practices. Organizations seek guidance to systematically identify and mitigate privacy risks, as penalties for non-compliance have intensified. Privacy threat modeling frameworks like LINDDUN, NIST Privacy Framework, and MITRE’s PANOPTIC framework offer structured methodologies for analyzing and addressing privacy risks, but these frameworks only provide limited guidance on effective privacy notices and choices. This poster introduces UsersFirst, a user-centric framework designed to supplement existing frameworks by helping organizations enhance their privacy notices and choices. UsersFirst emphasizes the need for notices and choices to be noticeable, usable, unambiguous, and free from deceptive designs, reflecting emerging trends in privacy regulations. This framework provides a systematic methodology for identifying and mitigating potential threats, enabling organizations to determine their acceptable risk thresholds and objectives.

Protecting PageRank: Helping Search Engines Maintain Result Integrity

Cordelia Ludden and Helena Simson, Tufts University; Sarah Radway, Harvard University; Daniel Votipka, Tufts University

Available Media

The emergence of third-party services offering to manipulate Google PageRank results is of grave concern. In this work, we take steps to investigate this threat by observing changes in search rankings over time to identify possible PageRank manipulation. We track changes in the order of results returned for a set of queries, and use two analysis methods, (1) cybersecurity-based anomaly detection and (2) sentiment analysis, to determine features organizations can use to identify artificial PageRank manipulation. Search engines can then employ this knowledge to protect against bad actors. As this is a work in progress, we present initial findings here and discuss future steps for this work.
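
The anomaly-detection step lends itself to a small illustration. Below is a minimal sketch (our own illustration, not the poster's pipeline) of flagging suspicious ranking churn: compare consecutive result snapshots for a query and flag intervals whose average rank displacement is an outlier. The URLs, snapshot values, and threshold are hypothetical.

    from statistics import mean, stdev

    def rank_displacement(prev, curr):
        """Average absolute change in rank for URLs present in both snapshots."""
        prev_pos = {url: i for i, url in enumerate(prev)}
        shared = [url for url in curr if url in prev_pos]
        if not shared:
            return 0.0
        return mean(abs(curr.index(url) - prev_pos[url]) for url in shared)

    # Hypothetical daily top-5 snapshots for one tracked query.
    snapshots = [
        ["a.com", "b.com", "c.com", "d.com", "e.com"],
        ["a.com", "c.com", "b.com", "d.com", "e.com"],  # mild reshuffling
        ["a.com", "b.com", "c.com", "e.com", "d.com"],
        ["e.com", "d.com", "a.com", "b.com", "c.com"],  # abrupt reordering
    ]

    scores = [rank_displacement(p, c) for p, c in zip(snapshots, snapshots[1:])]
    threshold = mean(scores) + (stdev(scores) if len(scores) > 1 else 0.0)
    for day, score in enumerate(scores, start=1):
        label = "possible manipulation" if score > threshold else "ok"
        print(f"interval {day}: displacement={score:.2f} -> {label}")

A real deployment would of course combine such rank-churn signals with the sentiment-based features mentioned above rather than rely on a single threshold.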

User Awareness and Perspectives Survey on Privacy, Security and Usability of Auditory Prostheses

Sohini Saha and Leslie M. Collins, Duke University; Sherri L. Smith, Duke University Medical Center; Boyla O. Mainsah, Duke University

Available Media

According to the World Health Organization, over 466 million people worldwide suffer from disabling hearing loss, with approximately 34 million of these being children. Hearing aids (HA) and cochlear implants (CI) have become indispensable tools for restoring hearing and enhancing the quality of life for individuals with hearing impairments. Clinical research and consumer studies indicate that users of HAs and CIs report significant improvements in their daily lives, including enhanced communication abilities and social engagement and reduced psychological stress. Modern auditory prosthetic devices are more advanced and interconnected with digital networks to add functionality, such as streaming audio directly from smartphones and other devices, remote adjustments by audiologists, integration with smart home systems, and access to artificial intelligence-driven sound enhancement features. With this interconnectivity, issues surrounding data privacy and security have become increasingly pertinent. There is limited research on the usability perceptions of current HA and CI models from the perspective of end-users. In addition, no studies have investigated consumer mental models during the purchasing process, particularly which factors they prioritize when selecting a device.

We developed a survey on the Research Electronic Data Capture (REDCap) platform and assessed participants' satisfaction levels with various features of their auditory prosthesis. 44% of participants reported complete satisfaction with performance, while 48% expressed complete dissatisfaction with flexibility. Consistent with satisfaction levels, 48% of participants considered performance to be the most important factor when making a purchase decision. Reliability (52%), durability (48%), and usability (22%) were the next most highly ranked factors. Interestingly, price (30%) and recommendations from healthcare professionals (22%) were ranked the least important factors by most participants, while privacy, security, and customer support garnered mostly neutral sentiments. Most participants (23 out of 27) were found to be uninformed about privacy and security practices (such as password usage and data privacy) associated with the devices. When queried on strategies that could be adopted to enhance user awareness and education on privacy and security issues related to their devices, the most common responses were receiving regular email updates from manufacturers (18/27) and enhanced data security features in their devices (16/27).

Helping Autistic Young Adults Fight Privacy Violations: Designing a Gamified App

Jason Changxi Xing and Kirsten Chapman, Brigham Young University; Haley Page, Brigham Young University - Idaho; Xinru Page, Brigham Young University

Available Media

Autistic social media users experience more privacy violations and resulting harms than the general population. Prior work suggests increasing digital literacy can help protect against such harms. We investigate the design of a self-paced mobile app that can be widely accessible to autistic social media users. In order to motivate users to actually learn these educational materials, we explore how to gamify the app. We conducted a participatory design session with 3 autistic adults to examine preferred gamification style and deployed a survey with 6 autistic adults to investigate the preferred user interface aesthetics, which is especially important to consider for this population. We found that participants preferred customizable games which didn't cause cognitive overload. Aesthetically, participants preferred realistic and video game artistic styles.

An LLM-driven Approach to Gain Cybercrime Insights with Evidence Networks

Honghe Zhou, Towson University; Weifeng Xu, University of Baltimore; Josh Dehlinger, Suranjan Chakraborty, and Lin Deng, Towson University

Available Media

We have developed an automated approach for gaining criminal insights with digital evidence networks. This approach harnesses Large Language Models (LLMs) to learn patterns and relationships within forensic artifacts, automatically constructing Forensic Intelligence Graphs (FIGs). These FIGs graphically represent evidence entities and their interrelations as extracted from mobile devices, while also providing an intelligence-driven approach to the analysis of forensic data. Our preliminary empirical study indicates that the LLM-reconstructed FIG can reveal all suspects' scenarios, achieving 91.67% coverage of evidence entities and 93.75% coverage of evidence relationships for a given Android device.
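
As a small illustration of the coverage metrics reported above, the following sketch compares a hypothetical LLM-reconstructed evidence graph against a hand-labeled ground truth; all entities, relationships, and resulting numbers here are made up for illustration and are not taken from the study.

    # Hand-labeled ground truth for a hypothetical case.
    truth_entities = {"suspect_A", "victim_B", "phone_1", "chat_app", "photo_42"}
    truth_relations = {
        ("suspect_A", "owns", "phone_1"),
        ("phone_1", "installed", "chat_app"),
        ("suspect_A", "messaged", "victim_B"),
        ("phone_1", "stores", "photo_42"),
    }

    # Hypothetical output of an LLM-driven reconstruction.
    fig_entities = {"suspect_A", "victim_B", "phone_1", "chat_app"}
    fig_relations = {
        ("suspect_A", "owns", "phone_1"),
        ("phone_1", "installed", "chat_app"),
        ("suspect_A", "messaged", "victim_B"),
    }

    entity_coverage = len(fig_entities & truth_entities) / len(truth_entities)
    relation_coverage = len(fig_relations & truth_relations) / len(truth_relations)
    print(f"entity coverage: {entity_coverage:.2%}, "
          f"relationship coverage: {relation_coverage:.2%}")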

Where are Marginalized Communities in Cybersecurity Research?

Anadi Chattopadhyay, Rodrigo Carvajal, Vasanta Chaganti, and Sukrit Venkatagiri, Swarthmore College

Available Media

Marginalized communities are disproportionately vulnerable to cybersecurity threats, but are rarely the focus of inquiry in cybersecurity research. In this paper, we systematically analyzed recent security, privacy, and cybersecurity publications to understand the frequency and nature of engagement with marginalized communities by reviewing papers across four different professional societies' venues (ACM, IEEE, USENIX, and PoPETs) published in the last two years. Of 2,170 papers, we find that only 0.2% (27) of papers engage with marginalization in any form, with the majority of papers (22) being observational studies, and only five that included an intervention to actively support a marginalized community. We discuss how cybersecurity research can make strides towards not only understanding but also actively supporting marginalized groups.

“Learning Too Much About Me”: A User Study on the Security and Privacy of Generative AI Chatbots

Pradyumna Shome and Miuyin Marie Yong Wong, Georgia Institute of Technology

Available Media

Generative AI has burgeoned in the past few years, leading to highly interactive and human-like chatbots. With billions of parameters and trained on a vast corpus spanning gigabytes of the public Internet, tools like ChatGPT, Copilot, and Bard (which we refer to as generative AI chatbots) write code, draft emails, provide mental health counseling, teach us about the world, and act as mentors to help people advance their careers.

On the other hand, many communities have expressed reservations about widespread usage of such technology. Artists and writers are concerned about loss of their intellectual property rights and the potential for their work to be plagiarized. Educators are concerned about the potential for students to cheat on assignments, for bias in automated grading platforms, and for the chatbots to provide incorrect information. Medical professionals are concerned about the potential for chatbots to misdiagnose patients, and for patients to rely on inappropriate advice [9]. There is fear of the unknown, justified concerns about the potential for misuse, and worry about societal harm. As with other revolutionary advancements in society, there is pressure to adopt these tools to keep up with technology and remain competitive. Before we can bridge this gap, we must understand the status quo.

Students are likely to be early adopters of new technology. By examining their initial experiences, we can gain insights into concerns faced by young adults about to enter an AI-integrated workplace. We perform an online survey of 86 students, faculty, and staff at our university, focused on security and privacy concerns affecting chatbot use. We found that participants are well aware of the risks of data harvesting and inaccurate responses, and remain cautious in their use of AI in sensitive contexts, which we unpack in later sections.

Research Question
What security and privacy concerns do students at a large public US university have with adopting generative AI, and how can we overcome them?

The Self-Destructive Nature of Dark Patterns: Revealing Negative Impacts on Usability and Trust in Service Providers

Toi Kojima, Shizuoka University; Hiromi Arai, RIKEN AIP; Masakatsu Nishigaki, Shizuoka University; Tetsushi Ohki, Shizuoka University/RIKEN AIP

Available Media

"Dark patterns,'' deceptive designs that intentionally lead users to take actions benefiting service providers, are widely used, especially in digital marketing. The major impacts of dark patterns includes time or money costs incurred by deceived users. However, there are other possible unintended impacts on the user experience. In particular, users who recognize and avoid dark patterns (non-deceived users) may also experience stress and frustration from the extra time and effort required. In this study, we focus on non-deceived users and examines the negative usability impact caused by avoiding dark patterns. Through this usability study using web pages containing dark patterns, we explored the possibility that the cost incurred by avoiding dark patterns may be a factor that undermines trust in service providers.

Know What You're Doing: Understanding the Security (Mis)conceptions of Cloud Technology Workforce in Bangladesh

Mashiyat Mahjabin Eshita, Ishmam Bin Rofi, S M Taiabul Haque, and Jannatun Noor, BRAC University

Available Media

Cloud security and privacy awareness are crucial for safeguarding information stored in the cloud. Despite their operational proficiency with cloud technologies, the Bangladeshi cloud technology workforce demonstrates gaps in its understanding of cloud security, stemming from inadequate knowledge. This study surveyed 24 members of this workforce to assess their perceptions and knowledge concerning cloud privacy and security. Our findings reveal a fundamental lack of understanding, particularly regarding privacy implementation, security tools, design considerations, and the shared responsibility for securing information. These knowledge gaps may expose vulnerabilities and compromise data integrity in the future. This research contributes significant insights into cloud privacy and security among technology professionals in the Global South, an area that remains relatively understudied.

Published Work

Posters of usable security papers published recently at other venues.

Investigating Security Folklore: A Case Study on the Tor over VPN Phenomenon

Matthias Fassl, Alexander Ponticello, Adrian Dabrowski, and Katharina Krombholz, CISPA Helmholtz Center for Information Security

Available Media

Users face security folklore in their daily lives in the form of security advice, myths, and word-of-mouth stories. Using a VPN to access the Tor network, i.e., Tor over VPN, is an interesting example of security folklore because of its inconclusive security benefits and its occurrence in pop-culture media.

Following the Theory of Reasoned Action, we investigated the phenomenon with three studies: (1) we quantified the behavior on real-world Tor traffic and measured a prevalence of 6.23%; (2) we surveyed users’ intentions and beliefs, discovering that they try to protect themselves from the Tor network or increase their general security; and (3) we analyzed online information sources, suggesting that perceived norms and ease-of-use play a significant role while behavioral beliefs about the purpose and effect are less crucial in spreading security folklore. We discuss how to communicate security advice effectively and combat security misinformation and misconceptions.

Everyone for Themselves? A Qualitative Study about Individual Security Setups of Open Source Software Contributors

Sabrina Amft, CISPA Helmholtz Center for Information Security; Sandra Höltervennhoff, Leibniz University Hannover; Rebecca Panskus and Karola Marky, Ruhr University Bochum; Sascha Fahl, CISPA Helmholtz Center for Information Security

Available Media

To increase open-source software supply chain security, protecting the development environment of contributors against attacks is crucial. For example, contributors must protect authentication credentials for software repositories, code-signing keys, and their systems from malware. Previous incidents illustrated that open-source contributors struggle with protecting their development environment. In contrast to companies, open-source software projects cannot easily enforce security guidelines for development environments. Instead, contributors' security setups are likely heterogeneous regarding chosen technologies and strategies. To the best of our knowledge, we perform the first in-depth qualitative investigation of open-source software contributors' individual security setups, their motivation, decision-making, and sentiments, and the potential impact on open-source software supply chain security. To this end, we conduct 20 semi-structured interviews with a diverse set of experienced contributors to critical open-source software projects. Overall, we find that contributors have a generally high affinity for security. However, security practices are rarely discussed in the community or enforced by projects. Furthermore, we see a strong influence of social mechanisms, such as trust, respect, or politeness, further impeding the sharing of security knowledge and best practices. We conclude our work with a discussion of the impact of our findings on open-source software and supply chain security, and make recommendations for the open-source software community.

Examining Human Perception of Generative Content Replacement in Image Privacy Protection

Anran Xu and Shitao Fang, The University of Tokyo; Huan Yang, Microsoft Research; Simo Hosio, University of Oulu; Koji Yatani, The University of Tokyo

Available Media

The richness of the information in photos can often threaten privacy, thus image editing methods are often employed for privacy protection. Existing image privacy protection techniques, like blurring, often struggle to maintain the balance between robust privacy protection and preserving image usability. To address this, we introduce a generative content replacement (GCR) method in image privacy protection, which seamlessly substitutes privacy-threatening contents with similar and realistic substitutes, using state-of-the-art generative techniques. Compared with four prevalent image protection methods, GCR consistently exhibited low detectability, making the detection of edits remarkably challenging. GCR also performed reasonably well in hindering the identification of specific content and managed to sustain the image’s narrative and visual harmony. This research serves as a pilot study and encourages further innovation on GCR and the development of tools that enable human-in-the-loop image privacy protection using approaches similar to GCR.
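
As a rough illustration of the GCR idea, the sketch below inpaints a masked, privacy-threatening region with plausible substitute content using an off-the-shelf diffusion inpainting pipeline; this is not the authors' implementation, and the model choice, file paths, and prompt are assumptions for illustration only.

    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # Load an off-the-shelf inpainting model (assumed choice, not the paper's).
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting"
    )

    image = Image.open("photo.png").convert("RGB").resize((512, 512))
    # White pixels in the mask mark the privacy-threatening region,
    # e.g., a visible ID card or street sign.
    mask = Image.open("mask.png").convert("RGB").resize((512, 512))

    # Replace the masked region with plausible, non-identifying content.
    result = pipe(
        prompt="a plain closed notebook lying on a desk",
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("photo_gcr.png")

The key design choice GCR makes, compared with blurring or redaction, is that the replacement is semantically similar and realistic, which is what makes edits hard to detect while preserving the image's narrative.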

Expanding Concepts of Non-Consensual Image-Disclosure Abuse: A Study of NCIDA in Pakistan

Amna Batool, University of Michigan

Available Media

Non-Consensual Image-Disclosure Abuse (NCIDA) represents a subset of technology-facilitated sexual abuse where imagery and video with romantic or sexual connotations are used to control, extort, and otherwise harm victims. Despite considerable research on NCIDA, little is known about them in non-Western contexts. We investigate NCIDA in Pakistan, through interviews with victims, their relatives, and investigative officers; and observations of NCIDA cases being processed at a law enforcement agency. We find, first, that what constitutes NCIDA is much broader in Pakistan's patriarchal society, and that its effects can be more severe than in Western contexts. On every dimension—types of content, perpetrators, impact on victims, and desired response by victims—our findings suggest an expansion of the concepts associated with NCIDA. We conclude by making technical and policy-level recommendations, both to address the specific context of Pakistan, and to enable a more global conception of NCIDA.

The Effects of Group Discussion and Role-playing Anti-phishing Training: Evidence from a Mixed-design Experiment

Xiaowei Chen, Margault Sacré, Gabriele Lenzini, and Samuel Greiff, University of Luxembourg; Verena Distler, University of the Bundeswehr Munich; Anastasia Sergeeva, University of Luxembourg

Available Media

Organizations rely on phishing interventions to enhance employees’ vigilance and safe responses to phishing emails that bypass technical solutions. While various resources are available to counteract phishing, studies emphasize the need for interactive and practical training approaches. To investigate the effectiveness of such an approach, we developed and delivered two anti-phishing trainings, group discussion and role-playing, at a European university. We conducted a pre-registered experiment (N = 105), incorporating repeated measures at three time points, a control group, and three in-situ phishing tests. Both trainings enhanced employees’ anti-phishing self-efficacy and support-seeking intention in within-group analyses. Only the role-playing training significantly improved support-seeking intention when compared to the control group. Participants in both trainings reported more phishing tests and demonstrated heightened vigilance to phishing attacks compared to the control group. We discuss practical implications for evaluating and improving phishing interventions and promoting safe responses to phishing threats within organizations.

SoK: Technical Implementation and Human Impact of Internet Privacy Regulations

Eleanor Birrell, Pomona College; Jay Rodolitz, Northeastern University; Angel Ding, Wellesley College; Jenna Lee, University of Washington; Emily McReynolds, Future of Privacy Forum; Jevan Hutson, Hintze Law, PLLC; Ada Lerner, Northeastern University

Available Media

Growing recognition of the potential for exploitation of personal data and of the shortcomings of prior privacy regimes has led to the passage of a multitude of new privacy regulations. Some of these laws - notably the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) - have been the focus of large bodies of research by the computer science community, while others have received less attention. In this work, we analyze a set of 24 privacy laws and data protection regulations drawn from around the world - both those that have frequently been studied by computer scientists and those that have not - and develop a taxonomy of rights granted and obligations imposed by these laws. We then leverage this taxonomy to systematize 270 technical research papers published in computer science venues that investigate the impact of these laws and explore how technical solutions can complement legal protections. Finally, we analyze the results in this space through an interdisciplinary lens and make recommendations for future work at the intersection of computer science and legal privacy.

In Focus, Out of Privacy: The Wearer's Perspective on the Privacy Dilemma of Camera Glasses

Divyanshu Bhardwaj and Alexander Ponticello, CISPA Helmholtz Center for Information Security; Shreya Tomar, Indraprastha Institute of Information Technology; Adrian Dabrowski and Katharina Krombholz, CISPA Helmholtz Center for Information Security

Awarded Distinguished Poster!

Available Media

The rising popularity of camera glasses challenges societal norms of recording bystanders and thus requires efforts to mediate privacy preferences. We present the first study on the wearers’ perspectives and explore privacy challenges associated with wearing camera glasses when bystanders are present. We conducted a micro-longitudinal diary study (N = 15) followed by exit interviews with existing users and people without prior experience. Our results show that wearers consider the currently available privacy indicators ineffective. They believe the looks and interaction design of the glasses conceal the technology from unaware people. Due to the lack of effective privacy-mediating measures, wearers feel emotionally burdened with preserving bystanders’ privacy. We furthermore elicit how this sentiment impacts their usage of camera glasses and highlight the need for technical and non-technical solutions. Finally, we compare the wearers’ and bystanders’ perspectives and discuss the design space of a future privacy-preserving ecosystem for wearable cameras.

Shortchanged: Uncovering and Analyzing Intimate Partner Financial Abuse in Consumer Complaints

Arkaprabha Bhattacharya, Cornell University; Kevin Lee and Vineeth Ravi, JPMorgan Chase; Jessica Staddon, Northeastern University; Rosanna Bellini, Cornell University

Available Media

Digital financial services can introduce new digital-safety risks for users, particularly survivors of intimate partner financial abuse (IPFA). To offer improved support for such users, a comprehensive understanding of their support needs and the barriers they face to redress by financial institutions is essential. Drawing from a dataset of 2.7 million customer complaints, we implement a bespoke workflow that utilizes language-modeling techniques and expert human review to identify complaints describing IPFA. Our mixed-method analysis provides insight into the most common digital financial products involved in these attacks and the barriers consumers report encountering when seeking redress. Our contributions are twofold: we offer the first human-labeled dataset for this overlooked harm and provide practical implications for technical practice, research, and design for better supporting and protecting survivors of IPFA.

Exploring Design Opportunities for Family-Based Privacy Education in Informal Learning Spaces

Lanjing Liu, Virginia Tech; Lan Gao, University of Chicago; Nikita Soni, University of Illinois Chicago; Yaxing Yao, Virginia Tech

Available Media

Children face increasing privacy risks and the need to navigate complex choices, while privacy education remains insufficient due to its limited scope and limited family involvement. We advocate for informal learning spaces (ILS) as a pioneering channel for family-based privacy education, given their established role in holistic technology and digital literacy education, which specifically targets family groups. In this paper, we conducted an interview study with eight families to understand current approaches to privacy education and engagement with ILS for family-based learning. Our findings highlight ILS's transformative potential in family privacy education, considering existing practices and challenges. We discuss the design opportunities for family-based privacy education in ILS, covering goals, content, engagement, and experience design. These insights contribute to future research on family-based privacy education in ILS.

"I Know I'm Being Observed:" Video Interventions to Educate Users about Targeted Advertising on Facebook.

Garrett Smith and Sarah Carson, Brigham Young University; Rhea Vengurlekar, Bentley University; Stephanie Morales, Yun-Chieh Tsai, Rachel George, Josh Bedwell, and Trevor Jones, Brigham Young University; Mainack Mondal, IIT Kharagpur; Brian Smith, Brigham Young University; Norman Makoto Su, UC Santa Cruz; Bart Knijnenburg, Clemson University; Xinru Page, Brigham Young University

Available Media

Recent work explores how to educate and encourage users to protect their online privacy. We tested the efficacy of short videos for educating users about targeted advertising on Facebook. We designed a video that used an emotional appeal to explain the risks associated with targeted advertising (fear appeal) and demonstrated how to use the associated ad privacy settings (digital literacy). We also designed a version of this video that additionally showed viewers their personal Facebook ad profile, facilitating personal reflection on how they are currently being profiled (reflective learning). We conducted an experiment (n = 127) in which participants watched a randomly assigned video, and measured the impact over the following 10 weeks. We found that these videos significantly increased user engagement with Facebook advertising preferences, especially for those who viewed the reflective learning content. However, those who only watched the fear appeal content were more likely to disengage with Facebook as a whole.

Mental Models, Expectations and Implications of Client-Side Scanning: An Interview Study with Experts

Divyanshu Bhardwaj, CISPA Helmholtz Center for Information Security; Carolyn Guthoff, CISPA Helmholtz Center for Information Security, Saarland University; Adrian Dabrowski, Sascha Fahl, and Katharina Krombholz, CISPA Helmholtz Center for Information Security

Available Media

Client-Side Scanning (CSS) is discussed as a potential solution to contain the dissemination of child sexual abuse material (CSAM). A significant challenge associated with this debate is that stakeholders have different interpretations of the capabilities and frontiers of the concept and its varying implementations. In this paper, we explore stakeholders’ understandings of the technology and the expectations and potential implications in the context of CSAM by conducting and analyzing 28 semi-structured interviews with a diverse sample of experts. We identified mental models of CSS and the expected challenges. Our results show that CSS is often a preferred solution in the child sexual abuse debate due to the lack of an alternative. Our findings illustrate the importance of further interdisciplinary discussions to define and comprehend the impact of CSS usage on society, particularly vulnerable groups such as children.