SOUPS 2018 Technical Sessions

View the Full Schedule

The full SOUPS 2018 schedule, including the workshops, is available on the Program at a Glance page.
All sessions will be held in Grand Ballroom V unless otherwise noted.

Papers and Proceedings

The full Proceedings published by USENIX for the conference are available for download below. Individual papers can also be downloaded from the presentation page. Copyright to the individual works is retained by the author(s).

Proceedings Front Matter
Proceedings Cover | Title Page and List of Organizers | Table of Contents | Message from the Program Co-Chairs

Full Proceedings PDFs
 SOUPS 2018 Full Proceedings (PDF)
 SOUPS 2018 Proceedings Interior (PDF, best for mobile devices)

Full Proceedings ePub (for iPad and most eReaders)
 SOUPS 2018 Full Proceedings (ePub)

Full Proceedings Mobi (for Kindle)
 SOUPS 2018 Full Proceedings (Mobi)

Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)

Attendee Files 
SOUPS 2018 Attendee List (PDF)
SOUPS 2018 Proceedings Web Archive (ZIP)

Sunday, August 12, 2018

6:00 pm–7:00 pm

SOUPS 2018 Poster Session

Check out the cool new ideas and the latest preliminary research on display at the SOUPS 2018 Poster Session. View the list of accepted posters.

Monday, August 13, 2018

8:00 am–9:00 am

Continental Breakfast

Grand Ballroom Foyer

9:00 am–9:30 am

Welcome and Awards Presentations

General Chair: Mary Ellen Zurko, MIT Lincoln Laboratory, and Vice General Chair: Heather Richter Lipford, University of North Carolina at Charlotte

9:30 am–10:30 am

Keynote Address

Session Chair: Adam Aviv, U.S. Naval Academy

Beyond the Individual: Usability, Utility and Community

Susan McGregor, Assistant Director of the Tow Center for Digital Journalism and Assistant Professor at Columbia Journalism School

Available Media

When we conduct usability studies to assess our technologies, we typically work with groups of individuals in order to identify specific design choices that can be adjusted for improved performance. But the usability of a tool goes hand in hand with its utility, a characteristic that is often influenced by broader contexts, including participants' various community affiliations and their roles within them. By broadening our thinking about usability to include these larger social contexts, we can create technologies that are more usable in part because they are also genuinely more useful to the groups they were designed for. Drawing on published research as well as her own work developing secure technologies and workflows for journalists, artists, and activists, McGregor will address the theory behind this approach to usability, as well as the practical methods and challenges of implementing it in original research contexts.

Susan McGregor, Columbia Journalism School

Susan McGregor is Assistant Director of the Tow Center for Digital Journalism and Assistant Professor at Columbia Journalism School, where she helps supervise the dual-degree program in Journalism and Computer Science. She teaches primarily in areas of data journalism and information visualization, with research interests in information security, privacy, knowledge management, and alternative forms of digital distribution. McGregor was the Senior Programmer on the News Graphics team at the Wall Street Journal Online for four years before joining Columbia Journalism School in 2011.

McGregor was named a 2010 Gerald Loeb Award winner for her work on WSJ’s "What They Know" series, and a finalist for the Scripps Howard Foundation National Journalism Awards for Web Reporting in 2007. Her work has also been nominated for two Webby awards, in 2011 and 2015. She has published multiple papers in leading peer-reviewed security and privacy conferences on how these issues manifest in and impact the work of journalists. Her research and development work in this and related areas has received support from the National Science Foundation, the Knight Foundation, Google, and multiple schools and offices of Columbia University.

In addition to her technical and academic work, McGregor is actively interested in how the arts can help stimulate critical thinking and introduce new perspectives around technology issues, occasionally creating small prototypes and installations. She holds a master's degree in Educational Communication and Technology from NYU and a bachelor's degree in Interactive Information Design from Harvard University.

10:30 am–11:00 am

Break with Refreshments

Grand Ballroom Foyer

11:00 am–12:30 pm

User Authentication

Session Chair: Janne Lindqvist, Rutgers University

Replication Study: A Cross-Country Field Observation Study of Real World PIN Usage at ATMs and in Various Electronic Payment Scenarios

Melanie Volkamer, Karlsruhe Institute of Technology (KIT) and Technische Universität Darmstadt; Andreas Gutmann, OneSpan Innovation Centre and University College London; Karen Renaud, Abertay University, University of South Africa, and University of Glasgow; Paul Gerber, Technische Universität Darmstadt; Peter Mayer, Karlsruhe Institute of Technology (KIT) and Technische Universität Darmstadt

Available Media

In this paper, we describe the study we carried out to replicate and extend the field observation study of real-world ATM use carried out by De Luca et al., published at the SOUPS conference in 2010. Replicating De Luca et al.'s study, we observed PIN shielding rates at ATMs in Germany. We then extended their research by conducting a similar field observation study in Sweden and the United Kingdom. Moreover, in addition to observing ATM users withdrawing cash, we also observed electronic payment scenarios requiring PIN entry. Altogether, we gathered data related to 930 observations. Similar to De Luca et al., we conducted follow-up interviews to better interpret our findings. We were able to confirm De Luca et al.'s findings with respect to low PIN shielding incidence during ATM cash withdrawals, with no significant differences in shielding rates across the three countries. PIN shielding incidence during electronic payment scenarios was significantly lower than during ATM withdrawal scenarios in both the United Kingdom and Sweden. Shielding levels in Germany were similar during both withdrawal and payment scenarios. We conclude the paper by suggesting a number of explanations for the differences in shielding that our study revealed.

User Behaviors and Attitudes Under Password Expiration Policies

Hana Habib and Pardis Emami Naeini, Carnegie Mellon University; Summer Devlin, University of California, Berkeley; Maggie Oates, Chelse Swoopes, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University

Available Media

Policies that require employees to update their passwords regularly have become common at universities and government organizations. However, prior work has suggested that forced password expiration might have limited security benefits, or could even cause harm. For example, users might react to forced password expiration by picking easy-to-guess passwords or reusing passwords from other accounts. We conducted two surveys on Mechanical Turk through which we examined people's self-reported behaviors in using and updating workplace passwords, and their attitudes toward four previously studied password-management behaviors, including periodic password changes. Our findings suggest that forced password expiration might not have some of the negative effects that were feared, nor the positive ones that were hoped for. In particular, our results indicate that participants forced to change passwords did not resort to behaviors that would significantly decrease password security; on the other hand, their self-reported strategies for creating replacement passwords suggest that those passwords were no stronger than the ones they replaced. We also found that repeating security advice causes users to internalize it, even if evidence supporting the advice is scant. Our participants overwhelmingly reported that periodically changing passwords was important for account security, though not as important as other factors that have been more convincingly shown to influence password strength.

The Effectiveness of Fear Appeals in Increasing Smartphone Locking Behavior among Saudi Arabians

Elham Al Qahtani and Mohamed Shehab, University of North Carolina Charlotte; Abrar Aljohani

Available Media

Saudi Arabia has witnessed exponential growth in smartphone adoption and penetration. This increase has been accompanied by an upward trend in cyber and mobile crimes, which calls for efforts to enhance public awareness of security-related risks. In this study, we replicated the study performed by Albayram et al. published at SOUPS 2017; however, our study targeted participants in Saudi Arabia. We also investigated different fear appeal video designs that were better suited to this population (a customized video, an Arabic-dubbed version, and Arabic captions for the original video). The original study, conducted in the United States, showed that 50% of participants in the treatment group and 21% in the control group enabled screen lock. Our motivation for replicating the original paper was to increase Saudis' awareness of the importance of sensitive data, especially given the rising level of cybercrime. Our results showed that the video customized around local applications and Saudi culture was the most effective in changing participants' locking behavior: 72.5% of those participants enabled the screen lock. The dubbed video was the second most effective (62.5%). Finally, we present our comparative data analysis in detail.

Action Needed! Helping Users Find and Complete the Authentication Ceremony in Signal

Elham Vaziripour, Justin Wu, Mark O'Neill, Daniel Metro, Josh Cockrell, Timothy Moffett, Jordan Whitehead, Nick Bonner, Kent Seamons, and Daniel Zappala, Brigham Young University

Available Media

The security guarantees of secure messaging applications are contingent upon users performing an authentication ceremony, which typically involves verifying the fingerprints of encryption keys. However, recent lab studies have shown that users are unable to do this without being told in advance about the ceremony and its importance. A recent study showed that even with this instruction, the time it takes users to find and complete the ceremony is excessively long—about 11 minutes. To remedy these problems, we modified Signal to include prompts for the ceremony and also simplified the ceremony itself. To gauge the effect of these changes, we conducted a between-subject user study involving 30 pairs of participants. Our study methodology includes no user training and only a small performance bonus to encourage the secure behavior. Our results show that users are able to both find and complete the ceremony more quickly in our new version of Signal. Despite these improvements, many users are still unsure or confused about the purpose of the authentication ceremony. We discuss the need for better risk communication and methods to promote trust.
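At its core, the ceremony asks both parties to compare a short digest of their encryption keys over a trusted channel. The Python sketch below is a simplified, hypothetical illustration of that idea only; Signal's actual safety numbers are derived differently (from both parties' identity keys and identifiers), and the function names here are invented for clarity.

```python
import hashlib
import hmac

def fingerprint(public_key: bytes) -> str:
    """Reduce a public key to a short, human-comparable string."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Group into 4-character chunks so the value can be read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

def ceremony_check(key_my_app_shows: bytes, key_peer_app_shows: bytes) -> bool:
    """The users' job: confirm both devices display the same fingerprint."""
    return hmac.compare_digest(fingerprint(key_my_app_shows),
                               fingerprint(key_peer_app_shows))

# A man-in-the-middle substitutes a different key, so fingerprints diverge.
genuine = bytes.fromhex("aa" * 32)
attacker = bytes.fromhex("bb" * 32)
print(ceremony_check(genuine, genuine))    # True: ceremony succeeds
print(ceremony_check(genuine, attacker))   # False: users should not trust
```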

12:30 pm–1:45 pm

Luncheon for SOUPS attendees

Grand Ballroom VI

1:45 pm–3:15 pm

Behaviors and Practices

Session Chair: James Nicholson, PaCT Lab, Northumbria University

Informal Support Networks: An Investigation into Home Data Security Practices

Norbert Nthala and Ivan Flechais, University of Oxford

Available Media

The widespread and rising adoption of information and communication technology in homes is happening at a time when data security breaches are commonplace. This has resulted in a wave of security awareness campaigns targeting the home computer user. Despite the prevalence of these campaigns, studies have shown poor adoption rates of security measures. In response, researchers have proposed solutions for securing data in the home built on interdisciplinary theories and models, but more empirical research is needed to understand the practical context, characteristics, and needs of home users in order to rigorously evaluate and inform these solutions.

To address this, we employ a two-part study to explore issues that influence or affect security practices in the home. In the first part, we conduct a qualitative Grounded Theory analysis of 65 semi-structured interviews aimed at uncovering the key factors in home user security practices, and in the second part we conduct a quantitative survey of 1128 participants to validate and generalise our initial findings. We found evidence that security practices in the home are affected by survival/outcome bias; social relationships serve as informal support networks for security in the home; and that people look for continuity of care when they seek or accept security support.

Share and Share Alike? An Exploration of Secure Behaviors in Romantic Relationships

Cheul Young Park, Cori Faklaris, Siyan Zhao, Alex Sciuto, Laura Dabbish, and Jason Hong, Carnegie Mellon University

Available Media

Security design choices often fail to take into account users' social context. Our work is among the first to examine security behavior in romantic relationships. We surveyed 195 people on Amazon Mechanical Turk about their relationship status and account sharing behavior for a cross-section of popular websites and apps (e.g., Netflix, Amazon Prime). We examine differences in account sharing behavior at different stages in a relationship and for people in different age groups and income levels. We also present a taxonomy of sharing motivations and behaviors based on the iterative coding of open-ended responses. Based on this taxonomy, we present design recommendations to support end users in three relationship stages: when they start sharing access with romantic partners; when they are maintaining that sharing; and when they decide to stop. Our findings contribute to the field of usable privacy and security by enhancing our understanding of security and privacy behaviors and needs in intimate social relationships.

Characterizing the Use of Browser-Based Blocking Extensions To Prevent Online Tracking

Arunesh Mathur, Princeton University; Jessica Vitak, University of Maryland, College Park; Arvind Narayanan and Marshini Chetty, Princeton University

Available Media

Browser-based blocking extensions such as ad blockers and tracker blockers give users a way to counter online tracking. While prior research has shown that these extensions suffer from several usability issues, we know little about real-world blocking extension use, why users choose to adopt these extensions, and how effectively these extensions protect users against online tracking. To study these questions, we conducted two online surveys examining both users and non-users of blocking extensions. We have three main findings. First, we show that both users and non-users of these extensions possess only a basic understanding of online tracking, and that participants' mental models relate only weakly to their decision to adopt these extensions. Second, we find that each type of blocking extension has a specific primary use associated with it. Finally, we find that users report that extensions only rarely break websites; when websites do break, however, users disable their extensions only if they trust and are familiar with the website. Based on our findings, we make recommendations for designing better protections against online tracking and outline directions for future work.

Can Digital Face-Morphs Influence Attitudes and Online Behaviors?

Eyal Peer, Bar-Ilan University; Sonam Samat and Alessandro Acquisti, Carnegie Mellon University

Available Media

Self-images are among the most prevalent forms of content shared on social media streams. Face-morphs are images digitally created by combining facial pictures of different individuals. In the case of self-morphs, a person's own picture is combined with that of another individual. Prior research has shown that even when individuals do not recognize themselves in self-morphs, they tend to trust self-morphed faces more and judge them more favorably. Thus, self-morphs may be used online as covert forms of targeted marketing – for instance, using consumers' pictures from social media streams to create self-morphs, and inserting the resulting self-morphs in promotional campaigns targeted at those consumers. The use of this type of personal data for highly targeted influence without individuals' awareness, and the opaque effect such artifacts may have on individuals' attitudes and behaviors, raise potential issues of consumer privacy and autonomy. However, no research to date has examined the feasibility of using self-morphs for such applications. Research on self-morphs has focused on artificial laboratory settings, raising questions regarding the practical, in-the-wild applicability of reported self-morph effects. In three experiments, we examine whether self-morphs could affect individuals' attitudes or even promote products/services, using a combination of experimental designs and dependent variables. Across the experiments, we test both designs and variables that had been used in previous research in this area and new ones that had not. Questioning prior research, however, we find no evidence that end-users react more positively to self-morphs than to control morphs composed of unfamiliar facial pictures, in either attitudes or actual behaviors.

3:15 pm–3:45 pm

Lightning Talks

  • The 3rd Wave? Inclusive Privacy and Security
    Yang Wang, Syracuse University
  • Accessible Authentication for All: An Evaluation Framework for Assessing Usability and Accessibility of Authentication
    Ronna ten Brink, MITRE Corporation
  • Just Say No to 'Just Say No': Youth and Privacy Advocacy in Latin America
    Mariel García-Montes, Massachusetts Institute of Technology
  • Reframing Usable Privacy and Security to Design for “Cyber Health”
    Cori Faklaris, Carnegie Mellon University
Available Media

3:45 pm–4:15 pm

Break with Refreshments

Grand Ballroom Foyer

4:15 pm–5:45 pm

Online Privacy

Session Chair: Heather Crawford, Florida Institute of Technology

"Privacy is not for me, it's for those rich women": Performative Privacy Practices on Mobile Phones by Women in South Asia

Nithya Sambasivan and Garen Checkley, Google; Amna Batool, Information Technology University; Nova Ahmed, North South University; David Nemer, University of Kentucky; Laura Sanely Gaytán-Lugo, Universidad de Colima; Tara Matthews, Independent Researcher; Sunny Consolvo and Elizabeth Churchill, Google
Awarded the IAPP SOUPS Privacy Award!

Available Media

Women in South Asia own fewer personal devices like laptops and phones than women elsewhere in the world. Further, cultural expectations dictate that they should share mobile phones with family members and that their digital activities be open to scrutiny by family members. In this paper, we report on a qualitative study conducted in India, Pakistan, and Bangladesh about how women perceive, manage, and control their personal privacy on shared phones. We describe a set of five performative practices our participants employed to maintain individuality and privacy, despite frequent borrowing and monitoring of their devices by family and social relations. These practices involved management of phone and app locks, content deletion, technology avoidance, and use of private modes. We present design opportunities for maintaining privacy on shared devices that are mindful of the social norms and values in the South Asian countries studied, including improving discovery of privacy controls, offering content hiding, and providing algorithmic understanding of multiple-user use cases. Our suggestions have implications for enhancing the agency of user populations whose social norms shape their phone use.

"You don't want to be the next meme": College Students' Workarounds to Manage Privacy in the Era of Pervasive Photography

Yasmeen Rashidi, Tousif Ahmed, Felicia Patel, Emily Fath, Apu Kapadia, Christena Nippert-Eng, and Norman Makoto Su, Indiana University Bloomington

Available Media

Pervasive photography and the sharing of photos on social media pose a significant challenge to undergraduates' ability to manage their privacy. Drawing from an interview-based study, we find undergraduates feel a heightened state of being surveilled by their peers and rely on innovative workarounds—negotiating the terms and ways in which they will and will not be recorded by technology-wielding others—to address these challenges. We present our findings through an experience model of the life span of a photo, including an analysis of college students' workarounds to deal with the technological challenges they encounter as they manage potential threats to privacy at each of our proposed four stages. We further propose a set of design directions that address our users' current workarounds at each stage. We argue for a holistic perspective on privacy management that considers workarounds across all these stages. In particular, designs for privacy need to more equitably distribute the technical power of determining what happens with and to a photo among all the stakeholders of the photo, including subjects and bystanders, rather than the photographer alone.

Away From Prying Eyes: Analyzing Usage and Understanding of Private Browsing

Hana Habib, Jessica Colnago, Vidya Gopalakrishnan, Sarah Pearman, Jeremy Thomas, Alessandro Acquisti, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University

Available Media

Previous research has suggested that people use the private browsing mode of their web browsers to conduct privacy-sensitive activities online, but have misconceptions about how it works and are likely to overestimate the protections it provides. To better understand how private browsing is used and whether users are at risk, we analyzed browsing data collected from over 450 participants of the Security Behavior Observatory (SBO), a panel of users consenting to researchers observing their daily computing behavior "in the wild" through software monitoring. We explored discrepancies between observed and self-reported private behaviors through a follow-up survey, distributed to both Mechanical Turk and SBO participants. The survey also allowed us to investigate why private browsing is used for certain activities. Our findings reveal that people use private browsing for practical and security reasons, beyond the expected privacy reasons. Additionally, the primary use cases for private browsing were consistent across the reported and empirical data, though there were discrepancies in how frequently private browsing is used for online activities. We conclude that private browsing does mitigate our participants' concerns about their browsing activities being revealed to other users of their computer, but participants overestimate the protection from online tracking and targeted advertising.

Online Privacy and Aging of Digital Artifacts

Reham Ebada Mohamed and Sonia Chiasson, Carleton University

Available Media

This paper explores how the user interface can help users invoke the right to be forgotten on social media by decaying content: gradually degrading digital artifacts so that they become less accessible to audiences. Through a lab study of 30 participants, we probe the concept of aging/decaying of digital artifacts. We compared three visualization techniques (pixelating, fading, and shrinking) used to decay social media content on three platforms (Facebook, Instagram, and Twitter). We report results from qualitative and quantitative analysis. Visualizations that most closely reflect how memories fade over time were most effective. We also report on participants' attitudes and concerns about how content decay relates to the protection of their online privacy. We discuss the implications of our results and provide preliminary recommendations based on our findings.
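For concreteness, one of the three techniques can be sketched in a few lines. The following Python/Pillow snippet is a hypothetical illustration of pixelation-based decay, not the authors' implementation; the linear decay schedule and one-year horizon are assumptions.

```python
from PIL import Image

def pixelate_by_age(img: Image.Image, age_days: int,
                    full_decay_days: int = 365) -> Image.Image:
    """Pixelate an image more coarsely the 'older' it is."""
    decay = min(age_days / full_decay_days, 1.0)   # 0.0 fresh .. 1.0 fully aged
    block = 1 + int(decay * 31)                    # block size grows with age
    w, h = img.size
    small = img.resize((max(1, w // block), max(1, h // block)))
    # Upscale with nearest-neighbor so the blocks stay sharp-edged.
    return small.resize((w, h), Image.NEAREST)

photo = Image.open("post.jpg")
pixelate_by_age(photo, age_days=180).save("post_decayed.jpg")
```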

Tuesday, August 14, 2018

8:00 am–9:00 am

Continental Breakfast

Grand Ballroom Foyer

9:00 am–10:30 am

Data Exposure, Compromises, and Access

Session Chair: Manya Sleeper, Google

"I've Got Nothing to Lose": Consumers' Risk Perceptions and Protective Actions after the Equifax Data Breach

Yixin Zou, Abraham H. Mhaidli, Austin McCall, and Florian Schaub, School of Information, University of Michigan
Awarded Distinguished Paper!

Available Media

Equifax, one of the three major U.S. credit bureaus, experienced a large-scale data breach in 2017. We investigated consumers' mental models of credit bureaus, how they perceive risks from this data breach, whether they took protective measures, and their reasons for inaction through 24 semi-structured interviews. We find that participants' mental models of credit bureaus are incomplete and partially inaccurate. Although many participants were aware of and concerned about the Equifax breach, few knew whether they were affected, and even fewer took protective measures after the breach. We find that this behavior is not primarily influenced by accuracy of mental models or risk awareness, but rather by costs associated with protective measures, optimism bias in estimating one's likelihood of victimization, sources of advice, and a general tendency towards delaying action until harm has occurred. We discuss legal, technical and educational implications and directions towards better protecting consumers in the credit reporting system.

Data Breaches: User Comprehension, Expectations, and Concerns with Handling Exposed Data

Sowmya Karunakaran, Kurt Thomas, Elie Bursztein, and Oxana Comanescu, Google

Available Media

Data exposed by breaches persist as a security and privacy threat for Internet users. Despite this, best practices for how companies should respond to breaches, or how to responsibly handle data after it is leaked, have yet to be identified. We bring users into this discussion through two surveys. In the first, we examine the comprehension of 551 participants on the risks of data breaches and their sentiment towards potential remediation steps. In the second survey, we ask 10,212 participants to rate their level of comfort towards eight different scenarios that capture real-world examples of security practitioners, researchers, journalists, and commercial entities investigating leaked data. Our findings indicate that users readily understand the risk of data breaches and have consistent expectations for technical and non-technical remediation steps. We also find that participants are comfortable with applications that examine leaked data—such as threat sharing or a "hacked or not" service—when the application has a direct, tangible security benefit. Our findings help to inform a broader discussion on responsible uses of data exposed by breaches.

User Comfort with Android Background Resource Accesses in Different Contexts

Daniel Votipka and Seth M. Rabin, University of Maryland; Kristopher Micinski, Haverford College; Thomas Gilray, Michelle L. Mazurek, and Jeffrey S. Foster, University of Maryland

Available Media

Android apps ask users to allow or deny access to sensitive resources the first time the app needs them. Prior work has shown that users decide whether to grant these requests based on the context. In this work, we investigate user comfort level with resource accesses that happen in a background context, meaning they occur when there is no visual indication of a resource use. For example, accessing the device location after a related button click would be considered an interactive access, and accessing location whenever it changes would be considered a background access. We conducted a 2,198-participant fractional-factorial vignette study, showing each participant a resource-access scenario in one of two mock apps, varying what event triggers the access (when) and how the collected data is used (why). Our results show that both when and why a resource is accessed are important to users' comfort. In particular, we identify multiple meaningfully different classes of accesses for each of these factors, showing that not all background accesses are regarded equally. Based on these results, we make recommendations for how designers of mobile-privacy systems can take these nuanced distinctions into account.

Let Me Out! Evaluating the Effectiveness of Quarantining Compromised Users in Walled Gardens

Orçun Çetin, Lisette Altena, Carlos Gañán, and Michel van Eeten, Delft University of Technology

Available Media

In the fight to clean up malware-infected machines, notifications from Internet Service Providers (ISPs) to their customers play a crucial role. Since stand-alone notifications are routinely ignored, some ISPs have invested in a potentially more effective mechanism: quarantining customers in so-called walled gardens. We present the first empirical study on user behavior and remediation effectiveness of quarantining infected machines in broadband networks. We analyzed 1,736 quarantining actions involving 1,208 retail customers of a medium-sized ISP in the period of April–October 2017. The first two times they are quarantined, users can easily release themselves from the walled garden and around two-thirds of them use this option. Notwithstanding this easy way out, we find that 71% of these users have actually cleaned up the infection during their first quarantine period and, of the recidivists, 48% are cleaned after their second quarantining. Users who do not self-release either contact customer support (30%) or are released automatically after 30 days (3%). They have even higher cleanup rates. Reinfection rates are quite low and most users get quarantined only once. Users that remain infected spend less time in the walled garden during subsequent quarantining events, without a major drop in cleanup rates. This suggests there are positive learning effects, rather than mere habituation to being notified and self-releasing from the walled garden. In the communications with abuse and support staff, a fraction of quarantined users ask for additional help, request a paid technician, voice frustration about being cut off, or threaten to cancel their subscriptions. All in all, walled gardens seem to be a relatively effective and usable mechanism to improve the security of end users. We reflect on our main findings in terms of how to advance this industry best practice for botnet mitigation by ISPs.

10:30 am–11:00 am

Break with Refreshments

Grand Ballroom Foyer

11:00 am–12:30 pm

Developers

Session Chair: Joe Calandrino, Federal Trade Commission

Developers Deserve Security Warnings, Too: On the Effect of Integrated Security Advice on Cryptographic API Misuse

Peter Leo Gorski and Luigi Lo Iacono, Cologne University of Applied Sciences; Dominik Wermke and Christian Stransky, Leibniz University Hannover; Sebastian Möller, Technical University Berlin; Yasemin Acar, Leibniz University Hannover; Sascha Fahl, Ruhr-University Bochum

Available Media

Cryptographic API misuse is responsible for a large number of software vulnerabilities. In many cases, developers are overburdened by the complex set of programming choices and their security implications. Past studies have identified significant challenges when using cryptographic APIs that lack a certain set of usability features (e.g., easy-to-use documentation or meaningful warning and error messages), leading to an especially high likelihood of writing functionally correct but insecure code.

To support software developers in writing more secure code, this work investigates a novel approach aimed at these hard-to-use cryptographic APIs. In a controlled online experiment with 53 participants, we study the effectiveness of API-integrated security advice that flags API misuse and places secure programming hints as guidance close to the developer. This allows us to address insecure cryptographic choices, including encryption algorithms, key sizes, modes of operation, and hashing algorithms, with helpful documentation in the guise of warnings. Whenever possible, the security advice proposes code changes to fix the responsible security issues. We find that our approach significantly improves code security: 73% of the participants who received the security advice fixed their insecure code.

We evaluate the opportunities and challenges of adopting API-integrated security advice and illustrate the potential to reduce the negative implications of cryptographic API misuse and help developers write more secure code.
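To make the class of misuse concrete: the following hypothetical Python sketch (using the `cryptography` package, which is not necessarily the API studied) shows an insecure mode choice of the kind such integrated advice would flag, and the kind of fix it would propose.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)

# Misuse such advice would warn about: ECB mode encrypts identical
# plaintext blocks to identical ciphertext blocks, leaking structure.
weak = Cipher(algorithms.AES(key), modes.ECB())

# The kind of fix it would propose: an authenticated mode (AES-GCM)
# with a fresh, unique nonce for every message.
aead = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"attack at dawn", None)
assert aead.decrypt(nonce, ciphertext, None) == b"attack at dawn"
```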

Security in the Software Development Lifecycle

Hala Assal and Sonia Chiasson, Carleton University

Available Media

We interviewed developers currently employed in industry to explore real-life software security practices during each stage of the development lifecycle. This paper explores steps taken by teams to ensure the security of their applications, how developers' security knowledge influences the process, and how security fits in (and sometimes conflicts with) the development workflow. We found a wide range of approaches to software security, if it was addressed at all. Furthermore, real-life security practices vary considerably from best practices identified in the literature. Best practices often ignore factors affecting teams' operational strategies. "Division of labour" is one example, whereby complying with best practices would require some teams to restructure and re-assign tasks—an effort typically viewed as unreasonable. Other influential factors include company culture, security knowledge, external pressure, and experiencing a security incident.

Deception Task Design in Developer Password Studies: Exploring a Student Sample

Alena Naiakshina, Anastasia Danilova, Christian Tiefenau, and Matthew Smith, University of Bonn, Germany

Available Media

Studying developer behavior is a hot topic for usable security researchers. While the usable security community has ample experience and best-practice knowledge concerning the design of end-user studies, such knowledge is still lacking for developer studies. We know from end-user studies that task design and framing can have significant effects on the outcome of the study. To offer initial insights into these effects for developer research, we extended our previous password storage study (Naiakshina et al., CCS'17) to examine the effects of deception studies with regard to developers. Our results show that there is a huge effect: only 2 out of the 20 non-primed participants even attempted a secure solution, compared to 14 out of 20 for the primed participants. In this paper, we discuss the duration of the task and contrast qualitative vs. quantitative research methods for future developer studies. In addition to these methodological contributions, we also provide further insights into why developers store passwords insecurely.
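Participants in this line of work wrote Java, but the patterns at issue translate directly. The hedged Python sketch below contrasts the insecure pattern non-primed participants tended to produce with one accepted secure approach (a salted, deliberately slow key-derivation function); the iteration count is an assumption, so follow current guidance.

```python
import hashlib
import hmac
import os

def store_insecure(password: str) -> str:
    # The insecure pattern: a fast, unsalted hash (or plaintext),
    # which is trivially crackable offline.
    return hashlib.md5(password.encode()).hexdigest()

def store_secure(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    # Salted PBKDF2: the salt defeats precomputed tables and the
    # iteration count makes each attacker guess expensive.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes,
           iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```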

API Blindspots: Why Experienced Developers Write Vulnerable Code

Daniela Seabra Oliveira, Tian Lin, and Muhammad Sajidur Rahman, University of Florida; Rad Akefirad, Auto1 Inc.; Donovan Ellis, Eliany Perez, and Rahul Bobhate, University of Florida; Lois A. DeLong and Justin Cappos, New York University; Yuriy Brun, University of Massachusetts Amherst; Natalie C. Ebner, University of Florida

Available Media

Despite the best efforts of the security community, security vulnerabilities in software are still prevalent, with new vulnerabilities reported daily and older ones stubbornly repeating themselves. One potential source of these vulnerabilities is shortcomings in the language and library APIs that developers use. Developers tend to trust APIs, but can misunderstand or misuse them, introducing vulnerabilities. We call the causes of such misuse blindspots. In this paper, we study API blindspots from the developers' perspective to: (1) determine the extent to which developers can detect API blindspots in code and (2) examine the extent to which developer characteristics (i.e., perception of code correctness, familiarity with code, confidence, professional experience, cognitive function, and personality) affect this capability. We conducted a study with 109 developers from four countries solving programming puzzles that involve Java APIs known to contain blindspots. We find that (1) The presence of blindspots correlated negatively with the developers' accuracy in answering implicit security questions and the developers' ability to identify potential security concerns in the code. This effect was more pronounced for I/O-related APIs and for puzzles with higher cyclomatic complexity. (2) Higher cognitive functioning and more programming experience did not predict better ability to detect API blindspots. (3) Developers exhibiting greater openness as a personality trait were more likely to detect API blindspots. This study has the potential to advance API security in (1) design, implementation, and testing of new APIs; (2) addressing blindspots in legacy APIs; (3) development of novel methods for developer recruitment and training based on cognitive and personality assessments; and (4) improvement of software development processes (e.g., establishment of security and functionality teams).
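The study's puzzles used Java APIs, but the phenomenon is easy to illustrate in any language. The hypothetical Python sketch below shows an I/O-related blindspot of the general kind studied: an API whose convenient form quietly introduces a vulnerability.

```python
import subprocess

def count_lines_blindspot(filename: str) -> str:
    # Blindspot: shell=True hands the string to a shell, so a filename
    # like "log.txt; rm -rf ~" silently becomes two commands.
    return subprocess.check_output(f"wc -l {filename}", shell=True, text=True)

def count_lines_safer(filename: str) -> str:
    # An argument list is passed to the program directly,
    # with no shell interpretation of the filename.
    return subprocess.check_output(["wc", "-l", filename], text=True)
```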

12:30 pm–1:45 pm

Luncheon for SOUPS attendees

Grand Ballroom VI

1:45 pm–3:15 pm

Understanding and Mindsets

Session Chair: Marshini Chetty, Princeton University

"If I press delete, it's gone" - User Understanding of Online Data Deletion and Expiration

Ambar Murillo, Andreas Kramm, Sebastian Schnorf, and Alexander De Luca, Google

Available Media

In this paper, we present the results of an interview study with 22 participants and two focus groups with 7 data deletion experts. The studies explored understanding of online data deletion and retention, as well as expiration of user data. We used different scenarios to shed light on which parts of the deletion process users understand and which they struggle with. As one of our results, we identified two major views on how online data deletion works: UI-Based and Backend-Aware (further divided into levels of detail). Their main difference is whether users think beyond the user interface. The results indicate that communicating deletion in terms of components such as servers or "the cloud" has potential. Furthermore, generic expiration periods do not seem to work, while controllable expiration periods are preferred.

Programming Experience Might Not Help in Comprehending Obfuscated Source Code Efficiently

Norman Hänsch, Friedrich-Alexander-Universität Erlangen-Nürnberg; Andrea Schankin, Karlsruhe Institute of Technology; Mykolai Protsenko, Fraunhofer Institute for Applied and Integrated Security; Felix Freiling and Zinaida Benenson, Friedrich-Alexander-Universität Erlangen-Nürnberg

Available Media

Software obfuscation is a technique to protect programs from malicious reverse engineering by explicitly making them harder to understand. We investigate the effect of two specific source code obfuscation methods on the program comprehension efforts of 66 university students playing the role of attackers in a reverse engineering experiment by partially replicating experiments of Ceccato et al. We confirm that the two obfuscation methods have a measurable negative effect on program comprehension in general but also show that this effect inversely correlates with the programming experience of attackers. So while the comprehension effectiveness of experienced programmers is generally higher than for inexperienced programmers, the comprehension gap between these groups narrows considerably if source code obfuscation is used. In extension of previous work, an investigation of the code analysis behavior of attackers reveals that there exist obfuscation techniques that significantly impede comprehension even if tool support exists to revert them, giving first supportive empirical evidence for the classical distinction between potent and resilient obfuscation techniques defined by Collberg et al. more than 20 years ago.
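For readers unfamiliar with source-level obfuscation, the hypothetical Python sketch below shows two classic techniques of the general kind studied (the abstract does not name the specific methods): identifier renaming, and an opaque predicate whose value is fixed but not obvious to a reader.

```python
# Original, readable version.
def average(values):
    return sum(values) / len(values)

# Identifier renaming: behavior is unchanged, intent is hidden.
def f1(a1):
    return sum(a1) / len(a1)

# Opaque predicate: (x * x) % 4 is never 2 for any integer x, so the
# branch below is dead code -- but a reader must prove that to ignore it.
def f2(a1, x=7):
    if (x * x) % 4 == 2:
        a1 = []          # unreachable, exists only to muddy analysis
    return sum(a1) / len(a1)
```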

"We make it a big deal in the company": Security Mindsets in Organizations that Develop Cryptographic Products

Julie M. Haney and Mary F. Theofanos, National Institute of Standards and Technology; Yasemin Acar, Leibniz University Hannover; Sandra Spickard Prettyman, Culture Catalyst

Available Media

Cryptography is an essential component of modern computing. Unfortunately, implementing cryptography correctly is a non-trivial undertaking. Past studies have supported this observation by revealing a multitude of errors and developer pitfalls in the cryptographic implementations of software products. However, the emphasis of these studies was on individual developers; there is an obvious gap in more thoroughly understanding cryptographic development practices of organizations. To address this gap, we conducted 21 in-depth interviews of highly experienced individuals representing organizations that include cryptography in their products. Our findings suggest a security mindset not seen in other research results, demonstrated by strong organizational security culture and the deep expertise of those performing cryptographic development. This mindset, in turn, guides the careful selection of cryptographic resources and informs formal, rigorous development and testing practices. The enhanced understanding of organizational practices encourages additional research initiatives to explore variations in those implementing cryptography, which can aid in transferring lessons learned from more security-mature organizations to the broader development community through educational opportunities, tools, and other mechanisms. The findings also support past studies that suggest that the usability of cryptographic resources may be deficient, and provide additional suggestions for making these resources more accessible and usable to developers of varying skill levels.

A Comparative Usability Study of Key Management in Secure Email

Scott Ruoti, University of Tennessee; Jeff Andersen, Tyler Monson, Daniel Zappala, and Kent Seamons, Brigham Young University

Available Media

We conducted a user study that compares three secure email tools that share a common user interface and differ only by key management scheme: passwords, public key directory (PKD), and identity-based encryption (IBE). Our work is the first comparative (i.e., A/B) usability evaluation of three different key management schemes and utilizes a standard quantitative metric for cross-system comparisons. We also share qualitative feedback from participants that provides valuable insights into user attitudes regarding each key management approach and secure email generally. The study serves as a model for future secure email research with A/B studies, standard metrics, and the two-person study methodology.

3:15 pm–3:45 pm

Lightning Talks

  • http://sillystatistics.org
    Janne Lindqvist, Rutgers University
  • Privacy Goals and Misconceptions Surrounding Privacy Tools
    Francis Djabri, Mozilla
  • Getting Involved in Standards—Seeing Your Work Adopted in the Real World
    Samuel Weiler, Massachusetts Institute of Technology, World Wide Web Consortium
  • User Perceptions of Security and Confidentiality when Performing Tertiary Authentication
    Hervé Saint-Louis, University of Toronto
Available Media

3:45 pm–4:15 pm

Break with Refreshments

Grand Ballroom Foyer

4:15 pm–5:45 pm

Models, Beliefs, and Perceptions

Session Chair: Adam Aviv, U.S. Naval Academy

When is a Tree Really a Truck? Exploring Mental Models of Encryption

Justin Wu and Daniel Zappala, Brigham Young University
Distinguished Paper Award Honorable Mention

Available Media

Mental models are a driving force in the way users interact with systems, and thus have important implications for design. This is especially true for encryption because the cost of mistakes can be disastrous. Nevertheless, until now, mental models of encryption have only been tangentially explored as part of more broadly focused studies. In this work, we present the first directed effort at exploring user perceptions of encryption: both mental models of what encryption is and how it works as well as views on its role in everyday life. We performed 19 semi-structured phone interviews with participants across the United States, using both standard interview techniques and a diagramming exercise where participants visually demonstrated their perception of the encryption process. We identified four mental models of encryption which, though varying in detail and complexity, ultimately reduce to a functional abstraction of restrictive access control and naturally coincide with a model of symmetric encryption. Additionally, we find the impersonal use of encryption to be an important part of participants' models of security, with a widespread belief that encryption is frequently employed by service providers to encrypt data at rest. In contrast, the personal use of encryption is viewed as reserved for illicit or immoral activity, or for the paranoid.
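The functional abstraction the authors describe, access restricted to whoever holds the key, is essentially symmetric encryption. As a hedged illustration (the package choice is ours, not the paper's), a minimal Python sketch:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # whoever holds this key can read the data
box = Fernet(key)

token = box.encrypt(b"private message")   # without the key, token is opaque
print(box.decrypt(token))                 # the same key both locks and unlocks
```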

"It's Scary…It's Confusing…It's Dull": How Cybersecurity Advocates Overcome Negative Perceptions of Security

Julie M. Haney and Wayne G. Lutters, University of Maryland, Baltimore County

Available Media

Cyber attacks are on the rise, but individuals and organizations fail to implement basic security practices and technologies. Cybersecurity advocates are security professionals who encourage and facilitate the adoption of these best practices. To be successful, they must motivate their audiences to engage in beneficial security behaviors, often first by overcoming negative perceptions that security is scary, confusing, and dull. However, there has been little prior research to explore how they do so. To address this gap, we conducted an interview study of 28 cybersecurity advocates from industry, higher education, government, and non-profits. Findings reveal that advocates must first establish trust with their audience and address concerns by being honest about risks while striving to be empowering. They address confusion by establishing common ground between security experts and non-experts, educating, providing practical recommendations, and promoting usable security solutions. Finally, to overcome perceptions that security is uninteresting, advocates incentivize behaviors and employ engaging communication techniques via multiple communication channels. This research provides insight into real-world security advocacy techniques in a variety of contexts, permitting an investigation into how advocates leverage general risk communication practices and where they have security-specific innovations. These practices may then inform the design of security interfaces and training. The research also suggests the value of establishing cybersecurity advocacy as a new work role within the security field.

Introducing the Cybersurvival Task: Assessing and Addressing Staff Beliefs about Effective Cyber Protection

James Nicholson, Lynne Coventry, and Pam Briggs, PaCT Lab, Northumbria University

Available Media

Despite increased awareness of cybersecurity incidents and consequences, organisations still struggle to convince employees to comply with information security policies and engage in effective cyber prevention. Here we introduce and evaluate the Cybersurvival Task, a ranking task that highlights cybersecurity misconceptions amongst employees and serves as a reflective exercise for security experts. We describe an initial deployment and refinement of the task in one organisation and a second deployment and evaluation in another. We show how the Cybersurvival Task could be used to detect 'shadow security' cultures within an organisation and illustrate how a group discussion about the importance of different cyber behaviours led to the weakening of staff's cybersecurity positions (i.e., more disagreement with experts). We also discuss its use as a tool to inform organisational policy-making and the design of campaigns and training events, ensuring that they are better tailored to specific staff groups and designed to target problematic behaviours.

Ethics Emerging: the Story of Privacy and Security Perceptions in Virtual Reality

Devon Adams, Alseny Bah, and Catherine Barwulor, University of Maryland Baltimore County; Nureli Musaby, James Madison University; Kadeem Pitkin, College of Westchester; Elissa M. Redmiles, University of Maryland

Available Media

Virtual reality (VR) technology aims to transport the user to a virtual world, fully immersing them in an experience entirely separate from the real world. VR devices can use sensor data to draw deeply personal inferences, for example, about medical conditions and emotions. Further, VR environments can enable virtual crimes (e.g., theft, assault on virtual representations of the user) from which users have been shown to experience emotional pain similar in magnitude to physical crimes. As such, VR may involve especially sensitive user data and interactions. To effectively mitigate such risks and design for safer experiences, we must understand end-user perceptions of VR risks and how, if at all, developers are considering and addressing those risks. In this paper, we present the first work on VR security and privacy perceptions: a mixed-methods study involving semi-structured interviews with 20 VR users and developers, a survey of VR privacy policies, and an ethics co-design study with VR developers. We establish a foundational understanding of users' concerns about privacy, security, and well-being (both physical and psychological) in VR; raise concerns about the state of VR privacy policies; and contribute a concrete VR developer "code of ethics", created by developers, for developers.

6:00 pm–7:00 pm

Happy Hour

Grand Ballroom Foyer

7:00 pm–8:00 pm

SOUPS Town Hall Meeting

Grand Ballroom I–IV

The SOUPS Town Hall Meeting is a time for the organizing and steering committees to interact with attendees, listen to concerns, and gather feedback regarding the future of the SOUPS conference and community. Everyone is welcome to attend and participate.