SOUPS 2016 Program

The full Proceedings published by USENIX for the conference are available for download below. Individual papers can also be downloaded from the presentation page. Copyright to the individual works is retained by the author(s).

Proceedings Front Matter
Proceedings Cover | Title Page and List of Organizers | Table of Contents | Message from the Program Chairs

Full Proceedings PDFs
 SOUPS 2016 Full Proceedings (PDF)
 SOUPS 2016 Proceedings Interior (PDF, best for mobile devices)
 SOUPS 2016 Errata Slip (PDF)

Full Proceedings ePub (for iPad and most eReaders)
 SOUPS 2016 Full Proceedings (ePub)

Full Proceedings Mobi (for Kindle)
 SOUPS 2016 Full Proceedings (Mobi)

Downloads for Registered Conference Attendees

Attendee Files 
SOUPS 2016 Proceedings Archive (ZIP)
SOUPS 2016 Attendee List (PDF)

 

Wednesday, June 22, 2016

7:30 am–8:30 am Wednesday

Continental Breakfast

Ballroom Foyer

8:30 am–10:00 am Wednesday

Full-Day Session

Denver Ballroom 1–2

Full-Day Session

Denver Ballroom 3

Half-Day Session (Morning)

Denver Ballroom 4

10:00 am–10:30 am Wednesday

Break with Refreshments

Ballroom Foyer

10:30 am–12:10 pm Wednesday

Full-Day Session

Denver Ballroom 1–2

(Continued from previous session)

Full-Day Session

Denver Ballroom 3

(Continued from previous session)

Half-Day Session (Morning)

Denver Ballroom 4

(Continued from previous session)

12:10 pm–1:40 pm Wednesday

Lunch (on your own)

1:40 pm–3:00 pm Wednesday

Full-Day Session

Denver Ballroom 1–2

(Continued from previous session)

Full-Day Session

Denver Ballroom 3

(Continued from previous session)

Half-Day Session (Afternoon)

Denver Ballroom 5

3:00 pm–3:30 pm Wednesday

Break with Refreshments

Ballroom Foyer

3:30 pm–5:00 pm Wednesday

Full-Day Session

Denver Ballroom 1–2

(Continued from previous session)

Full-Day Session

Denver Ballroom 3

(Continued from previous session)

Half-Day Session (Afternoon)

Denver Ballroom 5

(Continued from previous session)

5:15 pm–7:00 pm Wednesday

SOUPS 2016 Poster Session and Happy Hour

Colorado Ballroom A–E

Check out the cool new ideas and the latest preliminary work on display at the Poster Session and Happy Hour. Take advantage of an opportunity to mingle with colleagues who may be interested in the same area while enjoying complimentary food and drinks. View the full list of accepted posters.

 

Thursday, June 23, 2016

All sessions will be held in the Denver Ballroom unless otherwise noted.

7:30 am–8:30 am Thursday

Continental Breakfast

Ballroom Foyer

8:30 am–9:00 am Thursday

Welcome and Awards Presentations

Session Chair: Mary Ellen Zurko, Cisco Systems

Distinguished Poster Award
Distinguished Paper Award
IAPP SOUPS Privacy Award
John Karat Usable Privacy and Security Student Research Award, sponsored by Cisco

9:00 am–10:00 am Thursday

Keynote Address

Informing (Public) Policy

Lorrie Faith Cranor, Carnegie Mellon University

Lorrie Faith Cranor joined the US Federal Trade Commission as Chief Technologist in January 2016. She is on leave from Carnegie Mellon University, where she is a Professor of Computer Science and of Engineering and Public Policy, Director of the CyLab Usable Privacy and Security Laboratory (CUPS), and Co-director of the MSIT-Privacy Engineering master's program. She also co-founded Wombat Security Technologies, an information security awareness training company. Cranor has authored over 150 research papers on online privacy and usable security, and has played a central role in establishing the usable privacy and security research community, including her founding of the Symposium on Usable Privacy and Security (SOUPS). Cranor holds a doctorate in Engineering and Policy from Washington University in St. Louis. She is a Fellow of the ACM and IEEE.

Read about Cranor's work at the Commission in the Tech@FTC blog, and follow her on Twitter: @TechFTC and @lorrietweet.


Available Media
10:00 am–10:30 am Thursday

Break with Refreshments

Ballroom Foyer

10:30 am–12:00 pm Thursday

Security Interfaces

Session Chair: Serge Egelman, University of California, Berkeley, and International Computer Science Institute

Rethinking Connection Security Indicators

Adrienne Porter Felt, Robert W. Reeder, Alex Ainslie, Helen Harris, and Max Walker, Google; Christopher Thompson, University of California, Berkeley; Mustafa Emre Acer, Elisabeth Morant, and Sunny Consolvo, Google

We propose a new set of browser security indicators, based on user research and an understanding of the design challenges faced by browsers. To motivate the need for new security indicators, we critique existing browser security indicators and survey 1,329 people about Google Chrome's indicators. We then evaluate forty icons and seven complementary strings by surveying thousands of respondents about their perceptions of the candidates. Ultimately, we select and propose three indicators. Our proposed indicators have been adopted by Google Chrome, and we hope to motivate others to update their security indicators as well.

Available Media

A Week to Remember: The Impact of Browser Warning Storage Policies

Joel Weinberger and Adrienne Porter Felt, Google

When someone decides to ignore an HTTPS error warning, how long should the browser remember that decision? If they return to the website in five minutes, an hour, a day, or a week, should the browser show them the warning again or respect their previous decision? There is no clear industry consensus, with eight major browsers exhibiting four different HTTPS error exception storage policies.

Ideally, a browser would not ask someone about the same warning over and over again. If a user believes the warning is a false alarm, repeated warnings undermine the browser's trustworthiness without providing a security benefit. However, some people might change their mind, and we do not want one security mistake to become permanent.

We evaluated six storage policies with a large-scale, multi-month field experiment. We found substantial differences between the policies and that one of the storage policies achieved more of our goals than the rest. Google Chrome 45 adopted our proposal, and it has proved successful since deployment. Subsequently, we ran Mechanical Turk and Google Consumer Surveys to learn about user expectations for warnings. Respondents generally lacked knowledge about Chrome's new storage policy, but we remain satisfied with our proposal due to the behavioral benefits we have observed in the field.
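To make the notion of an exception storage policy concrete, here is a minimal Python sketch (not Chrome's actual implementation; the ExceptionStore class and its method names are invented for illustration) of a store that remembers a click-through decision for one week and re-shows the warning once the decision expires:

# Illustrative sketch of a one-week HTTPS error exception storage policy.
import time

WEEK_SECONDS = 7 * 24 * 60 * 60  # hypothetical one-week retention period

class ExceptionStore:
    def __init__(self, ttl=WEEK_SECONDS):
        self.ttl = ttl
        self._decisions = {}  # (host, cert_fingerprint) -> time of decision

    def remember_proceed(self, host, cert_fingerprint):
        """Record that the user clicked through the warning for this host/cert."""
        self._decisions[(host, cert_fingerprint)] = time.time()

    def should_warn(self, host, cert_fingerprint):
        """Show the warning again only if no unexpired decision exists."""
        decided_at = self._decisions.get((host, cert_fingerprint))
        if decided_at is None:
            return True
        if time.time() - decided_at > self.ttl:
            del self._decisions[(host, cert_fingerprint)]  # decision expired
            return True
        return False

store = ExceptionStore()
store.remember_proceed("self-signed.example.com", "ab:cd:ef")
assert not store.should_warn("self-signed.example.com", "ab:cd:ef")

A shorter ttl re-asks sooner (safer but more annoying); a longer one respects the user's decision longer, which is the trade-off the paper's field experiment measures.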

Available Media

Follow My Recommendations: A Personalized Privacy Assistant for Mobile App Permissions

Bin Liu, Mads Schaarup Andersen, Florian Schaub, Hazim Almuhimedi, Shikun Zhang, Norman Sadeh, Alessandro Acquisti, and Yuvraj Agarwal, Carnegie Mellon University
IAPP SOUPS Privacy Award!

Modern smartphone platforms have millions of apps, many of which request permissions to access private data and resources, like user accounts or location. While these smartphone platforms provide varying degrees of control over these permissions, the sheer number of decisions that users are expected to manage has been shown to be unrealistically high. Prior research has shown that users are often unaware of, if not uncomfortable with, many of their permission settings. Prior work also suggests that it is theoretically possible to predict many of the privacy settings a user would want by asking the user a small number of questions. However, this approach has neither been operationalized nor evaluated with actual users before. We report on a field study (n=72) in which we implemented and evaluated a Personalized Privacy Assistant (PPA) with participants using their own Android devices. The results of our study are encouraging. We find that 78.7% of the recommendations made by the PPA were adopted by users. Following initial recommendations on permission settings, participants were motivated to further review and modify their settings with daily “privacy nudges.” Despite showing substantial engagement with these nudges, participants only changed 5.1% of the settings previously adopted based on the PPA’s recommendations. The PPA and its recommendations were perceived as useful and usable. We discuss the implications of our results for mobile permission management and the design of personalized privacy assistant solutions.
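As an illustration of the profile-based recommendation idea the paper builds on, here is a toy Python sketch; the profiles, questions, and permission labels are invented for illustration and do not reproduce the PPA's actual models:

# Toy sketch: cluster-derived privacy "profiles" map (app category, permission)
# pairs to allow/deny defaults, and a few questions assign a user to a profile.
PROFILES = {
    "conservative": {("social", "location"): "deny", ("games", "contacts"): "deny"},
    "permissive":   {("social", "location"): "allow", ("games", "contacts"): "allow"},
}

def assign_profile(answers):
    """Assign a profile from short questionnaire answers (illustrative rule)."""
    denies = sum(1 for a in answers.values() if a == "uncomfortable")
    return "conservative" if denies >= len(answers) / 2 else "permissive"

def recommend(profile, app_category, permission):
    """Look up the profile's default for this permission, else defer to the user."""
    return PROFILES[profile].get((app_category, permission), "ask user")

profile = assign_profile({"share location with social apps?": "uncomfortable",
                          "let games read contacts?": "comfortable"})
print(profile, recommend(profile, "social", "location"))  # conservative deny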

Available Media

"They Keep Coming Back Like Zombies": Improving Software Updating Interfaces

Arunesh Mathur, Josefine Engel, Sonam Sobti, Victoria Chang, and Marshini Chetty, University of Maryland, College Park

Users often do not install security-related software updates, leaving their devices open to exploitation by attackers. We are beginning to understand what factors affect this software updating behavior, but the question of how to improve current software updating interfaces remains unanswered. In this paper, we begin tackling this question by studying software updating behaviors, designing alternative updating interfaces, and evaluating these designs. We describe a formative study of 30 users' software updating practices, the low-fidelity prototype we developed to address the issues identified in the formative work, and an evaluation of our prototype with 22 users. Our findings suggest that updates interrupt users, that users lack sufficient information to decide whether or not to update, and that users vary in how they want to be notified and provide consent for updates. Based on our study, we make four recommendations to improve desktop updating interfaces and outline socio-technical considerations around software updating that will ultimately affect end-user security.

Available Media
12:00 pm–1:30 pm Thursday

Lunch (on your own)

1:30 pm–3:00 pm Thursday

Behavior 1

Session Chair: Heather Patterson, Intel Labs and NYU Information Law Institute

Why Do They Do What They Do?: A Study of What Motivates Users to (Not) Follow Computer Security Advice

Michael Fagan and Mohammad Maifi Hasan Khan, University of Connecticut

Usable security researchers have long been interested in what users do to keep their devices and data safe and how that compares to recommendations. Additionally, experts have long debated and studied the psychological underpinnings and motivations for users to do what they do, especially when such behavior is seen as risky, at least to experts. This study investigates user motivations through a survey conducted on Mechanical Turk, which resulted in responses from 290 participants. We use a rational decision model to guide our design, as well as current thought on human motivation in general and in the realm of computer security. Through quantitative and qualitative analysis, we identify key gaps in perception between those who follow common security advice (i.e., update software, use a password manager, use 2FA, change passwords) and those who do not and help explain participants' motivations behind their decisions. Additionally, we find that social considerations are trumped by individualized rationales.

Available Media

Expecting the Unexpected: Understanding Mismatched Privacy Expectations Online

Ashwini Rao, Florian Schaub, Norman Sadeh, and Alessandro Acquisti, Carnegie Mellon University; Ruogu Kang, Facebook

Online privacy policies are the primary mechanism for informing users about the data practices of online services. In practice, users ignore privacy policies because they are long and complex to read. Since users do not read privacy policies, their expectations regarding the data practices of online services may not match a service's actual data practices. Mismatches may result in users exposing themselves to unanticipated privacy risks, such as unknowingly sharing personal information with online services. One approach for mitigating privacy risks is to provide simplified privacy notices, in addition to privacy policies, that highlight unexpected data practices. However, identifying mismatches between user expectations and services' practices is challenging. We propose and validate a practical approach for studying Web users' privacy expectations and identifying mismatches with practices stated in privacy policies. We conducted a user study with 240 participants and 16 websites, and identified mismatches in collection, sharing, and deletion data practices. We discuss the implications of our results for the design of usable privacy notices, for service providers, and for public policy.
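To illustrate the kind of expectation/practice comparison such an approach involves, a minimal Python sketch follows; the survey responses and practice labels are invented and do not reproduce the study's instrument:

# Illustrative sketch: flag a mismatch when most surveyed users expect one
# data practice but the privacy policy states another.
from collections import Counter

def majority_expectation(responses):
    """Most common expectation among survey responses, e.g. 'no'."""
    return Counter(responses).most_common(1)[0][0]

def find_mismatches(expectations, stated_practices):
    """Compare users' majority expectation with the policy's stated practice."""
    mismatches = {}
    for practice, responses in expectations.items():
        expected = majority_expectation(responses)
        stated = stated_practices.get(practice)
        if expected != stated:
            mismatches[practice] = (expected, stated)
    return mismatches

expectations = {"location collection": ["no", "no", "yes"],
                "sharing with advertisers": ["no", "no", "no"]}
stated = {"location collection": "no", "sharing with advertisers": "yes"}
print(find_mismatches(expectations, stated))
# {'sharing with advertisers': ('no', 'yes')}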

Available Media

Do or Do Not, There Is No Try: User Engagement May Not Improve Security Outcomes

Alain Forget, Sarah Pearman, Jeremy Thomas, Alessandro Acquisti, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University; Serge Egelman and Marian Harbach, International Computer Science Institute; Rahul Telang, Carnegie Mellon University

Computer security problems often occur when there are disconnects between users' understanding of their role in computer security and what is expected of them. To help users make good security decisions more easily, we need insights into the challenges they face in their daily computer usage. We built and deployed the Security Behavior Observatory (SBO) to collect data on user behavior and machine configurations from participants' home computers. Combining SBO data with user interviews, this paper presents a qualitative study comparing users' attitudes, behaviors, and understanding of computer security to the actual states of their computers. Qualitative inductive thematic analysis of the interviews produced "engagement" as the overarching theme, whereby participants with greater engagement in computer security and maintenance did not necessarily have more secure computer states. Thus, user engagement alone may not be predictive of computer security. We identify several other themes that inform future directions for better design and research into security interventions. Our findings emphasize the need for better understanding of how users' computers get infected, so that we can more effectively design user-centered mitigations.

Available Media

Lightning Talks and Demos

Session Chair: Elizabeth Stobert, ETH Zürich

  • Scalable Consent and How to Instrument It for Research
    Ken Klingenstein, Internet2
  • Do System-Generated Notifications and Security Warnings Blur Together? Insights from the Neurobiological Phenomenon of Generalization
    Anthony Vance and Bonnie Brinton Anderson, Brigham Young University
  • (Demo) PrivySeal: A Smart Privacy Assistant for Installing Cloud Apps 
    Hamza Harkous, École Polytechnique Fédérale de Lausanne (EPFL)
Available Media
3:00 pm–3:30 pm Thursday

Break with Refreshments

Ballroom Foyer

3:30 pm–5:20 pm Thursday

Encryption and Surveillance

Session Chair: Heather Richter Lipford, University of North Carolina at Charlotte

An Inconvenient Trust: User Attitudes toward Security and Usability Tradeoffs for Key-Directory Encryption Systems

Wei Bai, Doowon Kim, Moses Namara, and Yichen Qian, University of Maryland, College Park; Patrick Gage Kelley, University of New Mexico; Michelle L. Mazurek, University of Maryland, College Park

Many critical communications now take place digitally, but recent revelations demonstrate that these communications can often be intercepted. To achieve true message privacy, users need end-to-end message encryption, in which the communications service provider is not able to decrypt the content. Historically, end-to-end encryption has proven extremely difficult for people to use correctly, but recently tools like Apple’s iMessage and Google’s End-to-End have made it more broadly accessible by using key-directory services. These tools (and others like them) sacrifice some security properties for convenience, which alarms some security experts, but little is known about how average users evaluate these tradeoffs. In a 52-person interview study, we asked participants to complete encryption tasks using both a traditional key-exchange model and a key-directory-based registration model. We also described the security properties of each (varying the order of presentation) and asked participants for their opinions. We found that participants understood the two models well and made coherent assessments about when different tradeoffs might be appropriate. Our participants recognized that the less-convenient exchange model was more secure overall, but found the security of the registration model to be “good enough” for many everyday purposes.

Available Media

User Attitudes Toward the Inspection of Encrypted Traffic

Scott Ruoti and Mark O'Neill, Brigham Young University and Sandia National Laboratories; Daniel Zappala and Kent Seamons, Brigham Young University

This paper reports the results of a survey of 1,976 individuals regarding their opinions on TLS inspection, a controversial technique that can be used for both benevolent and malicious purposes. Responses indicate that participants hold nuanced opinions on the security and privacy trade-offs: most recognize legitimate uses for the practice but are also concerned about threats from hackers or government surveillance. There is strong support for notification and consent when a system intercepts encrypted traffic, although this support varies depending on the situation. Identity theft is a significant concern about malicious uses of TLS inspection; many participants would react negatively, and some would change their behavior, if they discovered inspection occurring without their knowledge. We also find that a small but significant number of participants are jaded by the current state of affairs and have lost any expectation of privacy.

Available Media

Snooping on Mobile Phones: Prevalence and Trends

Diogo Marques, Universidade de Lisboa; Ildar Muslukhov, University of British Columbia; Tiago Guerreiro, Universidade de Lisboa; Konstantin Beznosov, University of British Columbia; Luís Carriço, Universidade de Lisboa
Distinguished Paper Award!

Personal mobile devices keep private information which people other than the owner may try to access. Thus far, it has been unclear how common it is for people to snoop on one another’s devices. Through an anonymity-preserving survey experiment, we quantify the pervasiveness of snooping attacks, defined as "looking through someone else’s phone without their permission." We estimated the 1-year prevalence to be 31% in an online participant pool. Weighted to the U.S. population, the data indicates that 1 in 5 adults snooped on at least one other person’s phone, just in the year before the survey was conducted. We found snooping attacks to be especially prevalent among young people, and among those who are themselves smartphone users. In a follow-up study, we found that, among smartphone users, depth of adoption, like age, also predicts the probability of engaging in snooping attacks. In particular, the more people use their devices for personal purposes, the more likely they are to snoop on others, possibly because they become aware of the sensitive information that is kept, and how to access it. These findings suggest that, all else remaining equal, the prevalence of snooping attacks may grow as more people adopt smartphones, and they motivate further effort to improve defenses.

Available Media

Expert and Non-Expert Attitudes towards (Secure) Instant Messaging

Alexander De Luca, Google; Sauvik Das, Carnegie Mellon University; Martin Ortlieb, Iulia Ion, and Ben Laurie, Google

In this paper, we present results from an online survey with 1,510 participants and an interview study with 31 participants on (secure) mobile instant messaging. Our goal was to uncover how much of a role security and privacy played in people's decisions to use a mobile instant messenger. In the interview study, we recruited a balanced sample of IT security experts and non-experts, as well as an equal split of users of mobile instant messengers that are advertised as being more secure and/or private (e.g., Threema) than traditional mobile IMs. Our results suggest that peer influence is what primarily drives people to use a particular mobile IM, even for secure/private IMs, and that security and privacy play minor roles.

Available Media

Intuitions, Analytics, and Killing Ants: Inference Literacy of High School-educated Adults in the US

Jeffrey Warshaw, University of California, Santa Cruz; Nina Taft and Allison Woodruff, Google, Inc.

Analytic systems increasingly allow companies to draw inferences about users’ characteristics, yet users may not fully understand these systems due to their complex and often unintuitive nature. In this paper, we investigate inference literacy: the beliefs and misconceptions people have about how companies collect and make inferences from their data. We interviewed 21 non-student participants with a high school education, finding that few believed companies can make the type of deeply personal inferences that companies now routinely make through machine learning. Instead, most participants’ inference literacy beliefs clustered around one of two main concepts: one cluster believed companies make inferences about a person based largely on a priori stereotyping, using directly gathered demographic data; the other cluster believed that companies make inferences based on computer processing of online behavioral data, but often expected these inferences to be limited to straightforward intuitions. We also find evidence that cultural models related to income and ethnicity influence the assumptions that users make about their own role in the data economy. We share implications for research, design, and policy on tech savviness, digital inequality, and potential inference literacy interventions.

Available Media
5:20 pm–5:45 pm Thursday

Lightning Talks and Demos

Session Chair: Elizabeth Stobert, ETH Zürich

  • Making Two-Secret Key Derivation Work for People (Demo)
    Jeffrey Goldberg, AgileBits, Inc.
6:00 pm–7:30 pm Thursday

SOUPS 2016 Symposium Reception

Outdoor Plaza

Mingle with fellow attendees in the Outdoor Plaza for the Symposium Reception. Enjoy dinner, drinks, and the chance to connect with other attendees, speakers, and conference organizers.

Friday, June 24, 2016

All sessions will be held in the Denver Ballroom unless otherwise noted.

7:30 am–8:30 am Friday

Continental Breakfast

8:30 am–10:00 am Friday

Authentication

Session Chair: Sascha Fahl, CISPA, Saarland University

Understanding Password Choices: How Frequently Entered Passwords Are Re-used across Websites

Rick Wash and Emilee Rader, Michigan State University; Ruthie Berman, Macalester College; Zac Wellmer, Michigan State University

From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study that combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people re-use each password on 1.7–3.4 different websites, that they re-use passwords that are more complex, and that they mostly re-use passwords they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.

Available Media

A Study of Authentication in Daily Life

Shrirang Mare, Dartmouth College; Mary Baker, HP Labs; Jeremy Gummeson, Disney Research

We report on a wearable digital diary study of 26 participants that explores people's daily authentication behavior across a wide range of targets (phones, PCs, websites, doors, cars, etc.) using a wide range of authenticators (passwords, PINs, physical keys, ID badges, fingerprints, etc.). Our goal is to gain an understanding of how much of a burden different kinds of authentication place on people, so that we can evaluate what kinds of improvements would most benefit them. We found that on average 25% of our participants' authentications employed physical tokens such as car keys, which suggests that token-based authentication, in addition to password authentication, is a worthy area for improvement. We also found that our participants' authentication behavior and opinions about authentication varied greatly, so any particular solution might not please everyone. We observed a surprisingly high (3–12%) false reject rate across many types of authentication. We present the design and implementation of the study itself, since wearable digital diary studies may prove useful for others exploring similar topics of human behavior. Finally, we provide an example use of participants' logs of authentication events as simulation workloads for investigating the possible energy consumption of a "universal authentication" device.

Available Media

Use the Force: Evaluating Force-Sensitive Authentication for Mobile Devices

Katharina Krombholz, SBA Research and Ruhr-University Bochum; Thomas Hupperich and Thorsten Holz, Ruhr-University Bochum

Modern, off-the-shelf smartphones provide a rich set of possible touchscreen interactions, but knowledge-based authentication schemes still rely on simple digit or character input. Previous studies examined the shortcomings of such schemes based on unlock patterns, PINs, and passcodes.

In this paper, we propose to integrate pressure-sensitive touchscreen interactions into knowledge-based authentication schemes. By adding a (practically) invisible, pressure-sensitive component, users can select stronger PINs that are harder for a shoulder surfer to observe. We conducted a within-subjects lab study (n = 50) comparing our approach, termed force-PINs, with standard four-digit and six-digit PINs in terms of usability, along with a comprehensive security evaluation. In addition, we conducted a field study that demonstrated lower authentication overhead. Finally, we found that force-PINs let users select higher-entropy PINs that are more resilient to shoulder-surfing attacks, with minimal impact on usability.
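To convey the basic idea of a force-PIN, here is an illustrative Python sketch in which each digit carries one hidden pressure bit; the binary light/deep threshold and the entropy calculation are simplifying assumptions, not the paper's exact design:

# Illustrative sketch: each PIN digit is entered with a light or a deep press,
# so every digit carries an extra hidden bit a shoulder surfer cannot see.
import math

PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure separating light/deep

def encode_force_pin(touches):
    """Map (digit, pressure) touch events to a digit+force secret string."""
    return "".join(
        f"{digit}{'D' if pressure >= PRESSURE_THRESHOLD else 'L'}"
        for digit, pressure in touches
    )

def keyspace_bits(num_digits, force_levels=2):
    """Entropy of a PIN with num_digits digits and force_levels pressure levels."""
    return num_digits * math.log2(10 * force_levels)

secret = encode_force_pin([(1, 0.2), (3, 0.9), (3, 0.1), (7, 0.8)])
print(secret)            # "1L3D3L7D"
print(keyspace_bits(4))  # ~17.3 bits vs. ~13.3 bits for a plain 4-digit PIN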

Available Media

Ask Me Again But Don't Annoy Me: Evaluating Re-authentication Strategies for Smartphones

Lalit Agarwal, Hassan Khan, and Urs Hengartner, University of Waterloo

Re-authenticating users may be necessary for smartphone authentication schemes that leverage user behaviour, device context, or task sensitivity. However, due to the unpredictable nature of re-authentication, users may get annoyed when they have to use the default, non-transparent authentication prompt for re-authentication. We address this concern by proposing several re-authentication configurations with varying levels of screen transparency and an optional time delay before displaying the authentication prompt. We conduct user studies with 30 participants to evaluate the usability and security perceptions of these configurations. We find that participants respond positively to our proposed changes and use the time delay to complete their current task while anticipating an authentication prompt. Though our findings indicate no differences in task performance across these configurations, we find that participants' preferences for the configurations are context-based. They generally prefer the re-authentication configuration with a non-transparent background for sensitive applications, such as banking and photo apps, while their preferences lean toward convenient, usable configurations for medium- and low-sensitivity apps or while they are using their devices at home. We conclude with suggestions to improve the design of our proposed configurations as well as a discussion of guidelines for future implementations of re-authentication schemes.
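As a rough illustration of what such re-authentication configurations might look like in code, here is a hypothetical Python sketch; the field names and threshold values are invented, and the context rules merely mirror the preferences reported above:

# Hypothetical sketch: each configuration pairs a prompt-background
# transparency level with an optional grace delay before the prompt appears.
from dataclasses import dataclass

@dataclass
class ReauthConfig:
    transparency: float   # 0.0 = fully opaque prompt, 1.0 = fully transparent
    delay_seconds: int    # grace period before the prompt is shown

def pick_config(app_sensitivity, at_home):
    """Context-based choice mirroring participants' stated preferences."""
    if app_sensitivity == "high":            # e.g. banking or photo apps
        return ReauthConfig(transparency=0.0, delay_seconds=0)
    if at_home or app_sensitivity == "low":
        return ReauthConfig(transparency=0.75, delay_seconds=10)
    return ReauthConfig(transparency=0.5, delay_seconds=5)

print(pick_config("high", at_home=False))  # opaque, immediate prompt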

Available Media
10:00 am–10:30 am Friday

Break with Refreshments

Ballroom Foyer

10:30 am–11:40 am Friday

Behavior 2

Session Chair: Alain Forget, Google

Turning Contradictions into Innovations or: How We Learned to Stop Whining and Improve Security Operations

Sathya Chandran Sundaramurthy, University of South Florida; John McHugh, RedJack, LLC; Xinming Ou, University of South Florida; Michael Wesch and Alexandru G. Bardas, Kansas State University; S. Raj Rajagopalan, Honeywell Labs

Efforts to improve the efficiency of security operation centers (SOCs) have emphasized building tools for analysts or understanding the human and organizational factors involved. The importance of viewing the viability of a solution from multiple perspectives has been largely ignored. Multiple perspectives arise because of inherent conflicts among the objectives a SOC has to meet and differences between the goals of the parties involved. During the 3.5 years that we have used anthropological fieldwork methods to study SOCs, we discovered that successful SOC innovations must resolve these conflicts to be effective in improving operational efficiency. This discovery was guided by Activity Theory (AT), which provided a framework for analyzing our fieldwork data. We use the version of AT proposed by Engeström to model SOC operations. Template analysis, a qualitative data analysis technique guided by AT, validated the existence of contradictions in SOCs. The same technique was used to elicit from the data concrete contradictions and how they were resolved. Our analysis provides evidence of the importance of conflict resolution as a prerequisite for operations improvement. AT enabled us to understand why some of our innovations worked in the SOCs we studied (and why others failed). AT helps us see a potentially successful and repeatable mechanism for introducing new technologies to future SOCs. Understanding and supporting all of the spoken and unspoken requirements of SOC analysts and managers appears to be the only way to get new technologies accepted and used in SOCs.

Available Media

Productive Security: A Scalable Methodology for Analysing Employee Security Behaviours

Adam Beautement, Ingolf Becker, Simon Parkin, Kat Krol, and Angela Sasse, University College London

Organisational security policies are often written without sufficiently taking into account the goals and capabilities of the employees who must follow them. Effective security management requires that security managers are able to assess the effectiveness of their policies, including their impact on employee behaviour. We present a methodology for gathering large-scale data sets on employee behaviour and attitudes via scenario-based surveys. The survey questions are grounded in rich data drawn from interviews, and probe perceptions of security measures and their impact. Here we study employees of a large multinational company, demonstrating that our approach is capable of determining important differences between various population groups. We also report that our work has been used to set policy within the partner organisation, illustrating the real-world impact of our research.

Available Media

Lightning Talks and Demos

Session Chair: Elizabeth Stobert, ETH Zürich

  • Situation Awareness as a Whole System Concept
    Robert Gutzwiller, Space and Naval Warfare System Center Pacific
  • Comics as a Medium for Privacy Notices
    Bart P. Knijnenburg, Clemson University
  • Developers Are Users Too: Helping Developers Write Privacy-Preserving and Secure (Android) Code
    Sascha Fahl, Saarland University
  • Easy, Secure Passwords: Sparse Two-Dimensional Authentication
    Sarah Helble, Johns Hopkins Applied Physics Laboratory
Available Media
11:40 am–1:10 pm Friday

SOUPS 2016 Symposium Luncheon

Colorado Ballroom A–D

1:10 pm–2:40 pm Friday

Panel

Hard Problems in Security, Privacy, and Usability: Issues from the Field

Moderator: Tim McKay, Kaiser Permanente Information Technology

Panelists: Kurt Andersen, LinkedIn; Dave Crocker, Brandenburg InternetWorking; Reed Gelzer, MD, Trustworthy EHR; Viki Maxwell, Kaiser Permanente

2:40 pm–3:10 pm Friday

SOUPS 2016 Ice Cream Social

Ballroom Foyer

3:10 pm–4:40 pm Friday

Privacy

Session Chair: Rick Wash, Michigan State University

Forgetting in Social Media: Understanding and Controlling Longitudinal Exposure of Socially Shared Data

Mainack Mondal and Johnnatan Messias, Max Planck Institute for Software Systems (MPI-SWS); Saptarshi Ghosh, Indian Institute of Engineering Science and Technology, Shibpur; Krishna P. Gummadi, Max Planck Institute for Software Systems (MPI-SWS); Aniket Kate, Purdue University

On most online social media sites today, user-generated data remains accessible to allowed viewers unless and until the data owner changes her privacy preferences. In this paper, we present a large-scale measurement study focused on understanding how users control the longitudinal exposure of their publicly shared data on social media sites. Our study, using data from Twitter, finds that a significant fraction of users withdraw a surprisingly large percentage of old publicly shared data—more than 28% of six-year-old public posts (tweets) on Twitter are not accessible today. The inaccessible tweets are either selectively deleted by users or withdrawn by users when they delete or make their accounts private. We also found a significant problem with the current exposure control mechanisms—even when a user deletes her tweets or her account, the current mechanisms leave traces of residual activity, i.e., tweets from other users sent as replies to those deleted tweets or accounts still remain accessible. We show that using this residual information one can recover significant information about the deleted tweets or even characteristics of the deleted accounts. To the best of our knowledge, we are the first to study the information leakage resulting from residual activities of deleted tweets and accounts. Finally, we propose an exposure control mechanism that eliminates information leakage via residual activities, while still allowing meaningful social interactions with user posts. We discuss its merits and drawbacks compared to existing mechanisms.
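To illustrate what residual activity looks like in practice, here is a small Python sketch that flags replies whose parent tweet has been withdrawn; the tweet dictionaries mimic, but do not use, the real Twitter API schema:

# Illustrative sketch: replies in a crawled corpus whose parent tweet no
# longer exists still leak context about the deleted post.
corpus = [
    {"id": 1, "text": "original post (later deleted)"},
    {"id": 2, "text": "@alice totally agree about the concert!", "in_reply_to": 1},
    {"id": 3, "text": "unrelated tweet"},
]

def residual_replies(tweets, deleted_ids):
    """Replies that still expose context of deleted parent tweets."""
    return [t for t in tweets if t.get("in_reply_to") in deleted_ids]

deleted = {1}  # parent withdrawn by its owner
for reply in residual_replies(corpus, deleted):
    print("leaks context of deleted tweet:", reply["text"])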

Available Media

Sharing Health Information on Facebook: Practices, Preferences, and Risk Perceptions of North American Users

Sadegh Torabi and Konstantin Beznosov, University of British Columbia

Motivated by the benefits, people have used a variety of web-based services to share health information (HI) online. Among these services, Facebook, which enjoys the largest population of active subscribers, has become a common place for sharing various types of HI. At the same time, Facebook was shown to be vulnerable to various attacks, resulting in unintended information disclosure, privacy invasion, and information misuse. As such, Facebook users face the dilemma of benefiting from HI sharing and risking their privacy.

In this work, we investigate HI sharing practices, preferences, and risk perceptions among Facebook users. We interviewed 21 participants with chronic health conditions to identify the key factors that influence users’ motivation to share HI on Facebook. We then conducted an online survey with 492 Facebook users in order to validate, refine, and extend our findings.

While some factors related to sharing HI have been identified in the literature, we provide a deeper understanding of the main factors that influenced users’ motivation to share HI on Facebook. The results suggest that the gained benefits from prior HI sharing experiences, and users' overall attitudes toward privacy, correlate with their motivation to disclose HI. Furthermore, we identify other factors, specifically users' perceived health and the audience of the shared HI, that appear to be linked with users' motivation to share HI. Finally, we suggest design improvements—such as anonymous identity as well as search and recommendation features—for facilitating HI sharing on Facebook and similar sites.

Available Media

How Short Is Too Short? Implications of Length and Framing on the Effectiveness of Privacy Notices

Joshua Gluck, Florian Schaub, Amy Friedman, Hana Habib, Norman Sadeh, Lorrie Faith Cranor, and Yuvraj Agarwal, Carnegie Mellon University

Privacy policies are often too long and difficult to understand, and are therefore ignored by users. Shorter privacy notices with clearer wording may increase users’ privacy awareness, particularly for emerging mobile and wearable devices with small screens. In this paper, we examine the potential of (1) shortening privacy notices, by removing privacy practices that a large majority of users are already aware of, and (2) highlighting the implications of described privacy practices with positive or negative framing. We conducted three online user studies focused on privacy notice design for fitness wearables. Our results indicate that short-form privacy notices can inform users about privacy practices. However, we found no effect from including positive or negative framing in our notices. Finally, we found that removing expected privacy practices from notices sometimes led to less awareness of those practices, without improving awareness of the practices that remained in the shorter notices. Given that shorter notices are typically expected to be more effective, we find the lack of increased awareness of the practices remaining in the notice surprising. Our results suggest that the length of an effective privacy notice may be bounded. We provide an analysis of factors influencing our participants’ awareness of privacy practices and discuss the implications of our findings on the design of privacy notices.

Available Media

Addressing Physical Safety, Security, and Privacy for People with Visual Impairments

Tousif Ahmed, Patrick Shaffer, Kay Connelly, David Crandall, and Apu Kapadia, Indiana University Bloomington

People with visual impairments face a variety of obstacles in their daily lives. Recent work has identified specific physical privacy concerns of this population and explored how emerging technology, such as wearable devices, could help. In this study we investigated their physical safety and security concerns and behaviors by conducting interviews (N=19) with participants who have visual impairments in the greater San Francisco metropolitan area. Our participants' detailed accounts shed light on (1) the safety and security concerns of people with visual impairments in urban environments (such as feared and real instances of assault); (2) their behaviors for protecting physical safety (such as avoidance and mitigation strategies); and (3) refined design considerations for future assistive wearable devices that could enhance their awareness of surrounding threats.

Available Media
4:40 pm Friday

Bon Voyage!