7:30 am–8:30 am, Thursday
Continental Breakfast
Ballroom Foyer
8:30 am–9:00 am, Thursday
Session Chair: Mary Ellen Zurko, Cisco Systems
Distinguished Poster Award
Distinguished Paper Award
IAPP SOUPS Privacy Award
John Karat Usable Privacy and Security Student Research Award, sponsored by Cisco
9:00 am–10:00 am, Thursday
Lorrie Faith Cranor, Carnegie Mellon University
Lorrie Faith Cranor joined the US Federal Trade Commission as Chief Technologist in January 2016. She is on leave from Carnegie Mellon University, where she is a Professor of Computer Science and of Engineering and Public Policy, Director of the CyLab Usable Privacy and Security Laboratory (CUPS), and Co-director of the MSIT-Privacy Engineering master's program. She also co-founded Wombat Security Technologies, an information security awareness training company. Cranor has authored over 150 research papers on online privacy and usable security, and has played a central role in establishing the usable privacy and security research community, including founding the Symposium on Usable Privacy and Security (SOUPS). Cranor holds a doctorate in Engineering and Policy from Washington University in St. Louis. She is a Fellow of the ACM and IEEE.
Read about Cranor's work at the Commission on the Tech@FTC blog, and follow her on Twitter: @TechFTC and @lorrietweet.
10:00 am–10:30 am, Thursday
Break with Refreshments
Ballroom Foyer
10:30 am–12:00 pm, Thursday
Session Chair: Serge Egelman, University of California, Berkeley, and International Computer Science Institute
Adrienne Porter Felt, Robert W. Reeder, Alex Ainslie, Helen Harris, and Max Walker, Google; Christopher Thompson, University of California, Berkeley; Mustafa Emre Acer, Elisabeth Morant, and Sunny Consolvo, Google
We propose a new set of browser security indicators, based on user research and an understanding of the design challenges faced by browsers. To motivate the need for new security indicators, we critique existing browser security indicators and survey 1,329 people about Google Chrome's indicators. We then evaluate forty icons and seven complementary strings by surveying thousands of respondents about their perceptions of the candidates. Ultimately, we select and propose three indicators. Our proposed indicators have been adopted by Google Chrome, and we hope to motivate others to update their security indicators as well.
Joel Weinberger and Adrienne Porter Felt, Google
When someone decides to ignore an HTTPS error warning, how long should the browser remember that decision? If they return to the website in five minutes, an hour, a day, or a week, should the browser show them the warning again or respect their previous decision? There is no clear industry consensus, with eight major browsers exhibiting four different HTTPS error exception storage policies.
Ideally, a browser would not ask someone about the same warning over and over again. If a user believes the warning is a false alarm, repeated warnings undermine the browser's trustworthiness without providing a security benefit. However, some people might change their mind, and we do not want one security mistake to become permanent.
We evaluated six storage policies with a large-scale, multi-month field experiment. We found substantial differences between the policies and that one of the storage policies achieved more of our goals than the rest. Google Chrome 45 adopted our proposal, and it has proved successful since deployment. Subsequently, we ran Mechanical Turk and Google Consumer Surveys to learn about user expectations for warnings. Respondents generally lacked knowledge about Chrome's new storage policy, but we remain satisfied with our proposal due to the behavioral benefits we have observed in the field.
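To make the design space concrete, here is a minimal sketch of one family of storage policies the abstract describes: remember a click-through decision for a fixed window, then warn again. The class, method names, and one-week constant are illustrative assumptions, not Chrome's actual implementation.

```python
import time

# Hypothetical sketch of a fixed-window HTTPS error exception store,
# keyed by host and certificate fingerprint. Not Chrome's code.
EXPIRY_SECONDS = 7 * 24 * 60 * 60  # illustrative one-week window

class ExceptionStore:
    def __init__(self):
        self._exceptions = {}  # (host, cert_fingerprint) -> time granted

    def remember(self, host, cert_fingerprint):
        """Record that the user chose to proceed past the warning."""
        self._exceptions[(host, cert_fingerprint)] = time.time()

    def should_warn(self, host, cert_fingerprint):
        """Warn again unless a fresh exception exists for this site and cert."""
        granted = self._exceptions.get((host, cert_fingerprint))
        if granted is None or time.time() - granted > EXPIRY_SECONDS:
            self._exceptions.pop((host, cert_fingerprint), None)  # drop stale entry
            return True
        return False
```

Varying the window (or keying on something coarser than the certificate, or never expiring entries) yields different points in the policy space the study compares.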
Bin Liu, Mads Schaarup Andersen, Florian Schaub, Hazim Almuhimedi, Shikun Zhang, Norman Sadeh, Alessandro Acquisti, and Yuvraj Agarwal, Carnegie Mellon University
IAPP SOUPS Privacy Award!
Modern smartphone platforms have millions of apps, many of which request permissions to access private data and resources, like user accounts or location. While these smartphone platforms provide varying degrees of control over these permissions, the sheer number of decisions that users are expected to manage has been shown to be unrealistically high. Prior research has shown that users are often unaware of, if not uncomfortable with, many of their permission settings. Prior work also suggests that it is theoretically possible to predict many of the privacy settings a user would want by asking the user a small number of questions. However, this approach has neither been operationalized nor evaluated with actual users before. We report on a field study (n=72) in which we implemented and evaluated a Personalized Privacy Assistant (PPA) with participants using their own Android devices. The results of our study are encouraging. We find that 78.7% of the recommendations made by the PPA were adopted by users. Following initial recommendations on permission settings, participants were motivated to further review and modify their settings with daily “privacy nudges.” Despite showing substantial engagement with these nudges, participants only changed 5.1% of the settings previously adopted based on the PPA’s recommendations. The PPA and its recommendations were perceived as useful and usable. We discuss the implications of our results for mobile permission management and the design of personalized privacy assistant solutions.
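As a rough illustration of the recommendation idea behind the PPA, the sketch below assigns a user to a privacy profile from a few comfort questions and derives permission recommendations from that profile. The profiles, questions, and threshold are invented for illustration; the actual system derives profiles from real user data.

```python
# Minimal sketch of profile-based permission recommendation.
# The two profiles and their settings are hypothetical.
PROFILES = {
    "protective": {("*", "location"): "deny", ("*", "contacts"): "deny"},
    "permissive": {("*", "location"): "allow", ("*", "contacts"): "allow"},
}

def assign_profile(answers):
    """Map a short comfort questionnaire to the closest profile (toy rule)."""
    denies = sum(1 for a in answers if a == "uncomfortable")
    return "protective" if denies >= len(answers) / 2 else "permissive"

def recommend(profile_name, app, permission):
    """Prefer an app-specific setting, fall back to the profile default."""
    profile = PROFILES[profile_name]
    return profile.get((app, permission)) or profile.get(("*", permission), "ask")

profile = assign_profile(["uncomfortable", "comfortable", "uncomfortable"])
print(recommend(profile, "com.example.app", "location"))  # -> "deny"
```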
Arunesh Mathur, Josefine Engel, Sonam Sobti, Victoria Chang, and Marshini Chetty, University of Maryland, College Park
Users often do not install security-related software updates, leaving their devices open to exploitation by attackers. We are beginning to understand what factors affect this software updating behavior, but the question of how to improve current software updating interfaces remains unanswered. In this paper, we begin tackling this question by studying software updating behaviors, designing alternative updating interfaces, and evaluating these designs. We describe a formative study of 30 users' software updating practices, the low-fidelity prototype we developed to address the issues identified in the formative work, and the evaluation of our prototype with 22 users. Our findings suggest that updates interrupt users, that users lack sufficient information to decide whether or not to update, and that users vary in how they want to be notified about and provide consent for updates. Based on our study, we make four recommendations to improve desktop updating interfaces and outline socio-technical considerations around software updating that will ultimately affect end-user security.
12:00 pm–1:30 pm, Thursday
Lunch (on your own)
1:30 pm–3:00 pm, Thursday
Session Chair: Heather Patterson, Intel Labs and NYU Information Law Institute
Michael Fagan and Mohammad Maifi Hasan Khan, University of Connecticut
Usable security researchers have long been interested in what users do to keep their devices and data safe and how that compares to recommendations. Additionally, experts have long debated and studied the psychological underpinnings and motivations for users to do what they do, especially when such behavior is seen as risky, at least by experts. This study investigates user motivations through a survey conducted on Mechanical Turk, which resulted in responses from 290 participants. We use a rational decision model to guide our design, as well as current thought on human motivation in general and in the realm of computer security. Through quantitative and qualitative analysis, we identify key gaps in perception between those who follow common security advice (i.e., update software, use a password manager, use 2FA, change passwords) and those who do not, and we help explain participants' motivations behind their decisions. Additionally, we find that social considerations are trumped by individualized rationales.
Ashwini Rao, Florian Schaub, Norman Sadeh, and Alessandro Acquisti, Carnegie Mellon University; Ruogu Kang, Facebook
Online privacy policies are the primary mechanism for informing users about the data practices of online services. In practice, users ignore privacy policies because policies are long and complex to read. Since users do not read privacy policies, their expectations regarding the data practices of online services may not match a service's actual data practices. Mismatches may result in users exposing themselves to unanticipated privacy risks, such as unknowingly sharing personal information with online services. One approach to mitigating privacy risks is to provide simplified privacy notices, in addition to privacy policies, that highlight unexpected data practices. However, identifying mismatches between user expectations and services' practices is challenging. We propose and validate a practical approach for studying Web users' privacy expectations and identifying mismatches with practices stated in privacy policies. We conducted a user study with 240 participants and 16 websites, and identified mismatches in collection, sharing, and deletion data practices. We discuss the implications of our results for the design of usable privacy notices, for service providers, and for public policy.
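The core operationalization, comparing what users expect a site to do against what its policy states, can be pictured with a toy example. The practice labels and values below are invented for illustration, not the study's instrument.

```python
# Toy illustration of mismatch identification between a user's expected
# data practices and the practices stated in a privacy policy.
EXPECTED = {"collect:location": False, "share:advertisers": False, "delete:on_request": True}
STATED = {"collect:location": True, "share:advertisers": False, "delete:on_request": False}

# Practices where expectation and policy disagree are candidates for
# highlighting in a simplified privacy notice.
mismatches = {p for p in EXPECTED if EXPECTED[p] != STATED.get(p)}
print(sorted(mismatches))  # -> ['collect:location', 'delete:on_request']
```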
Alain Forget, Sarah Pearman, Jeremy Thomas, Alessandro Acquisti, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University; Serge Egelman and Marian Harbach, International Computer Science Institute; Rahul Telang, Carnegie Mellon University
Computer security problems often occur when there are disconnects between users' understanding of their role in computer security and what is expected of them. To help users make good security decisions more easily, we need insights into the challenges they face in their daily computer usage. We built and deployed the Security Behavior Observatory (SBO) to collect data on user behavior and machine configurations from participants' home computers. Combining SBO data with user interviews, this paper presents a qualitative study comparing users' attitudes, behaviors, and understanding of computer security to the actual states of their computers. Qualitative inductive thematic analysis of the interviews produced "engagement" as the overarching theme, whereby participants with greater engagement in computer security and maintenance did not necessarily have more secure computer states. Thus, user engagement alone may not be predictive of computer security. We identify several other themes that inform future directions for better design and research into security interventions. Our findings emphasize the need for better understanding of how users' computers get infected, so that we can more effectively design user-centered mitigations.
Session Chair: Elizabeth Stobert, ETH Zürich
- Scalable Consent and How to Instrument It for Research
Ken Klingenstein, Internet2
- Do System-Generated Notifications and Security Warnings Blur Together? Insights from the Neurobiological Phenomenon of Generalization
Anthony Vance and Bonnie Brinton Anderson, Brigham Young University
- (Demo) PrivySeal: A Smart Privacy Assistant for Installing Cloud Apps
Hamza Harkous, École Polytechnique Fédérale de Lausanne (EPFL)
3:00 pm–3:30 pm, Thursday
Break with Refreshments
Ballroom Foyer
3:30 pm–5:20 pm, Thursday
Session Chair: Heather Richter Lipford, University of North Carolina at Charlotte
Wei Bai, Doowon Kim, Moses Namara, and Yichen Qian, University of Maryland, College Park; Patrick Gage Kelley, University of New Mexico; Michelle L. Mazurek, University of Maryland, College Park
Many critical communications now take place digitally, but recent revelations demonstrate that these communications can often be intercepted. To achieve true message privacy, users need end-to-end message encryption, in which the communications service provider is not able to decrypt the content. Historically, end-to-end encryption has proven extremely difficult for people to use correctly, but recently tools like Apple’s iMessage and Google’s End-to-End have made it more broadly accessible by using key-directory services. These tools (and others like them) sacrifice some security properties for convenience, which alarms some security experts, but little is known about how average users evaluate these tradeoffs. In a 52-person interview study, we asked participants to complete encryption tasks using both a traditional key-exchange model and a key-directory-based registration model. We also described the security properties of each (varying the order of presentation) and asked participants for their opinions. We found that participants understood the two models well and made coherent assessments about when different tradeoffs might be appropriate. Our participants recognized that the less-convenient exchange model was more secure overall, but found the security of the registration model to be “good enough” for many everyday purposes.
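A minimal sketch of the two models the study compares, written to highlight the trust tradeoff rather than any real protocol; all names and values are hypothetical.

```python
# Registration model: a key directory run by the provider returns the
# recipient's public key. Convenient, but the sender must trust the
# directory not to substitute a key it controls.
DIRECTORY = {}  # username -> public key, maintained by the provider

def register(username, public_key):
    DIRECTORY[username] = public_key

def lookup(username):
    # No out-of-band verification step for the sender.
    return DIRECTORY[username]

# Exchange model: both parties compare key fingerprints over an
# out-of-band channel before trusting the key. More secure, less convenient.
def exchange_model_verify(local_fingerprint, remote_fingerprint):
    return local_fingerprint == remote_fingerprint

register("alice", "PUBKEY_A")  # provider-side registration
print(lookup("alice"))         # sender fetches Alice's key with no manual step
```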
Scott Ruoti and Mark O'Neill, Brigham Young University and Sandia National Laboratories; Daniel Zappala and Kent Seamons, Brigham Young University
This paper reports the results of a survey of 1,976 individuals regarding their opinions on TLS inspection, a controversial technique that can be used for both benevolent and malicious purposes. Responses indicate that participants hold nuanced opinions on security and privacy trade-offs, with most recognizing legitimate uses for the practice, but also concerned about threats from hackers or government surveillance. There is strong support for notification and consent when a system is intercepting their encrypted traffic, although this support varies depending on the situation. A significant concern about malicious uses of TLS inspection is identity theft, and many would react negatively and some would change their behavior if they discovered inspection occurring without their knowledge. We also find that a small but significant number of participants are jaded by the current state of affairs and have lost any expectation of privacy.
Diogo Marques, Universidade de Lisboa; Ildar Muslukhov, University of British Columbia; Tiago Guerreiro, Universidade de Lisboa; Konstantin Beznosov, University of British Columbia; Luís Carriço, Universidade de Lisboa
Distinguished Paper Award!
Personal mobile devices keep private information which people other than the owner may try to access. Thus far, it has been unclear how common it is for people to snoop on one another’s devices. Through an anonymity-preserving survey experiment, we quantify the pervasiveness of snooping attacks, defined as "looking through someone else’s phone without their permission." We estimated the 1-year prevalence to be 31% in an online participant pool. Weighted to the U.S. population, the data indicates that 1 in 5 adults snooped on at least one other person’s phone, just in the year before the survey was conducted. We found snooping attacks to be especially prevalent among young people, and among those who are themselves smartphone users. In a follow-up study, we found that, among smartphone users, depth of adoption, like age, also predicts the probability of engaging in snooping attacks. In particular, the more people use their devices for personal purposes, the more likely they are to snoop on others, possibly because they become aware of the sensitive information that is kept, and how to access it. These findings suggest that, all else remaining equal, the prevalence of snooping attacks may grow, as more people adopt smartphones, and motivate further effort into improving defenses.
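An anonymity-preserving survey of this kind can be analyzed like a list experiment: respondents report only a count of applicable items, never admitting the sensitive behavior directly, and prevalence falls out of a difference in means. Whether this matches the paper's exact instrument is an assumption; the sketch and numbers below are purely illustrative.

```python
# Sketch of list-experiment prevalence estimation. The control group
# counts how many of N innocuous items apply to them; the treatment
# group gets the same list plus the sensitive item (snooping). The
# difference in mean counts estimates the sensitive item's prevalence.
def estimated_prevalence(control_counts, treatment_counts):
    """Difference-in-means estimator for the sensitive item."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

# Illustrative inputs (not the paper's data): control saw 4 innocuous
# items; treatment saw those 4 plus the snooping item.
control = [2, 1, 3, 2, 2]
treatment = [2, 2, 3, 2, 2]
print(f"Estimated prevalence: {estimated_prevalence(control, treatment):.0%}")
```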
Alexander De Luca, Google; Sauvik Das, Carnegie Mellon University; Martin Ortlieb, Iulia Ion, and Ben Laurie, Google
In this paper, we present results from an online survey with 1,510 participants and an interview study with 31 participants on (secure) mobile instant messaging. Our goal was to uncover how much of a role security and privacy played in people's decisions to use a mobile instant messenger. In the interview study, we recruited a balanced sample of IT security experts and non-experts, as well as an equal split of users of mobile instant messengers that are advertised as being more secure and/or private (e.g., Threema) than traditional mobile IMs. Our results suggest that peer influence is what primarily drives people to use a particular mobile IM, even for secure/private IMs, and that security and privacy play minor roles.
Jeffrey Warshaw, University of California, Santa Cruz; Nina Taft and Allison Woodruff, Google, Inc.
Analytic systems increasingly allow companies to draw inferences about users’ characteristics, yet users may not fully understand these systems due to their complex and often unintuitive nature. In this paper, we investigate inference literacy: the beliefs and misconceptions people have about how companies collect and make inferences from their data. We interviewed 21 non-student participants with a high school education, finding that few believed companies can make the type of deeply personal inferences that companies now routinely make through machine learning. Instead, most participants’ inference literacy beliefs clustered around one of two main concepts: one cluster believed companies make inferences about a person based largely on a priori stereotyping, using directly gathered demographic data; the other cluster believed that companies make inferences based on computer processing of online behavioral data, but often expected these inferences to be limited to straightforward intuitions. We also find evidence that cultural models related to income and ethnicity influence the assumptions that users make about their own role in the data economy. We share implications for research, design, and policy on tech savviness, digital inequality, and potential inference literacy interventions.
5:20 pm–5:45 pm, Thursday
Session Chair: Elizabeth Stobert, ETH Zürich
- Making Two-Secret Key Derivation Work for People (Demo)
Jeffrey Goldberg, AgileBits, Inc.
6:00 pm–7:30 pm, Thursday
SOUPS 2016 Symposium Reception
Outdoor Plaza
Mingle in the Outdoor Plaza at the Symposium Reception. Enjoy dinner, drinks, and the chance to connect with fellow attendees, speakers, and conference organizers.