Thursday, July 10
Session chair: Heather Lipford, University of North Carolina at Charlotte
Allison Woodruff, Vasyl Pihur, Sunny Consolvo, and Lauren Schmidt, Google; Laura Brandimarte and Alessandro Acquisti, Carnegie Mellon University

Westin’s Privacy Segmentation Index has been widely used to measure privacy attitudes and categorize individuals into three privacy groups: fundamentalists, pragmatists, and unconcerned. Previous research has failed to establish a robust correlation between the Westin categories and actual or intended behaviors. Unexplored, however, is the connection between the Westin categories and individuals’ responses to the consequences of privacy behaviors. We use a survey of 884 Amazon Mechanical Turk participants to investigate the relationship between the Westin Privacy Segmentation Index and attitudes and behavioral intentions for both privacy-sensitive scenarios and privacy-sensitive consequences. Our results indicate a lack of correlation between the Westin categories and behavioral intent, as well as between the Westin categories and responses to consequences. We discuss potential implications of this attitude-consequence gap.
Lorrie Faith Cranor, Adam L. Durity, Abigail Marsh, and Blase Ur, Carnegie Mellon University

The life of a teenager today is far different than in past decades. Through semi-structured interviews with 10 teenagers and 10 parents of teenagers, we investigate parent-teen privacy decision making in these uncharted waters. Parents and teens generally agreed that teens had a need for some degree of privacy from their parents and that respecting teens’ privacy demonstrated trust and fostered independence. We explored the boundaries of teen privacy in both the physical and digital worlds. While parents commonly felt none of their children’s possessions should ethically be exempt from parental monitoring, teens felt strongly that cell phones, particularly text messages, were private. Parents discussed struggling to keep up with new technologies and to understand teens’ technology-mediated socializing. While most parents said they thought similarly about privacy in the physical and digital worlds, half of teens said they thought about these concepts differently. We present cases where parents made privacy decisions using false analogies with the physical world or outdated assumptions. We also highlight directions for more usable digital parenting tools.
Ruogu Kang, Carnegie Mellon University; Stephanie Brown, Carnegie Mellon University and American University; Laura Dabbish and Sara Kiesler, Carnegie Mellon University

Amazon Mechanical Turk (MTurk) is a crowdsourcing platform widely used to conduct behavioral research, including studies of online privacy and security. We studied how well the privacy attitudes of MTurk workers mirror the privacy attitudes of the larger user population. We report results from an MTurk survey of attitudes about managing one’s personal information online and policy preferences about anonymity. We compare these attitudes with those of a representative U.S. adult sample drawn from a separate survey a few months earlier. MTurk respondents were younger, better educated, and more likely to use social media than the representative U.S. adult sample. Although they reported a similar amount of personal information online, U.S. MTurk workers put a higher value on anonymity and hiding information, were more likely to do so, and had more privacy concerns than the larger U.S. public. Indian MTurk workers were much less concerned than American workers about their privacy and more tolerant of government monitoring. Our analyses show that these findings hold even when controlling for age, education, gender, and social media use. Our findings suggest that privacy studies using MTurk need to account for differences between MTurk samples and the general population.
Emilee Rader, Michigan State University

Internet companies record data about users as they surf the web, such as the links they have clicked on, search terms they have used, and how often they read all the way to the end of an online news article. This evidence of past behavior is aggregated both across websites and across individuals, allowing algorithms to make inferences about users’ habits and personal characteristics. Do users recognize when their behaviors provide information that may be used in this way, and is this knowledge associated with concern about unwanted access to information about themselves that they would prefer not to reveal? In this online experiment, the majority of a sample of web-savvy users was aware that Internet companies like Facebook and Google can collect data about their actions on these websites, such as what links they click on. However, this awareness was associated with lower likelihood of concern about unwanted access. Awareness of the potential consequences of data aggregation, such as Facebook or Google knowing what other websites one visits or one’s political party affiliation, was associated with greater likelihood of reporting concern about unwanted access. This suggests that greater transparency about inferences enabled by data aggregation might help users associate seemingly innocuous actions like clicking on a link with what these actions say about them.
Session chair: Mary Ellen Zurko, Cisco Systems
Stefan Korff and Rainer Böhme, Westfälische Wilhelms-Universität Münster

Choice proliferation, a research stream in psychology, studies adverse effects on human decision-making as the number of options to choose from increases. We test if these effects can be elicited in a privacy context. Decision field theory suggests two factors that potentially affect end users’ reflection on disclosure decisions: (1) choice amount, which we test by changing the number of checkboxes in a privacy settings dialog; and (2) choice structure, tested by varying the sensitivity of personal data items which are jointly controlled by each checkbox. We test both factors in a quantitative 2 × 2 between-subject experiment with stimuli calibrated in a pre-study with 60 respondents. In the main experiment, 112 German-speaking university students were asked to enter personal data into an ostensible business networking website and decide if and with whom it should be shared. Using an established item battery, we find that participants who are confronted with a larger number of privacy options subsequently report more negative feelings, experience more regret, and are less satisfied with the choices made. We observe a similar tendency, albeit weaker and statistically insignificant in our small sample, for the complexity of the choice structure if the number of options remains constant.
Rick Wash, Emilee Rader, Kami Vaniea, and Michelle Rizor, Michigan State University

When security updates are not installed, or installed slowly, end users are at an increased risk for harm. To improve security, software designers have endeavored to remove the user from the software update loop. However, user involvement in software updates remains necessary; not all updates are wanted, and required reboots can negatively impact users. We used a multi-method approach to collect interview, survey, and computer log data from 37 Windows 7 users. We compared what the users think is happening on their computers (interview and survey data), what users want to happen on their computer (interview and survey data), and what was actually going on (log data). We found that 28 out of our 37 participants had a misunderstanding about what was happening on their computer, and that over half of the participants could not execute their intentions for computer management.
Cristian Bravo-Lillo, Lorrie Cranor, and Saranga Komanduri, Carnegie Mellon University; Stuart Schechter, Microsoft Research; Manya Sleeper, Carnegie Mellon University

At SOUPS 2013, Bravo-Lillo et al. presented an artificial experiment in which they habituated participants to the contents of a pop-up dialog by asking them to respond to it repeatedly, and then measured participants’ ability to notice when a text field within the dialog changed. The experimental treatments included various attractors: interface elements designed to draw or force users’ attention to a text field within the dialog. In all treatments, researchers exposed participants to a large number of repetitions of the dialog before introducing the change that participants were supposed to notice. As a result, Bravo-Lillo et al. could not measure how habituation affects attention, or measure the ability of attractors to counter these effects; they could only compare the performance of attractors under high levels of habituation. We replicate and improve upon Bravo-Lillo et al.’s experiment, adding the low-habituation conditions essential to measure reductions in attention that result from increasing habituation. In the absence of attractors, increasing habituation caused a three-fold decrease in the proportion of participants who responded to the change in the dialog. As with the prior study, a greater proportion of participants responded to the change in the dialog in treatments using attractors that delayed participants’ ability to dismiss the dialog. We found that, as in the control condition, increasing habituation reduced the proportion of participants who noticed the change with some attractors. However, for the two attractors that forced the user to interact with the text field containing the change, increasing the level of habituation did not decrease the proportion of participants who responded to the change. These attractors appeared resilient to habituation.
Hazim Almuhimedi, Carnegie Mellon University; Adrienne Porter Felt, Robert W. Reeder, and Sunny Consolvo, Google, Inc.

Several web browsers, including Google Chrome and Mozilla Firefox, use malware warnings to stop people from visiting infectious websites. However, users can choose to click through (i.e., ignore) these malware warnings. In Google Chrome, users click through a fifth of malware warnings on average. We investigate factors that may contribute to why people ignore such warnings. First, we examine field data to see how browsing history affects click-through rates. We find that users consistently heed warnings about websites that they have not visited before. However, users respond unpredictably to warnings about websites that they have previously visited. On some days, users ignore more than half of warnings about websites they’ve visited in the past. Next, we present results of an online, survey-based experiment that we ran to gain more insight into the effects of reputation on warning adherence. Participants said that they trusted high-reputation websites more than the warnings; however, their responses suggest that a notable minority of people could be swayed by providing more information. We provide recommendations for warning designers and pose open questions about the design of malware warnings.
Session chair: Sunny Consolvo, Google
Jay Chen, Michael Paik, and Kelly McCabe, New York University Abu Dhabi

Security is predicated, in part, upon a clear understanding of threats and the use of strategies to mitigate these threats. Internet landscapes and the use of the Internet in developing countries are vastly different from those in rich countries, where technology is more pervasive. In this work, we explore the use of Internet technology throughout urban and peri-urban Ghana and examine attitudes toward security to gauge the extent to which this new population of technology users may be vulnerable to attacks. We find that, as in North America and Europe, the prevalent mental threat model indicates a lack of understanding of how Internet technologies operate. As a result, people rely heavily upon passwords for security online, and those who augment their security do so with a variety of ad hoc practices learned by word of mouth. We relate and contrast our findings to previous works and make several recommendations for improving security in these contexts.
Sauvik Das, Tiffany Hyun-Jin Kim, Laura A. Dabbish, and Jason I. Hong, Carnegie Mellon University

Despite an impressive effort at raising the general populace’s security sensitivity—the awareness of, motivation to use, and knowledge of how to use security and privacy tools—much security advice is ignored and many security tools remain underutilized. Part of the problem may be that we do not yet understand the social processes underlying people’s decisions to (1) disseminate information about security and privacy and (2) actually modify their security behaviors (e.g., adopt a new security tool or practice). To that end, we report on a retrospective interview study examining the role of social influence—or, our ability to affect the behaviors and perceptions of others with our own words and actions—in people’s decisions to change their security behaviors, as well as the nature of and reasons for their discussions about security. We found that social processes played a major role in a large number of privacy and security-related behavior changes reported by our sample, probably because these processes were effective at raising security sensitivity. We also found that conversations about security were most often driven by the desire to warn or protect others from immediate novel threats observed or experienced, or to gather information about solving an experienced problem. Furthermore, the observability of security feature usage was a key enabler of socially triggered behavior change—both in encouraging the spread of positive behaviors and in discouraging negative behaviors.
Bo Zhang and Na Wang, Pennsylvania State University and Samsung Research America; Hongxia Jin, Samsung Research America

Recommender systems (e.g., Amazon.com) provide users with tailored products and services, which have the potential to induce user privacy concerns. Although system designers have been actively developing algorithms to introduce user control mechanisms, it remains unclear whether such control is effective in alleviating privacy concerns. It is also unclear how data type affects this relationship. To determine the psychological mechanisms of user privacy concerns in a recommender system, we conducted a scenario-based online experiment (N = 385). Users’ privacy concerns were measured in relation to different data input (explicit vs. implicit) and control (present vs. absent) scenarios. Results show that a control mechanism can effectively reduce users’ concerns over implicit user data input (i.e., purchase history) but not over explicit user data input (i.e., product ratings). We also demonstrate that control can influence privacy concerns via users’ perceived value of disclosure. These findings question the effectiveness of user control mechanisms in recommender systems with explicit data input. Additionally, our item categorization provides a reference for future personalized recommendations and future analyses.
Heather Rosoff, Jinshu Cui, and Richard John, University of Southern California

We conducted two scenario-simulation behavioral experiments to explore how individual users’ responses to common cyber-based financial fraud and identity theft attacks depend on systematically manipulated variables related to characteristics of the attack and the attacker. Experiment I employed a 4 by 2 between-groups factorial design, manipulating attacker characteristics (individual with picture vs. individual vs. group vs. unknown) and attack mode (acquiring a bank database vs. obtaining personal bank account information) in response to a bank letter scenario notifying respondents of a data breach. Respondents’ positive and negative affect, perceived risk, behavioral intention, and attitude towards the government’s role in cyber security were measured. Results suggest that respondents experienced greater negative affect when the attacker was an individual, and more positive affect when the attack target was an individual bank account. In addition, a picture of an individual attacker increased intended behavioral changes and expectations of the bank to manage the response in the bank database attacks only. Experiment II utilized a 4 by 3 between-groups factorial design, manipulating attacker motivation (fame vs. money vs. terrorism vs. unknown) and attack resolution status (resolved vs. still at risk vs. unknown) in response to an identity theft scenario that evolves over four time points. In this experiment, respondents’ affect, perceived risk, and intended short- and long-term behavior were measured at each time point. Results suggest that respondents reported less perceived risk when the attacker’s motivation was to fund terrorism. Respondents also reported lower negative affect and lower perceived risk when the identity theft case was reported as resolved. Respondents were also more willing to pursue long-term behavior changes when the attack outcome was still at risk or unknown. In both experiments, respondents’ sex and age were related to affect, risk perception, and behavioral intentions. The paper also discusses how a further understanding of individual user decision making can inform policy makers’ design and implementation of cyber security policies related to credit fraud and identity theft.
Friday, July 11
Session chair: Joseph Bonneau, Princeton University
Hui Xu, The Chinese University of Hong Kong; Yangfan Zhou, The Chinese University of Hong Kong and MoE Key Laboratory of High Confidence Software Technologies; Michael R. Lyu, The Chinese University of Hong Kong

Current smartphones generally cannot continuously authenticate users during runtime. This poses severe security and privacy threats: a malicious user can manipulate the phone after bypassing the screen lock. To solve this problem, our work adopts a continuous and passive authentication mechanism based on a user’s touch operations on the touchscreen. Such a mechanism is suitable for smartphones, as it requires no extra hardware or intrusive user interface. We study how to model multiple types of touch data and perform continuous authentication accordingly. As a first attempt, we also investigate the fundamentals of touch operations as biometrics by justifying their distinctiveness and permanence. A one-month experiment is conducted involving over 30 users. Our experiment results verify that touch biometrics can serve as a promising method for continuous and passive authentication.
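The paper models users' real touch operations; as a rough illustration of the general approach rather than the authors' implementation, the sketch below trains a one-class classifier on a few simple per-stroke features for the device owner and then scores a sliding window of recent strokes. The feature set, synthetic data, and acceptance threshold are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's implementation): continuous, passive
# authentication from touchscreen strokes using a one-class classifier trained
# on the device owner's strokes. Features, data, and threshold are assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def synth_stroke(speed, pressure, n=30):
    """Generate a synthetic swipe as (x, y, pressure, t) samples, for demonstration."""
    t = np.linspace(0.0, n / 100.0, n)
    x = np.cumsum(rng.normal(speed, 0.5, n))
    y = np.cumsum(rng.normal(speed / 2, 0.5, n))
    p = rng.normal(pressure, 0.05, n)
    return np.column_stack([x, y, p, t])

def features(stroke):
    """Per-stroke features: duration, path length, mean speed, pressure stats, net displacement."""
    x, y, p, t = stroke.T
    dur = t[-1] - t[0]
    length = np.hypot(np.diff(x), np.diff(y)).sum()
    return [dur, length, length / dur, p.mean(), p.std(), x[-1] - x[0], y[-1] - y[0]]

# Enrollment: train only on the legitimate owner's strokes.
owner = np.array([features(synth_stroke(3.0, 0.6)) for _ in range(200)])
scaler = StandardScaler().fit(owner)
model = OneClassSVM(nu=0.05, gamma="scale").fit(scaler.transform(owner))

def accept(strokes, threshold=0.0):
    """Average anomaly score over a sliding window of recent strokes."""
    X = scaler.transform(np.array([features(s) for s in strokes]))
    return model.decision_function(X).mean() >= threshold

print(accept([synth_stroke(3.0, 0.6) for _ in range(10)]))  # owner-like window
print(accept([synth_stroke(6.0, 0.9) for _ in range(10)]))  # impostor-like window
```

In a deployed system, the window size and threshold would be tuned to trade off how quickly an impostor is flagged against how often the legitimate owner is falsely rejected.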
Jialiu Lin, Bin Liu, Norman Sadeh, and Jason I. Hong, Carnegie Mellon University

In this paper, we investigate the feasibility of identifying a small set of privacy profiles as a way of helping users manage their mobile app privacy preferences. Our analysis does not limit itself to looking at permissions people feel comfortable granting to an app. Instead, it relies on static code analysis to determine the purpose for which an app requests each of its permissions, distinguishing for instance between apps relying on particular permissions to deliver their core functionality and apps requesting these permissions to share information with advertising networks or social networks. Using privacy preferences that reflect people’s comfort with the purpose for which different apps request their permissions, we use clustering techniques to identify privacy profiles. A major contribution of this work is to show that, while people’s mobile app privacy preferences are diverse, it is possible to identify a small number of privacy profiles that collectively do a good job at capturing these diverse preferences.
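As a rough illustration of the profile-identification step only (the authors' pipeline additionally infers the purpose behind each permission request via static code analysis), the sketch below clusters synthetic comfort ratings over (permission, purpose) pairs and selects the number of profiles by silhouette score. All data and parameter choices here are assumptions for illustration, not the paper's method or results.

```python
# Minimal sketch: derive a small number of privacy "profiles" by clustering
# users' comfort ratings over (permission, purpose) pairs. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)

# Each row is one user; each column is a comfort rating in [-2, 2] for granting
# a (permission, purpose) pair, e.g. ("location", "advertising").
n_users, n_pairs = 500, 12
archetypes = np.array([rng.uniform(-2, 2, n_pairs) for _ in range(4)])  # hidden "true" profiles
ratings = np.vstack([archetypes[rng.integers(4)] + rng.normal(0, 0.4, n_pairs)
                     for _ in range(n_users)])

# Pick the number of profiles k by silhouette score, then report cluster centers
# as candidate default-settings profiles.
best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ratings)
    score = silhouette_score(ratings, labels)
    if score > best_score:
        best_k, best_score = k, score

profiles = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(ratings)
print(f"{best_k} profiles capture the population (silhouette={best_score:.2f})")
print(np.round(profiles.cluster_centers_, 1))  # each row: one profile's default comfort levels
```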
Marian Harbach, Leibniz University Hannover; Emanuel von Zezschwitz, Andreas Fichtner, and Alexander De Luca, University of Munich (LMU); Matthew Smith, Rheinische Friedrich-Wilhelms-Universität

A lot of research is being conducted into improving the usability and security of phone unlocking. There is, however, a severe lack of scientific data on users’ current unlocking behavior and perceptions. We performed an online survey (n = 260) and a one-month field study (n = 52) to gain insights into real-world (un)locking behavior of smartphone users. One of the main goals was to find out how much overhead unlocking and authenticating adds to overall phone usage and in how many unlock interactions security (i.e., authentication) was perceived as necessary. We also investigated why users do or do not use a lock screen and how they cope with smartphone-related risks, such as shoulder surfing or unwanted access. Among other results, we found that on average, participants spent around 2.9% of their smartphone interaction time on authenticating (9% in the worst case). Participants who used a secure lock screen, such as a PIN or Android unlock pattern, considered it unnecessary in 24.1% of situations. Shoulder surfing was perceived to be a relevant risk in only 11 of 3410 sampled situations.
Session chair: Sonia Chiasson, Carleton University
S M Taiabul Haque, Shannon Scielzo, and Matthew Wright, The University of Texas at Arlington

As mobile devices become increasingly common for accessing services online, the security of these services in turn depends more on password entry on these devices. Unfortunately, users are not comfortable with existing textual password entry mechanisms on mobile phone handsets. In this study, we investigate this issue of user comfort from the viewpoint of psychometrics. By applying standard techniques of psychometrics, we develop a questionnaire (known as a scale in psychometrics) that measures the comfort of constructing a strong password when using a particular interface. We establish the essential psychometric properties (reliability and validity) of this scale and demonstrate how the scale can be used to profile password construction interfaces of popular smartphone handsets. We also theoretically conceptualize user comfort across different dimensions and use confirmatory factor analysis to verify our theory. Finally, we highlight several issues related to scale development and discuss how psychometric approaches may be useful in general for measuring various subjective concepts that are related to usable security.
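For readers unfamiliar with the psychometric workflow the paper draws on, the sketch below computes one standard reliability statistic, Cronbach's alpha, over synthetic Likert-style responses. It is meant only to illustrate the kind of reliability check involved; it is not the authors' instrument, data, or full analysis (which also includes confirmatory factor analysis).

```python
# A minimal sketch of one standard reliability check used in psychometrics:
# Cronbach's alpha over a scale's items (responses below are synthetic).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(0, 1, (200, 1))  # respondents' underlying comfort level
responses = np.clip(np.rint(3 + latent + rng.normal(0, 0.7, (200, 8))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # values around 0.7 or above are usually read as acceptable
```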
Elizabeth Stobert and Robert Biddle, Carleton University

Users need to keep track of many accounts and passwords. We conducted a series of interviews to investigate how users cope with these demanding tasks, and used Grounded Theory to analyze the interview results. We found that most users cope by reusing passwords and writing them down, but with a rich variety of behaviour and diverse personalized strategies. These approaches seem to disregard security advice, but at a detailed level they involve perceptive behaviour and careful self-management of user resources. We identify a password life cycle that follows users’ password behaviour and how it develops over time as users adapt to changing circumstances and demands. Users’ strategies have their limitations, but we suggest they indicate a rational response to the requirements of password authentication. We suggest that instead of simply advising against such behaviour, new approaches could be designed that harness existing user behaviour while limiting negative consequences.
Saurabh Panjwani, Independent Consultant; Achintya Prakash, University of Michigan

We introduce a new approach for attacking and analyzing biometric-based authentication systems, which involves crowdsourcing the search for potential impostors to the system. Our focus is on voice-based authentication, or speaker verification (SV), and we propose a generic method to use crowdsourcing for identifying candidate “mimics” for speakers in a given target population. We then conduct a preliminary analysis of this method with respect to a well-known text-independent SV scheme (the GMM-UBM scheme) using Mechanical Turk as the crowdsourcing platform.
Our analysis shows that the new attack method can identify mimics for target speakers with high impersonation success rates: from a pool of 176 candidates, we identified six with an overall false acceptance rate of 44%, which is higher than what has been reported for professional mimics in prior voice-mimicry experiments. This demonstrates that naïve, untrained users have the potential to carry out impersonation attacks against voice-based systems, although good imitators are rare to find. (We also implement our method with a crowd of amateur mimicry artists and obtain similar results for them.) Match scores for our best mimics were found to be lower than those for automated attacks but, given the relative difficulty of detecting mimicry attacks vis-à-vis automated ones, our method presents a potent threat to real systems. We discuss implications of our results for the security analysis of SV systems (and of biometric systems in general) and highlight benefits and challenges associated with the use of crowdsourcing in such analyses.
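For context on the targeted scheme, the sketch below shows a heavily simplified version of GMM-UBM verification: a background model trained on pooled speech, a model for the target speaker, and an accept decision based on the average per-frame log-likelihood ratio. Real systems MAP-adapt the UBM to the enrolled speaker and operate on acoustic features such as MFCCs; the synthetic features and zero threshold here are illustrative assumptions. A mimic succeeds when their speech pushes this ratio above the verification threshold, which is what the crowdsourced search tries to find.

```python
# Simplified sketch of GMM-UBM speaker verification (real systems MAP-adapt the
# UBM per speaker; omitted here). Feature frames are synthetic stand-ins for
# the acoustic features, e.g. MFCCs, that a real system would extract.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
DIM = 13  # e.g., 13 MFCC coefficients per frame

def frames(center, n=400):
    """Synthetic feature frames for one utterance, clustered around a speaker 'voice'."""
    return center + rng.normal(0, 1.0, (n, DIM))

# Universal background model: trained on frames pooled from many speakers.
background = np.vstack([frames(rng.normal(0, 3, DIM)) for _ in range(20)])
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(background)

# Target speaker model: trained (here, simply fit) on the target's enrollment speech.
target_voice = rng.normal(0, 3, DIM)
spk = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(frames(target_voice, 1200))

def llr(utterance):
    """Average per-frame log-likelihood ratio: speaker model vs. background model."""
    return spk.score(utterance) - ubm.score(utterance)

THRESHOLD = 0.0  # chosen to trade off false accepts against false rejects
print("genuine:", llr(frames(target_voice)) > THRESHOLD)
print("random impostor:", llr(frames(rng.normal(0, 3, DIM))) > THRESHOLD)
```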
Session chair: Yang Wang, Syracuse University
Mainack Mondal, Max Planck Institute for Software Systems (MPI-SWS); Yabing Liu, Northeastern University; Bimal Viswanath and Krishna P. Gummadi, Max Planck Institute for Software Systems (MPI-SWS); Alan Mislove, Northeastern University

Online social network (OSN) users upload millions of pieces of content to share with others every day. While a significant portion of this content is benign (and is typically shared with all friends or all OSN users), there are certain pieces of content that are highly privacy sensitive. Sharing such sensitive content raises significant privacy concerns for users, and it becomes important for the user to protect this content from being exposed to the wrong audience. Today, most OSN services provide fine-grained mechanisms for specifying social access control lists (social ACLs, or SACLs), allowing users to restrict their sensitive content to a select subset of their friends. However, it remains unclear how these SACL mechanisms are used today. To design better privacy management tools for users, we need to first understand the usage and complexity of SACLs specified by users.
In this paper, we present the first large-scale study of fine-grained privacy preferences of over 1,000 users on Facebook, providing us with the first ground-truth information on how users specify SACLs on a social networking service. Overall, we find that a surprisingly large fraction (17.6%) of content is shared with SACLs. However, we also find that the SACL membership shows little correlation with either profile information or social network links; as a result, it is difficult to predict the subset of a user’s friends likely to appear in a SACL. On the flip side, we find that SACLs are often reused, suggesting that simply making recent SACLs available to users is likely to significantly reduce the burden of privacy management on users.
Hootan Rashtian, Yazan Boshmaf, Pooya Jaferian, and Konstantin Beznosov, University of British Columbia

Accepting friend requests from strangers in Facebook-like online social networks is known to be a risky behavior. Still, empirical evidence suggests that Facebook users often accept such requests at a high rate. As a first step towards technology support of users in their decisions about friend requests, we investigate why users accept such requests. We conducted two studies of users’ befriending behavior on Facebook. Based on 20 interviews with active Facebook users, we developed a friend request acceptance model that explains how various factors influence user acceptance behavior. To test and refine our model, we also conducted a confirmatory study with 397 participants using Amazon Mechanical Turk. We found that four factors significantly impact the receiver’s decision, namely, knowing the requester in the real world, having common hobbies or interests, having mutual friends, and the closeness of mutual friends. Based on our findings, we offer design guidelines for improving the usability of the corresponding user interfaces.
Pooya Jaferian, Hootan Rashtian, and Konstantin Beznosov, University of British Columbia

This work addresses the problem of reviewing complex access policies in an organizational context using two studies. In the first study, we used semi-structured interviews to explore the access review activity and identify its challenges. The interviews revealed that access review involves challenges such as scale, technical complexity, the frequency of reviews, human errors, and exceptional cases. We also modeled access review in the activity theory framework. The model shows that access review requires an understanding of the activity context, including information about the users, their jobs, their access rights, and the history of the access policy. We then used activity theory guidelines to design a new user interface named AuthzMap. We conducted an exploratory user study with 340 participants to compare the use of AuthzMap with two existing commercial systems for access review. The results show that AuthzMap improved the efficiency of access review in 5 of the 7 tested scenarios, compared to the existing systems. AuthzMap also improved the accuracy of actions in one of the 7 scenarios, and negatively affected accuracy in only one scenario.