Workshop on Privacy Indicators

All sessions will be held in Denver Ballroom 4 unless otherwise noted.
Papers are available for download below to registered attendees now and to everyone beginning June 22, 2016. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author(s).

Downloads for Registered Attendees

Attendee Files 
Workshop on Privacy Indicators Paper Archive (ZIP)


Wednesday, June 22, 2016

7:30 am–8:30 am Wednesday

Continental Breakfast

Ballroom Foyer

8:30 am–8:40 am Wednesday

Welcome

Simson Garfinkel, National Institute of Standards and Technology (NIST)

8:40 am–9:00 am Wednesday
9:00 am–10:00 am Wednesday

Work Session 1: What Should Privacy Indicators Communicate?

Privacy Wedges: Area-Based Audience Selection for Social Network Posts

Frederic Raber, Alexander De Luca, and Moritz Graus, Saarland University

We present Privacy Wedges, a user interface designed to let users of online social networks make meaningful decisions about whom to share their posts with. Privacy Wedges display the privacy settings of historical posts, making it possible to visualize them in a meaningful and comprehensible way. We conducted a user study with 26 participants which showed that unwanted disclosure could be significantly reduced compared to the current implementation of Facebook: significantly fewer posts were shown to friends for whom they were not appropriate or intended.

Available Media

Influence of Privacy Attitude and Privacy Cue Framing on Android App Choices

Prashanth Rajivan, Carnegie Mellon University; Jean Camp, Indiana University

Transmission of personally identifiable information from smartphone apps has become as ubiquitous as smartphones themselves. The privacy controls currently provided, in the form of permissions warnings, are insufficient, especially for communicating risk during app installation. Presenting easy-to-understand privacy risk icons/cues would help people make low-risk app choices. However, the human-factor requirements for designing such privacy risk icons are largely unknown. To this end, we conducted a user experiment with 480 participants who made a series of app choices with/without privacy priming and with/without privacy-risk-communicating icons. Overall, presenting risk-communicating icons alongside app benefit icons has a significant effect on users' app choices in terms of the risk-benefit trade-off. We found that one type of privacy icon framing leads to mediocre app choices under particular conditions. We also found that priming for privacy increases concern while choosing apps but may not have an augmenting effect on final app choices when combined with certain types of privacy framing. Based on our findings, we conclude with human-factor recommendations for designing privacy-risk-communicating icons.

Available Media

The Privacy Policy Paradox

Rena Coen, Jennifer King, and Richmond Wong, University of California, Berkeley

There have been multiple studies exploring the content and efficacy of privacy policies. However, to date no one has examined them from the angle we propose for this study: determining whether the mere presence of a privacy policy link on a website has any significant influence on one's willingness to disclose personal information. Our study examines whether the link itself acts as a trust heuristic, without testing a respondent's comprehension of or opinion about the privacy policy itself. In this paper, we discuss a study currently in progress to examine this question.

Available Media

Reasonable Expectations of Privacy Indicators

Erin Kenneally, International Computer Science Institute and U.S. Department of Homeland Security

The incumbent approach to defining our reasonable expectations of privacy (REP) fails to account for the evolved threats ushered in by the Internet context. As a result, the privacy controls it anchors are being applied in an inconsistent, ad hoc, and precarious manner.

This paper briefly introduces a novel approach to domesticate REP and operationalize its application through privacy controls by taking cues from the playbook of network science. This approach re-conceptualizes information privacy as a scale-free network that follows power law dynamics, and it suggests gauging privacy risk by looking at the nature and quality of links and nodes controlling personal information artifacts rather than whether the interface to data is deemed public or private.

Conjecturally, the resulting privacy framework will achieve balance between individual rights protection and public good goals by advocating a regime of reciprocal obligations between the countervailing interests: the recognition of a more nuanced privacy continuum by private information controllers and more overt manifestations of REP by privacy subjects, both steeped in a network theory-informed understanding of information flows.

Available Media
10:00 am–10:30 am Wednesday

Break with Refreshments

Ballroom Foyer

10:30 am–11:00 am Wednesday

Work Session 2: Is Privacy Quantifiable?

Better Privacy Indicators: A New Approach to Quantification of Privacy Policies

Manar Alohaly and Hassan Takabi, University of North Texas

A privacy notice is the statement that describes all data practices of a particular app. Presenting the privacy notice as lengthy text has not been successful, as it imposes reading fatigue. Therefore, several design proposals that substitute for the classic privacy notice have been deployed for different audiences and in different contexts as a means of enhancing users' awareness. However, there is still no notice display that helps users form a coherent idea of an app's data gathering practices and seamlessly allows them to compare alternative applications on that basis. In this work, we propose an approach to quantify the amount of data an application collects by analyzing its privacy policy text using natural language processing (NLP) techniques. There are numerous use cases for such a quantitative measure, one of which is designing a visceral notice that relies on an experiential approach to communicate privacy information to users. Our results show that this quantification approach holds promise. Using our quantification measure, we propose a new display for a nano-sized visceral notice that leverages users' familiarity with pie charts as a measuring tool to communicate information about an app's data collection practices.

Available Media

Rating Indicator Criteria for Privacy Policies

Joel R. Reidenberg, N. Cameron Russell, and Thomas B. Norton, Fordham Law School

This short paper describes ongoing research to identify the legal and policy criteria necessary for the development of meaningful privacy rating indicators. Previous and current attempts to provide online privacy rating indicators such as grades and nutrition labels thus far have had only limited success and have not been widely adopted. The purpose of this research is to review the history of online privacy rating indicators, to identify deficiencies and obstacles to meaningful ratings, and to map the requirements or criteria that need to be established in law and policy for indicators to be meaningful and successful.

Available Media

Data-Driven Privacy Indicators

Hamza Harkous, Rameez Rahman, and Karl Aberer, École Polytechnique Fédérale de Lausanne (EPFL)

Third-party applications work on top of existing platforms that host users' data. Although these apps access this data to provide users with specific services, they can also use it for monetization or profiling purposes. In practice, there is a significant gap between users' privacy expectations and the actual access levels of third-party apps, which are often over-privileged. Because of weaknesses in existing privacy indicators, users are generally not well informed about what data these apps get. Even more, we are witnessing the rise of inverse privacy: third parties collect data that enables them to know information about users that the users themselves do not know, cannot remember, or cannot reach. In this paper, we describe our recent experiences with the design and evaluation of Data-Driven Privacy Indicators (DDPIs), an approach that attempts to reduce the aforementioned privacy gap. DDPIs are realized by having a trusted party (e.g., the app platform) analyze users' data and integrate the analysis results into the privacy indicator's interface. We discuss DDPIs in the context of third-party apps on cloud platforms such as Google Drive and Dropbox. Specifically, we present our recent work on Far-reaching Insights, which shows users the insights that apps can infer about them (e.g., their topics of interest, collaboration and activity patterns, etc.). We then present History-based Insights, a novel privacy indicator that informs users of what data is already accessible to an app vendor, based on previous app installations by the user or her collaborators. We further discuss ideas for new DDPIs and outline the challenges facing the wide-scale deployment of such indicators.

Available Media
11:00 am–11:30 am Wednesday
11:30 am–12:10 pm Wednesday

Open Discussion: Transitioning Indicators from Research to Practice

Introduction: Simson Garfinkel, National Institute of Standards and Technology (NIST)