Tuesday, 9:00 a.m.–9:05 a.m.
Program Chair: Patrick Traynor, Georgia Institute of Technology
Tuesday, 9:05 a.m.–10:35 a.m.
Session Chair: Matt Blaze, University of Pennsylvania
Robert Templeman, Indiana University Bloomington and Naval Surface Warfare Center, Crane Division; Apu Kapadia, Indiana University Bloomington
Flash memory is used for non-volatile storage in a vast array of devices that touch users at work, at home, and at play. Flash memory offers many desirable characteristics, but its key weakness is limited write endurance. Endurance limits continue to decrease as smaller integrated circuit architectures and greater storage densities are pursued. There is a significant body of published work demonstrating methods to extend flash endurance under normal use, but performance of these methods under malicious use has not been adequately researched.
We introduce GANGRENE, an attack that accelerates the wear of flash devices to induce premature failure. By testing a sampling of flash drives, we show that wear can be accelerated by an order of magnitude. Our results offer evidence that vendor-provided endurance ratings, based on normal use, ignore this underlying vulnerability. Because of the high penetration of flash memory, the threat of such attacks deserves attention from vendors and researchers in the community. We propose recommendations and mitigations for GANGRENE and suggest future work to address such vulnerabilities.
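The abstract does not include code, but the general shape of such a wear-acceleration attack can be sketched as a tight rewrite-and-flush loop that gives the controller's wear leveling little to work with. The device path, block size, and pass count below are illustrative placeholders, not values from the paper, and the loop should only ever be pointed at a throwaway device.

import os

# Illustrative only: repeatedly rewrite and flush the same small region of a
# flash-backed file so each pass forces fresh program/erase activity.
DEVICE_PATH = "/mnt/usb/wear_target.bin"   # placeholder path
BLOCK_SIZE = 4096                          # one logical page
PASSES = 1_000_000                         # placeholder count

def hammer(path=DEVICE_PATH, passes=PASSES):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for i in range(passes):
            # Alternate bit patterns so identical data cannot be silently skipped.
            pattern = b"\xAA" if i % 2 == 0 else b"\x55"
            os.lseek(fd, 0, os.SEEK_SET)
            os.write(fd, pattern * BLOCK_SIZE)
            os.fsync(fd)   # push the write through the page cache to flash
    finally:
        os.close(fd)

if __name__ == "__main__":
    hammer()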
Stefano Ortolani, Vrije Universiteit, Amsterdam; Bruno Crispo, University of Trento, Trento
Keyloggers are a prominent class of malicious software that surreptitiously logs all user activity. Traditional approaches aim to eradicate this threat by either preventing or detecting its deployment. In this paper, we take a new perspective on this problem: we explore the possibility of tolerating the presence of a keylogger, while making no assumption about the keylogger internals or the system state. The key idea is to confine the user keystrokes in a noisy event channel flooded with artificially generated activity. Our technique allows legitimate applications to transparently recover the original user keystrokes, while any deployed keylogger is exposed to a stream of data statistically indistinguishable from random noise. We evaluate our solution in realistic settings and prove the soundness of our noise model. We also verify that the overhead introduced is acceptable and has no significant impact on the user experience.
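A toy simulation of the core idea, decoys that only the legitimate receiver can strip back out, is sketched below. It is not the authors' implementation: the real system works on the OS event channel and uses a carefully argued noise model, whereas this sketch simply shares a PRNG seed between the injector and the legitimate consumer so the decoy stream can be regenerated and removed.

import random

SHARED_SEED = 1234  # placeholder secret shared with the legitimate application

def flood(user_keys, noise_per_key=3, seed=SHARED_SEED):
    """Interleave each real keystroke with PRNG-generated decoys."""
    rng = random.Random(seed)
    mixed = []
    for key in user_keys:
        for _ in range(noise_per_key):
            mixed.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))
        mixed.append(key)
    return mixed  # what a keylogger hooking the flooded channel would record

def recover(mixed, noise_per_key=3, seed=SHARED_SEED):
    """Legitimate receiver regenerates the decoy stream and drops it."""
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(mixed):
        for _ in range(noise_per_key):
            assert mixed[i] == rng.choice("abcdefghijklmnopqrstuvwxyz")
            i += 1
        out.append(mixed[i])
        i += 1
    return "".join(out)

print(recover(flood(list("hunter2"))))  # -> hunter2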
Shane S. Clark, Benjamin Ransford, and Kevin Fu, University of Massachusetts, Amherst
The trend toward energy-proportional computing, in which power consumption scales closely with workload, is making computers increasingly vulnerable to information leakage via whole-system power analysis. Saving energy is an unqualified boon for computer operators, but this trend has produced an unintentional side effect: it is becoming easier to identify computing activities in power traces because idle-power reduction has lowered the effective noise floor. This paper offers preliminary evidence that the analysis of AC power traces can be both harmful to privacy and beneficial for malware detection, the latter of which may benefit embedded (e.g., medical) devices.
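The lowered noise floor the authors describe is what makes even crude detectors work. The toy sketch below, assuming a trace of instantaneous power samples and a simple RMS threshold over an assumed idle baseline, only illustrates why activity stands out; the paper's analysis is more sophisticated than this.

import numpy as np

def active_windows(trace, fs=1000, win_s=0.5, idle_floor=None, margin=1.5):
    """Flag windows whose RMS power exceeds the idle floor by `margin`.

    `trace` is a 1-D array of power samples (watts) at `fs` Hz. The
    threshold logic is illustrative, not the paper's classifier.
    """
    win = int(fs * win_s)
    n = len(trace) // win
    rms = np.sqrt(np.mean(trace[: n * win].reshape(n, win) ** 2, axis=1))
    if idle_floor is None:
        idle_floor = np.percentile(rms, 10)   # assume the quietest windows are idle
    return np.nonzero(rms > margin * idle_floor)[0] * win_s  # window start times (s)

# Toy trace: 10 s of idle draw plus a burst of activity between 4 s and 6 s.
t = np.arange(0, 10, 1 / 1000)
trace = 30 + np.random.randn(t.size) + 20 * ((t > 4) & (t < 6))
print(active_windows(trace))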
Tuesday, 10:35 a.m.–11:00 a.m.
Tuesday, 11:00 a.m.–12:30 p.m.
Session Chair: Kevin Butler, University of Oregon
Chengyu Song, Paul Royal, and Wenke Lee, Georgia Institute of Technology
To solve the scalability problem introduced by the exponential growth of malware, numerous automated malware analysis techniques have been developed. Unfortunately, all of these approaches make previously unaddressed assumptions that undermine the tenability of the automated malware analysis process. To highlight this concern, we developed two obfuscation techniques that make the successful execution of a malware sample dependent on the unique properties of the original host it infects. To reinforce the potential for malware authors to leverage this type of analysis resistance, we discuss the Flashback botnet’s use of a similar technique to prevent the automated analysis of its samples.
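The general technique, sometimes called environmental keying, can be sketched as sealing a payload under a key derived from a host-specific identifier so it cannot be recovered on an analysis machine. The sketch below uses the MAC address via uuid.getnode() and a toy hash-based XOR stream purely for illustration; it is not the paper's obfuscation nor Flashback's exact scheme (which reportedly keyed on the hardware UUID).

import hashlib
import uuid

def host_key() -> bytes:
    """Derive a key from a host-specific identifier (here, the MAC address)."""
    host_id = str(uuid.getnode()).encode()
    return hashlib.sha256(host_id).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' for illustration only; not a real cipher."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# A payload sealed this way decodes only on the host whose identifier produced
# the key; in a sandbox or analysis VM the derived key differs and the
# plaintext is unrecoverable.
sealed = xor_stream(b"example payload", host_key())
print(xor_stream(sealed, host_key()))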
Saran Neti and Anil Somayaji, Carleton University; Michael E. Locasto, University of Calgary
Although many have recognized the risks of software monocultures, it is not currently clear how much and what kind of diversity would be needed to address these risks. Here we attempt to provide insight into this issue using a simple model of hosts and vulnerabilities connected in a bipartite graph. We use this graph to compute diversity metrics based on Rényi entropy and to formulate an anti-coordination game to understand why computer host owners would choose to diversify. Since security is not the only factor considered when choosing software in the real world, we propose a slight variation of the popular security wargame Capture the Flag that can serve as a testbed for understanding the utility of diversity as a defense strategy.
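For reference, the Rényi entropy of order $\alpha$ for a distribution $p = (p_1, \dots, p_n)$, which in this setting one would read as the share of hosts running each software configuration (the abstract does not spell out the distribution or which orders the authors favor), is

\[
H_{\alpha}(X) = \frac{1}{1-\alpha}\,\log\!\left(\sum_{i=1}^{n} p_i^{\alpha}\right),
\qquad \alpha \ge 0,\ \alpha \neq 1,
\]

which recovers the Shannon entropy $H(X) = -\sum_i p_i \log p_i$ in the limit $\alpha \to 1$; larger values under any order indicate a population spread more evenly across configurations, i.e., less of a monoculture.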
Feng Lu, Jiaqi Zhang, and Stefan Savage, University of California, San Diego
The rich nature of modern Web services and the emerging “mash-up” programming model make it difficult to predict the potential interactions and usage scenarios that can emerge. Moreover, while the potential security implications for individual client browsers have been widely internalized (e.g., XSS, CSRF, etc.), there is less appreciation of the risks posed in the other direction: user abuse of Web service providers. In particular, we argue that Web services and pieces of services can be easily combined to create entirely new capabilities that may themselves be at odds with the security policies that providers (or the Internet community at large) desire to enforce. As a proof of concept, we demonstrate a fully functioning Web proxy service called CloudProxy. Constructed entirely out of pieces of unrelated Google and Facebook functionality, CloudProxy effectively launders a user’s connection through these providers’ resources.
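The abstract does not name the specific provider features it composes, so the fragment below only illustrates the laundering pattern in the abstract sense: the client asks a provider-hosted "fetch this URL for me" feature to retrieve the page, so the target server sees the provider's address rather than the client's. The fetcher endpoint is hypothetical, not a real Google or Facebook interface.

import urllib.parse
import urllib.request

# Hypothetical provider-hosted fetch endpoint; CloudProxy chains real
# (unnamed here) provider features, not this placeholder.
FETCHER = "https://provider.example/fetch?url="

def fetch_via_provider(target_url: str) -> bytes:
    """The target server sees the provider's IP, not the requesting client's."""
    laundered = FETCHER + urllib.parse.quote(target_url, safe="")
    with urllib.request.urlopen(laundered) as resp:
        return resp.read()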
Tuesday, 12:30 p.m.–2:00 p.m.
Tuesday, 2:00 p.m.–3:30 p.m.
Session Chair: William Enck, North Carolina State University
Adrienne Porter Felt, Serge Egelman, Matthew Finifter, Devdatta Akhawe, and David Wagner, University of California, Berkeley
Application platforms provide applications with access to hardware (e.g., GPS and cameras) and personal data. Modern platforms use permission systems to protect access to these resources. The nature of these permission systems varies widely across platforms: some obtain user consent as part of installation, while others display runtime consent dialogs. We propose a set of guidelines to aid platform designers in determining the most appropriate permission-granting mechanism for a given permission. We apply our proposal to a smartphone platform. A preliminary evaluation indicates that our model will reduce the number of warnings presented to users, thereby reducing habituation effects.
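The abstract does not enumerate the guidelines, so the decision function below is only a plausible illustration of the kind of mapping such a proposal implies: route each permission to the least intrusive mechanism that still gives the user meaningful control. The permission attributes and category names are assumptions for the sketch, not the paper's.

from dataclasses import dataclass

@dataclass
class Permission:
    name: str
    user_initiated: bool   # can the request ride on a user gesture (e.g., a file picker)?
    reversible: bool       # can the user easily undo the consequences?
    severe: bool           # does misuse cause lasting harm (cost, privacy loss)?

def granting_mechanism(p: Permission) -> str:
    """Illustrative mapping from permission attributes to a consent mechanism."""
    if p.user_initiated:
        return "trusted UI (grant implied by the user's own action)"
    if p.reversible and not p.severe:
        return "automatic grant (no prompt; rely on auditing and revocation)"
    if not p.severe:
        return "install-time warning"
    return "runtime consent dialog"

print(granting_mechanism(
    Permission("send_sms", user_initiated=False, reversible=False, severe=True)))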
Zheng Dong and L. Jean Camp, Indiana University
Peer production and crowdsourcing have been widely implemented to create various types of goods and services. Although successful examples such as Linux and Wikipedia have been established in other domains, experts have paid little attention to peer-produced systems in computer security beyond collaborative recommender and intrusion detection systems. In this paper we present a new approach for security system design targeting a set of non-technical, self-organized communities. We argue that unlike many current security implementations (which suffer from low rates of adoption), individuals would have greater incentives to participate in a security community characterized by peer production. A specific design framework for peer production and crowdsourcing is introduced. One high-level security scenario (on mitigation of insider threats) is then provided as an example implementation. Defeating the insider threat was chosen as an example implementation because it has been framed as a strictly (and inherently) firm-produced good. We argue that use of peer production and crowdsourcing will increase network security in the aggregate.
Mohit Tiwari, Prashanth Mohan, Andrew Osheroff, and Hilfi Alkaff, University of California, Berkeley; Elaine Shi, University of Maryland, College Park; Eric Love, Dawn Song, and Krste Asanović, University of California, Berkeley
Users today are unable to use the rich collection of third-party untrusted applications without risking significant privacy leaks. In this paper, we argue that current and proposed application- and data-centric security policies do not map well to users’ expectations of privacy. In the eyes of a user, applications and peripheral devices exist merely to provide functionality and should have no place in controlling privacy. Moreover, most users cannot handle intricate security policies dealing with system concepts such as labeling of data, application permissions, and virtual machines. Not only are current policies impenetrable to most users, but they also lead to security problems such as privilege-escalation attacks and implicit information leaks.
Our key insight is that users naturally associate data with real-world events, and want to control access at the level of human contacts. We introduce Bubbles, a context-centric security system that explicitly captures users’ privacy desires by allowing human contact lists to control access to data clustered by real-world events. Bubbles infers information-flow rules from these simple context-centric access-control rules to enable secure use of untrusted applications on users’ data.
We also introduce a new programming model for untrusted applications that allows them to be functional while still upholding the users’ privacy policies. We evaluate the model’s usability by porting an existing medical application and writing a calendar app from scratch. Finally, we show the design of our system prototype running on Android that uses bubbles to automatically infer all dangerous permissions without any user intervention. Bubbles prevents Android-style permission escalation attacks without requiring users to specify complex information flow rules.
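A rough sketch of the context-centric model described in this abstract follows: data grouped into event "bubbles", access expressed purely through contact lists, and an inferred no-flow rule between bubbles with different audiences. The class names and checks are illustrative, not the authors' API.

from dataclasses import dataclass, field

@dataclass
class Bubble:
    """Data clustered around a real-world event, shared with a contact list."""
    event: str
    members: set = field(default_factory=set)
    items: list = field(default_factory=list)

def may_view(bubble: Bubble, contact: str) -> bool:
    # Access control is stated only in terms of human contacts.
    return contact in bubble.members

def may_flow(src: Bubble, dst: Bubble) -> bool:
    # Inferred information-flow rule: an untrusted app may move data from
    # src into dst only if doing so cannot widen the audience.
    return dst.members <= src.members

ski_trip = Bubble("ski trip 2012", {"alice", "bob"}, ["photos/IMG_001.jpg"])
work = Bubble("quarterly review", {"alice", "carol"}, [])
print(may_view(ski_trip, "bob"))   # True
print(may_flow(ski_trip, work))    # False: carol is not in the ski-trip bubble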
Tuesday, 3:30 p.m.–4:00 p.m.
Tuesday, 4:00 p.m.–5:00 p.m.
Session Chair: Ian Goldberg, University of Waterloo
Markus Jakobsson, PayPal; Mayank Dhiman, PEC University of Technology
We study passwords from the perspective of how they are generated, with the goal of better understanding how to distinguish good passwords from bad ones. Based on reviews of large quantities of passwords, we argue that users produce passwords using a small set of rules and types of components, both of which we describe herein. We build a parser of passwords, and show how this can be used to gain a better understanding of passwords, as well as to block weak passwords.
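The abstract does not reproduce the rule set, but the flavor of such a parser can be shown with a toy segmenter that splits a password into dictionary words, digit runs, and symbol runs. The tiny word list and component labels are stand-ins, not the paper's rules.

import re

WORDS = {"password", "dragon", "monkey", "love", "secret"}  # stand-in dictionary

def parse_password(pw: str):
    """Segment a password into (component_type, text) pairs."""
    parts = []
    for token in re.findall(r"[A-Za-z]+|\d+|[^A-Za-z\d]+", pw):
        if token.isalpha():
            kind = "word" if token.lower() in WORDS else "string"
        elif token.isdigit():
            kind = "digits"
        else:
            kind = "symbols"
        parts.append((kind, token))
    return parts

print(parse_password("dragon1987!"))
# [('word', 'dragon'), ('digits', '1987'), ('symbols', '!')]

A weak-password blocker built on top of such a parser could, for example, reject passwords composed of a single common word plus a short digit suffix.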
Robert J. Walls, Shane S. Clark, and Brian Neil Levine, University of Massachusetts, Amherst
The price of Internet services is user information, and many pay it without hesitation. While myriad privacy tools exist that thwart the detailed compilation of information about user habits, these tools often assume that reduced functionality is always justified by increased privacy. In contrast, we propose the adoption of functional privacy as a guiding principle in the development of new privacy tools. Functional privacy has the overarching goal of maintaining all functionality while improving privacy as much as practically possible — rather than forcing users to make decisions about tradeoffs that they may not fully understand. As a concrete example of a functional privacy approach, we implemented Milk, a Google Chrome extension that automatically rewrites HTTP cookies to strictly bind them to the first-party domains from which they were set. We also identify existing privacy-preserving tools that we believe embody the principle of functional privacy and discuss the limitations of others.
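Milk itself is a Chrome extension; the fragment below only illustrates the rewriting rule in isolation, under the assumption that binding amounts to namespacing the stored cookie key by the first-party domain under which it was set, so the same tracker cookie set under two sites no longer links the two visits. The function names are illustrative, not Milk's code.

def bind_cookie(name: str, first_party: str) -> str:
    """Rewrite a cookie name so it is only meaningful under one first party."""
    return f"{first_party}|{name}"

def unbind_cookie(stored_name: str, first_party: str):
    """Return the original name if the cookie was bound to this first party."""
    prefix = first_party + "|"
    return stored_name[len(prefix):] if stored_name.startswith(prefix) else None

# A tracker cookie "uid" set while visiting news.example and shop.example is
# stored under two unlinkable keys, so the third party cannot correlate them.
print(bind_cookie("uid", "news.example"))   # news.example|uid
print(bind_cookie("uid", "shop.example"))   # shop.example|uid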