Technical Sessions

The full 21st USENIX Security Symposium Proceedings are now available.

August 8, 2012

8:45 a.m.–9:00 a.m. Wednesday

Opening Remarks

Grand A-H

Welcome and Awards

Program Chair: Tadayoshi Kohno, University of Washington

Available Media
9:00 a.m.–10:30 a.m. Wednesday

Keynote Address

Grand A-H

The Evolution of Information Assurance

Speaker: Dickie George, Johns Hopkins Applied Physics Laboratory

This talk will discuss the change in assurance from the cold war of the '60s and '70s to today. The change in communications, the type of information targeted, and the emergence of new classes of adversaries force a huge change in assurance strategy. Mr. George will discuss the government’s threat/adversary model and the assurance models of the last century and this century, and will include some real-life stories that highlight the threat and response.

Dickie George currently serves as the Senior Advisor for Cyber Security at the Johns Hopkins University Applied Physics Laboratory. He provides security advice to leadership and the technical workforce, and currently works on a number of projects in support of the U.S. government. He joined the Lab in 2012, following a 41-year career at NSA, during the last 8 years of which he served as Technical Director for the Information Assurance Directorate. At NSA, Dickie was a cryptomathematician and the author of over 125 cryptanalytic papers.

Available Media
10:30 a.m.–11:00 a.m. Wednesday

Break

Grand Ballroom Foyer

11:00 a.m.–12:30 p.m. Wednesday

Spam and Drugs

Grand ABC

Session Chair: Kevin Fu, University of Massachusetts Amherst

PharmaLeaks: Understanding the Business of Online Pharmaceutical Affiliate Programs

Damon McCoy, George Mason University; Andreas Pitsillidis and Grant Jordan, University of California, San Diego; Nicholas Weaver and Christian Kreibich, University of California, San Diego, and International Computer Science Institute; Brian Krebs, KrebsOnSecurity.com; Geoffrey M. Voelker, Stefan Savage, and Kirill Levchenko, University of California, San Diego

Online sales of counterfeit or unauthorized products drive a robust underground advertising industry that includes email spam, “black hat” search engine optimization, forum abuse and so on. Virtually everyone has encountered enticements to purchase drugs, prescription-free, from an online “Canadian Pharmacy.” However, even though such sites are clearly economically motivated, the shape of the underlying business enterprise is not well understood precisely because it is “underground.” In this paper we exploit a rare opportunity to view three such organizations—the GlavMed, SpamIt and RX-Promotion pharmaceutical affiliate programs—from the inside. Using “ground truth” data sets including four years of raw transaction logs covering over $170 million in sales, we provide an in-depth empirical analysis of worldwide consumer demand, the key role of independent third-party advertisers, and a detailed cost accounting of the overall business model.

Available Media

B@bel: Leveraging Email Delivery for Spam Mitigation

Gianluca Stringhini and Manuel Egele, University of California, Santa Barbara; Apostolis Zarras and Thorsten Holz, Ruhr-University Bochum; Christopher Kruegel and Giovanni Vigna, University of California, Santa Barbara

Traditional spam detection systems either rely on content analysis to detect spam emails, or attempt to detect spammers before they send a message (i.e., they rely on the origin of the message). In this paper, we introduce a third approach: we present a system for filtering spam that takes into account how messages are sent by spammers. More precisely, we focus on the email delivery mechanism, and analyze the communication at the SMTP protocol level.

We introduce two complementary techniques as concrete instances of our new approach. First, we leverage the insight that different mail clients (and bots) implement the SMTP protocol in slightly different ways. We automatically learn these SMTP dialects and use them to detect bots during an SMTP transaction. Empirical results demonstrate that this technique is successful in identifying (and rejecting) bots that attempt to send emails. Second, we observe that spammers also take into account server feedback (for example to detect and remove non-existent recipients from email address lists). We can take advantage of this observation by returning fake information, thereby poisoning the server feedback on which the spammers rely. The results of our experiments show that by sending misleading information to a spammer, it is possible to prevent recipients from receiving subsequent spam emails from that same spammer.
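
The first technique can be illustrated with a toy sketch (not the authors' system): classify an SMTP speaker by small deviations in how it drives the protocol. The dialect fingerprints and client names below are entirely hypothetical.

```python
# Toy SMTP-dialect classifier. Fingerprint: greeting verb, whether the
# greeting carries a hostname argument, and whether the client closes
# the session with QUIT. All fingerprints below are invented examples.

DIALECTS = {
    ("EHLO", True, True): "legitimate MTA (hypothetical fingerprint)",
    ("HELO", False, False): "spam bot A (hypothetical fingerprint)",
    ("EHLO", False, False): "spam bot B (hypothetical fingerprint)",
}

def classify(transcript):
    """Map a list of client commands to a dialect label."""
    greeting = next(c for c in transcript
                    if c.split()[0] in ("EHLO", "HELO"))
    verb = greeting.split()[0]
    has_hostname = len(greeting.split()) > 1
    says_quit = any(c.split()[0] == "QUIT" for c in transcript)
    return DIALECTS.get((verb, has_hostname, says_quit), "unknown dialect")

bot = ["HELO", "MAIL FROM:<a@b>", "RCPT TO:<c@d>", "DATA"]
mta = ["EHLO mx.example.com", "MAIL FROM:<a@b>",
       "RCPT TO:<c@d>", "DATA", "QUIT"]
print(classify(bot))   # spam bot A (hypothetical fingerprint)
print(classify(mta))   # legitimate MTA (hypothetical fingerprint)
```

A real system learns such fingerprints automatically from live SMTP transactions rather than hard-coding them.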

Available Media

Impact of Spam Exposure on User Engagement

Anirban Dasgupta, Yahoo! Labs; Kunal Punera, RelateIQ Inc.; Justin M. Rao, Microsoft Research; Xuanhui Wang, Facebook

In this paper we quantify the effect of unsolicited emails (spam) on the behavior and engagement of email users. Since performing randomized experiments in this setting is rife with practical and moral issues, we seek to determine causal relationships using observational data, something that is difficult in many cases. Using a novel modification of a user matching method combined with a time series regression on matched user pairs, we develop a framework for such causal inference that is particularly suited to the spam exposure use case. Using our matching technique, we objectively quantify the effect that continued exposure to spam has on user engagement in Yahoo! Mail. We find that spam exposure indeed leads to lower user engagement, significantly so both statistically and economically. The impact is non-linear: large changes impact users in a progressively more negative fashion. The impact is strongest on “voluntary” categories of engagement such as composed emails and lowest on “responsive” engagement metrics. Our estimation technique and results not only quantify the negative impact of abuse, but also allow decision makers to estimate potential engagement gains from proposed investments in abuse mitigation.

Available Media

Invited Talk

Grand DEFGH

Robots and Privacy

Speaker: Ryan Calo, Assistant Professor, University of Washington School of Law, and Affiliate Scholar, Stanford Law School Center for Internet and Society

It is not hard to imagine why drones, telepresence, and other robotic technologies raise privacy concerns. Practically by definition, robots are equipped with the ability to sense, process, and record the world around them. Robots can go places humans cannot go, see things humans cannot see. Robots are, first and foremost, a human instrument. And, after industrial manufacturing, the principal use to which we’ve put that instrument has been surveillance. This talk explores the various ways robots implicate privacy and why, absent conscientious legal and design interventions, we may never realize the potential of this transformative technology.

Ryan Calo researches and presents on the intersection of law and technology. His work has appeared in the New York Times, the Associated Press, the Wall Street Journal, and other news outlets. Calo serves on several advisory and program committees, including the Electronic Privacy Information Center, the Future of Privacy Forum, the Mozilla Legal Advisory Board, and National Robotics Week. He also co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence.

Available Media
12:30 p.m.–2:00 p.m. Wednesday
2:00 p.m.–3:30 p.m. Wednesday

CAPTCHAs and Password Strength

Grand ABC

Session Chair: Dan Wallach, Rice University

Security and Usability Challenges of Moving-Object CAPTCHAs: Decoding Codewords in Motion

Y. Xu, University of North Carolina at Chapel Hill; G. Reynaga and S. Chiasson, Carleton University; J.-M. Frahm and F. Monrose, University of North Carolina at Chapel Hill; P. van Oorschot, Carleton University

We explore the robustness and usability of moving-image object recognition (video) captchas, designing and implementing automated attacks based on computer vision techniques. Our approach is suitable for broad classes of moving-image captchas involving rigid objects. We first present an attack that defeats instances of such a captcha (NuCaptcha) representing the state-of-the-art, involving dynamic text strings called codewords. We then consider design modifications to mitigate the attacks (e.g., overlapping characters more closely). We implement the modified captchas and test if designs modified for greater robustness maintain usability. Our lab-based studies show that the modified captchas fail to offer viable usability, even when the captcha strength is reduced below acceptable targets—signaling that the modified designs are not viable. We also implement and test another variant of moving text strings using the known emerging images idea. This variant is resilient to our attacks and also offers similar usability to commercially available approaches. We explain why fundamental elements of the emerging images concept resist our current attack where others fail.

Available Media

How Does Your Password Measure Up? The Effect of Strength Meters on Password Creation

Blase Ur, Patrick Gage Kelley, Saranga Komanduri, Joel Lee, Michael Maass, Michelle L. Mazurek, Timothy Passaro, Richard Shay, Timothy Vidas, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University

To help users create stronger text-based passwords, many web sites have deployed password meters that provide visual feedback on password strength. Although these meters are in wide use, their effects on the security and usability of passwords have not been well studied.

We present a 2,931-subject study of password creation in the presence of 14 password meters. We found that meters with a variety of visual appearances led users to create longer passwords. However, significant increases in resistance to a password-cracking algorithm were only achieved using meters that scored passwords stringently. These stringent meters also led participants to include more digits, symbols, and uppercase letters.

Password meters also affected the act of password creation. Participants who saw stringent meters spent longer creating their password and were more likely to change their password while entering it, yet they were also more likely to find the password meter annoying. However, the most stringent meter and those without visual bars caused participants to place less importance on satisfying the meter. Participants who saw more lenient meters tried to fill the meter and were averse to choosing passwords a meter deemed “bad” or “poor.” Our findings can serve as guidelines for administrators seeking to nudge users towards stronger passwords.
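
As a rough illustration of the stringent-versus-lenient distinction the study draws, here is a minimal meter that only awards a high score when a password does well on both length and character-class diversity. The thresholds and labels are invented for the example, not taken from the paper.

```python
# A toy "stringent" password meter: score = length x character-class
# diversity, so neither a long single-class password nor a short
# diverse one fills the bar. Thresholds below are illustrative.
import string

def score(password):
    classes = sum(any(c in s for c in password) for s in
                  (string.ascii_lowercase, string.ascii_uppercase,
                   string.digits, string.punctuation))
    raw = min(len(password), 20) * classes
    return min(raw / 80.0, 1.0)   # 1.0 == full bar

def label(password):
    s = score(password)
    return "poor" if s < 0.25 else "fair" if s < 0.6 else "strong"

print(label("password"))       # poor: one class, short
print(label("Tr0ub4dor&3xq"))  # strong: long, all four classes
```

The multiplicative form is one simple way to encode "stringent": improving only one dimension cannot compensate for neglecting the other.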

Available Media

I Forgot Your Password: Randomness Attacks Against PHP Applications

George Argyros and Aggelos Kiayias, University of Athens

We provide a number of practical techniques and algorithms for exploiting randomness vulnerabilities in PHP applications. We focus on the predictability of password reset tokens and demonstrate how an attacker can take over user accounts in a web application via predicting or algorithmically derandomizing the PHP core randomness generators. While our techniques are designed for the PHP language, the principles behind our techniques and our algorithms are independent of PHP and can readily apply to any system that utilizes weak randomness generators or low entropy sources. Our results include: algorithms that reduce the entropy of time variables, identifying and exploiting vulnerabilities of the PHP system that enable the recovery or reconstruction of PRNG seeds, an experimental analysis of the Håstad-Shamir framework for breaking truncated linear variables, an optimized online Gaussian solver for large sparse linear systems, and an algorithm for recovering the state of the Mersenne twister generator from any level of truncation. We demonstrate the gravity of our attacks via a number of case studies. Specifically, we show that a number of current widely used web applications can be broken using our techniques including Mediawiki, Joomla, Gallery, osCommerce and others.
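
The flavor of these attacks can be demonstrated with the textbook state-recovery trick against an untruncated Mersenne twister: 624 consecutive 32-bit outputs suffice to rebuild the generator's internal state and predict all future output. The sketch below uses Python's random module (also MT19937-based, like PHP's mt_rand); the paper's harder truncated and seed-recovery cases are not covered here.

```python
# Clone an MT19937 generator from 624 consecutive 32-bit outputs by
# inverting the output tempering, then predict its future output.
import random

def _undo_rshift_xor(y, shift):
    x = y
    for _ in range(32 // shift + 1):
        x = y ^ (x >> shift)
    return x

def _undo_lshift_and_xor(y, shift, mask):
    x = y
    for _ in range(32 // shift + 1):
        x = y ^ ((x << shift) & mask)
    return x & 0xFFFFFFFF

def untemper(y):
    """Invert MT19937's output tempering to recover a state word."""
    y = _undo_rshift_xor(y, 18)
    y = _undo_lshift_and_xor(y, 15, 0xEFC60000)
    y = _undo_lshift_and_xor(y, 7, 0x9D2C5680)
    y = _undo_rshift_xor(y, 11)
    return y

# Observe 624 outputs of a "victim" generator ...
victim = random.Random(1234)
outputs = [victim.getrandbits(32) for _ in range(624)]

# ... rebuild its state (index 624 forces a regeneration on next use)
# and predict everything the victim emits from now on.
clone = random.Random()
clone.setstate((3, tuple(untemper(o) for o in outputs) + (624,), None))
assert clone.getrandbits(32) == victim.getrandbits(32)
```

This is exactly why password reset tokens derived from such generators are predictable once an attacker can sample enough raw output.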

Available Media

Invited Talk

Grand DEFGH

Rajesh Patel, Microsoft

3:30 p.m.–4:00 p.m. Wednesday

Break

Grand Ballroom Foyer

4:00 p.m.–5:00 p.m. Wednesday

Browser Security

Grand ABC

Session Chair: Sam King, University of Illinois at Urbana-Champaign

An Evaluation of the Google Chrome Extension Security Architecture

Nicholas Carlini, Adrienne Porter Felt, and David Wagner, University of California, Berkeley

Vulnerabilities in browser extensions put users at risk by providing a way for website and network attackers to gain access to users’ private data and credentials. Extensions can also introduce vulnerabilities into the websites that they modify. In 2009, Google Chrome introduced a new extension platform with several features intended to prevent and mitigate extension vulnerabilities: strong isolation between websites and extensions, privilege separation within an extension, and an extension permission system. We performed a security review of 100 Chrome extensions and found 70 vulnerabilities across 40 extensions. Given these vulnerabilities, we evaluate how well each of the security mechanisms defends against extension vulnerabilities. We find that the mechanisms mostly succeed at preventing web attacks, but new security mechanisms are needed to protect users from network attacks on extensions, website metadata attacks on extensions, and vulnerabilities that extensions add to websites. We propose and evaluate additional defenses, and we conclude that banning HTTP scripts and inline scripts would prevent 47 of the 50 most severe vulnerabilities with only modest impact on developers.

Available Media

Establishing Browser Security Guarantees through Formal Shim Verification

Dongseok Jang, Zachary Tatlock, and Sorin Lerner, University of California, San Diego

Web browsers mediate access to valuable private data in domains ranging from health care to banking. Despite this critical role, attackers routinely exploit browser vulnerabilities to exfiltrate private data and take over the underlying system. We present QUARK, a browser whose kernel has been implemented and verified in Coq. We give a specification of our kernel, show that the implementation satisfies the specification, and finally show that the specification implies several security properties, including tab non-interference, cookie integrity and confidentiality, and address bar integrity.

Available Media

The Brain

Grand DEFGH

Session Chair: David Molnar, Microsoft Research

Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks

Hristo Bojinov, Stanford University; Daniel Sanchez and Paul Reber, Northwestern University; Dan Boneh, Stanford University; Patrick Lincoln, SRI

Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.

Available Media

On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces

Ivan Martinovic, University of Oxford; Doug Davies, Mario Frank, and Daniele Perito, University of California, Berkeley; Tomas Ros, University of Geneva; Dawn Song, University of California, Berkeley

Brain computer interfaces (BCI) are becoming increasingly popular in the gaming and entertainment industries. Consumer-grade BCI devices are available for a few hundred dollars and are used in a variety of applications, such as video games, hands-free keyboards, or as an assistant in relaxation training. There are application stores similar to the ones used for smart phones, where application developers have access to an API to collect data from the BCI devices.

The security risks involved in using consumer-grade BCI devices have never been studied and the impact of malicious software with access to the device is unexplored. We take a first step in studying the security implications of such devices and demonstrate that this upcoming technology could be turned against users to reveal their private and secret information. We use inexpensive electroencephalography (EEG) based BCI devices to test the feasibility of simple, yet effective, attacks. The captured EEG signal could reveal the user’s private information about, e.g., bank cards, PINs, area of residence, and which people are known to them. This is the first attempt to study the security implications of consumer-grade BCI devices. We show that the entropy of the private information is decreased on average by approximately 15%–40% compared to random guessing attacks.
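
For a sense of the metric, the snippet below computes the relative Shannon-entropy reduction between uniform guessing and a hypothetical attacker posterior over ten PIN digits; the probabilities are invented for illustration.

```python
# Relative entropy reduction: how much an attacker's uncertainty about
# a secret drops compared to random guessing. Posterior is made up.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

# Uniform guess over 10 possible PIN digits: log2(10) ~= 3.32 bits.
uniform = [0.1] * 10
# Hypothetical posterior after observing the EEG response:
posterior = [0.40, 0.20, 0.10, 0.06, 0.06, 0.05, 0.05, 0.04, 0.02, 0.02]

reduction = 1 - entropy(posterior) / entropy(uniform)
print(f"entropy reduced by {reduction:.0%}")
```

A reduction in this illustrative range means the secret is still not fully revealed, but brute-force guessing becomes measurably cheaper.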

Available Media
5:30 p.m.–7:00 p.m. Wednesday

USENIX Security '12 Symposium Reception

Grand IJK/Grand North Foyer

Don't miss the USENIX Security '12 Symposium Reception, featuring dinner, drinks, and the chance to connect with other attendees, speakers, and conference organizers.

7:00 p.m.–8:30 p.m. Wednesday

USENIX Security '12 Rump Session

Grand DEFGH

We will host a rump session on the evening of Wednesday, August 8, 2012. This is intended as an informal session for short and engaging presentations on recent unpublished results, work in progress, or other topics of interest to the USENIX Security attendees. As in the past, talks do not always need to be serious. To submit a rump session talk, email sec12rump@usenix.org by Wednesday, August 8, 2012, at 11:59 a.m. PDT.

August 9, 2012

9:00 a.m.–10:30 a.m. Thursday

A Chance of Clouds

Grand ABC

Session Chair: Kirill Levchenko, University of California, San Diego

Whispers in the Hyper-space: High-speed Covert Channel Attacks in the Cloud

Zhenyu Wu, Zhang Xu, and Haining Wang, The College of William and Mary

Information security and privacy in general are major concerns that impede enterprise adaptation of shared or public cloud computing. Specifically, the concern of virtual machine (VM) physical co-residency stems from the threat that hostile tenants can leverage various forms of side channels (such as cache covert channels) to exfiltrate sensitive information of victims on the same physical system. However, on virtualized x86 systems, covert channel attacks have not yet proven to be practical, and thus the threat is widely considered a “potential risk”. In this paper, we present a novel covert channel attack that is capable of high-bandwidth and reliable data transmission in the cloud. We first study the application of existing cache channel techniques in a virtualized environment, and uncover their major insufficiency and difficulties. We then overcome these obstacles by (1) redesigning a pure timing-based data transmission scheme, and (2) exploiting the memory bus as a high-bandwidth covert channel medium. We further design and implement a robust communication protocol, and demonstrate realistic covert channel attacks on various virtualized x86 systems. Our experiments show that covert channels do pose serious threats to information security in the cloud. Finally, we discuss our insights on covert channel mitigation in virtualized environments.
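
A toy simulation conveys the encoding idea (real attacks measure cache or memory-bus contention; all latency numbers here are synthetic): the sender signals each bit by either contending for a shared resource or staying idle, and the receiver decodes by thresholding its probe latencies.

```python
# Simulated timing-based covert channel. A 1-bit causes contention
# (slow probes), a 0-bit leaves the resource idle (fast probes).
import random

def transmit(bits, noise=0.05, rng=random.Random(0)):
    """Return the receiver's probe latency for each sent bit."""
    latencies = []
    for b in bits:
        base = 1.0 if b else 0.2          # contention vs. idle
        latencies.append(base + rng.gauss(0, noise))
    return latencies

def decode(latencies, threshold=0.6):
    return [1 if t > threshold else 0 for t in latencies]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = decode(transmit(message))
print(received == message)
```

The paper's pure timing-based redesign addresses exactly the part this toy glosses over: keeping the two latency distributions separable under the scheduling noise of a real hypervisor.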

Available Media

Policy-Sealed Data: A New Abstraction for Building Trusted Cloud Services

Nuno Santos, MPI-SWS; Rodrigo Rodrigues, CITI/Universidade Nova de Lisboa; Krishna P. Gummadi, MPI-SWS; Stefan Saroiu, Microsoft Research 

Accidental or intentional mismanagement of cloud software by administrators poses a serious threat to the integrity and confidentiality of customer data hosted by cloud services. Trusted computing provides an important foundation for designing cloud services that are more resilient to these threats. However, current trusted computing technology is ill-suited to the cloud as it exposes too many internal details of the cloud infrastructure, hinders fault tolerance and load-balancing flexibility, and performs poorly. We present Excalibur, a system that addresses these limitations by enabling the design of trusted cloud services. Excalibur provides a new trusted computing abstraction, called policy-sealed data, that lets data be sealed (i.e., encrypted to a customer-defined policy) and then unsealed (i.e., decrypted) only by nodes whose configurations match the policy. To provide this abstraction, Excalibur uses attribute-based encryption, which reduces the overhead of key management and improves the performance of the distributed protocols employed. To demonstrate that Excalibur is practical, we incorporated it in the Eucalyptus open-source cloud platform. Policy-sealed data can provide greater confidence to Eucalyptus customers that their data is not being mismanaged.
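
A much-simplified sketch of the policy-sealed data abstraction: Excalibur itself relies on attribute-based encryption and TPM-backed attestation, but a trusted in-process "monitor" can stand in for both to show the seal/unseal contract. All attribute names are illustrative.

```python
# Toy policy-sealed data: a trusted monitor releases a sealing key only
# to nodes whose (attested) configuration satisfies the policy.
import os

class Monitor:
    def __init__(self):
        self._keys = {}

    def seal(self, policy):
        """Create and remember a key bound to a customer policy."""
        key = os.urandom(32)
        self._keys[frozenset(policy.items())] = key
        return key

    def unseal(self, policy, node_config):
        """Release the key only if every policy attribute matches."""
        if all(node_config.get(k) == v for k, v in policy.items()):
            return self._keys[frozenset(policy.items())]
        raise PermissionError("node configuration violates policy")

monitor = Monitor()
policy = {"zone": "EU", "hypervisor": "audited-xen-4.1"}
key = monitor.seal(policy)

good_node = {"zone": "EU", "hypervisor": "audited-xen-4.1", "cores": 8}
assert monitor.unseal(policy, good_node) == key
```

Attribute-based encryption lets Excalibur enforce the same check cryptographically, without a single online monitor in the critical path.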

Available Media

STEALTHMEM: System-Level Protection Against Cache-Based Side Channel Attacks in the Cloud

Taesoo Kim, MIT CSAIL; Marcus Peinado and Gloria Mainar-Ruiz, Microsoft Research

Cloud services are rapidly gaining adoption due to the promises of cost efficiency, availability, and on-demand scaling. To achieve these promises, cloud providers share physical resources to support multi-tenancy of cloud platforms. However, the possibility of sharing the same hardware with potential attackers makes users reluctant to offload sensitive data into the cloud. Worse yet, researchers have demonstrated side channel attacks via shared memory caches to break full encryption keys of AES, DES, and RSA.

We present STEALTHMEM, a system-level protection mechanism against cache-based side channel attacks in the cloud. STEALTHMEM manages a set of locked cache lines per core, which are never evicted from the cache, and efficiently multiplexes them so that each VM can load its own sensitive data into the locked cache lines. Thus, any VM can hide memory access patterns on confidential data from other VMs. Unlike existing state-of-the-art mitigation methods, STEALTHMEM works with existing commodity hardware and does not require profound changes to application software. We also present a novel idea and prototype for isolating cache lines while fully utilizing memory by exploiting architectural properties of set-associative caches. STEALTHMEM imposes a 5.9% performance overhead on the SPEC 2006 CPU benchmark, and between 2% and 5% overhead on secured AES, DES and Blowfish, requiring only between 3 and 34 lines of code changes from the original implementations.

Available Media

Embedded Security

Grand DEFGH

Session Chair: Srdjan Capkun, ETH Zurich

TARDIS: Time and Remanence Decay in SRAM to Implement Secure Protocols on Embedded Devices without Clocks

Amir Rahmati and Mastooreh Salajegheh, University of Massachusetts Amherst; Dan Holcomb, University of California, Berkeley; Jacob Sorber, Dartmouth College; Wayne P. Burleson and Kevin Fu, University of Massachusetts Amherst

Lack of a locally trustworthy clock makes security protocols challenging to implement on batteryless embedded devices such as contact smartcards, contactless smartcards, and RFID tags. A device that knows how much time has elapsed between queries from an untrusted reader could better protect against attacks that depend on the existence of a rate-unlimited encryption oracle.

The TARDIS (Time and Remanence Decay in SRAM) helps locally maintain a sense of time elapsed without power and without special-purpose hardware. The TARDIS software computes the expiration state of a timer by analyzing the decay of existing on-chip SRAM. The TARDIS enables coarse-grained, hourglass-like timers such that cryptographic software can more deliberately decide how to throttle its response rate. Our experiments demonstrate that the TARDIS can measure time ranging from seconds to several hours depending on hardware parameters. Key challenges to implementing a practical TARDIS include compensating for temperature and handling variation across hardware.

Our contributions are (1) the algorithmic building blocks for computing elapsed time from SRAM decay; (2) characterizing TARDIS behavior under different temperatures, capacitors, SRAM sizes, and chips; and (3) three proof-of-concept implementations that use the TARDIS to enable privacy-preserving RFID tags, to deter double swiping of contactless credit cards, and to increase the difficulty of brute-force attacks against e-passports.
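
The first contribution can be sketched as a toy model: write a known pattern to SRAM, then later estimate how long power was off from the fraction of cells that have decayed back to their preferred state. The exponential decay constant below is invented; real decay varies with temperature and chip, which is precisely the calibration challenge the paper addresses.

```python
# Toy TARDIS: elapsed-time estimation from simulated SRAM decay.
# TAU is a hypothetical decay constant, not a measured value.
import math
import random

TAU = 60.0  # hypothetical decay time constant, seconds

def decayed_fraction(elapsed, n_cells=4096, rng=random.Random(42)):
    """Simulate reading the SRAM after `elapsed` seconds unpowered."""
    p = 1 - math.exp(-elapsed / TAU)
    return sum(rng.random() < p for _ in range(n_cells)) / n_cells

def estimate_elapsed(fraction):
    """Invert the decay model to recover elapsed time."""
    return -TAU * math.log(1 - fraction)

measured = decayed_fraction(elapsed=30.0)
print(round(estimate_elapsed(measured)))
```

Because the decay curve flattens out, such a timer is inherently coarse-grained and hourglass-like, which matches the throttling use case the paper targets.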

Available Media
10:30 a.m.–11:00 a.m. Thursday

Break

Grand Ballroom Foyer

11:00 a.m.–12:30 p.m. Thursday

Secure Computation and PIR

Grand ABC

Session Chair: Nikita Borisov, University of Illinois at Urbana-Champaign

Taking Proof-Based Verified Computation a Few Steps Closer to Practicality

Srinath Setty, Victor Vu, Nikhil Panpalia, Benjamin Braun, Andrew J. Blumberg, and Michael Walfish, The University of Texas at Austin

We describe GINGER, a built system for unconditional, general-purpose, and nearly practical verification of outsourced computation. GINGER is based on PEPPER, which uses the PCP theorem and cryptographic techniques to implement an efficient argument system (a kind of interactive protocol). GINGER slashes the query size and costs via theoretical refinements that are of independent interest; broadens the computational model to include (primitive) floating-point fractions, inequality comparisons, logical operations, and conditional control flow; and includes a parallel GPU-based implementation that dramatically reduces latency.

Available Media

Optimally Robust Private Information Retrieval

Casey Devet and Ian Goldberg, University of Waterloo; Nadia Heninger, University of California, San Diego

We give a protocol for multi-server information-theoretic private information retrieval which achieves the theoretical limit for Byzantine robustness. That is, the protocol can allow a client to successfully complete queries and identify server misbehavior in the presence of the maximum possible number of malicious servers. We have implemented our scheme and it is extremely fast in practice: up to thousands of times faster than previous work. We achieve these improvements by using decoding algorithms for error-correcting codes that take advantage of the practical scenario where the client is interested in multiple blocks of the database.
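
The setting can be illustrated with the classic two-server XOR scheme that information-theoretic PIR builds on: each server receives a random-looking index set, the two sets differ only in the desired index, and XORing the two answers yields the target block. Neither server alone learns anything about the query; the Byzantine-robustness layer that is this paper's contribution is omitted from the sketch.

```python
# Classic two-server information-theoretic PIR (no robustness layer).
import os
import random

BLOCK = 8  # bytes per database block

def answer(database, q):
    """Server's reply: XOR of the requested blocks."""
    out = bytes(BLOCK)
    for j in sorted(q):
        out = bytes(x ^ y for x, y in zip(out, database[j]))
    return out

def query(n, i, rng=random.Random()):
    """Two index sets whose symmetric difference is exactly {i}."""
    q1 = {j for j in range(n) if rng.random() < 0.5}
    return q1, q1 ^ {i}

database = [os.urandom(BLOCK) for _ in range(16)]
q1, q2 = query(len(database), i=5)
recovered = bytes(x ^ y for x, y in zip(answer(database, q1),
                                        answer(database, q2)))
assert recovered == database[5]
```

With more servers, the same idea generalizes to Shamir-style sharing, and the error-correcting decoding the paper uses is what tolerates servers that answer incorrectly.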

Available Media

Billion-Gate Secure Computation with Malicious Adversaries

Benjamin Kreuter, abhi shelat, and Chih-hao Shen, University of Virginia

The goal of this paper is to assess the feasibility of two-party secure computation in the presence of a malicious adversary. Prior work has shown the feasibility of billion-gate circuits in the semi-honest model, but only the 35k-gate AES circuit in the malicious model, in part because security in the malicious model is much harder to achieve. We show that by incorporating the best known techniques and parallelizing almost all steps of the resulting protocol, evaluating billion-gate circuits is feasible in the malicious model. Our results are in the standard model (i.e., no common reference strings or PKIs) and, in contrast to prior work, we do not use the random oracle model which has well-established theoretical shortcomings.

Available Media

Invited Talk

Grand DEFGH

Cyber Security Education in a Time of Change

Speaker: Cynthia E. Irvine, Naval Postgraduate School, Monterey, CA

The call is out and it is vast. Across the board, personnel are being sought for the cyber security workforce. Academe is faced with the tremendous challenge of meeting this demand. Yet there are many questions, among them: Is cybersecurity an academic subject or an art form requiring apprenticeship? Where will we find students in the face of the dearth of STEM students? How will we attract them to the field? How do we attract underrepresented groups? Can programs balance training and education? Do we teach constructive or reactive security? How important are labs? This talk will summarize our progress and explore prospects in cyber security education.

Cynthia E. Irvine is the Chair of the Cyber Academic Group at the Naval Postgraduate School, one of four National Security Agency designated Centers of Academic Excellence in Cyber Operations. She is Director of the Center for Information Systems Security Studies and Research (CISR) and is a Professor of Computer Science. Her research centers on high assurance developmental and multilevel security. She is an author on over 160 papers and reports and has supervised the research of over 140 Master's and PhD students. Dr. Irvine is a recipient of the Naval Information Assurance Award and the William Hugh Murray Founder’s Award from the Colloquium for Information Systems Security Education.

Available Media
12:30 p.m.–2:00 p.m. Thursday
2:00 p.m.–3:30 p.m. Thursday

Authentication and Secure Deletion

Grand ABC

Session Chair: Roxana Geambasu, Columbia University

Progressive Authentication: Deciding When to Authenticate on Mobile Phones

Oriana Riva, Microsoft Research; Chuan Qin, University of South Carolina; Karin Strauss and Dimitrios Lymberopoulos, Microsoft Research

Mobile users are often faced with a trade-off between security and convenience. Either users do not use any security lock and risk compromising their data, or they use security locks but then have to inconveniently authenticate every time they use the device. Rather than exploring a new authentication scheme, we address the problem of deciding when to surface authentication and for which applications. We believe reducing the number of times a user is requested to authenticate lowers the barrier of entry for users who currently do not use any security. Progressive authentication, the approach we propose, combines multiple signals (biometric, continuity, possession) to determine a level of confidence in a user’s authenticity. Based on this confidence level and the degree of protection the user has configured for his applications, the system determines whether access to them requires authentication. We built a prototype running on modern phones to demonstrate progressive authentication and used it in a lab study with nine users. Compared to the state-of-the-art, the system is able to reduce the number of required authentications by 42% and still provide acceptable security guarantees, thus representing an attractive solution for users who do not use any security mechanism on their devices.
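
The decision rule at the core of this design can be sketched as a weighted fusion of signals checked against a per-application protection level; the signal names, weights, and thresholds below are illustrative, not the paper's.

```python
# Toy progressive authentication: fuse weak signals into a confidence
# score; prompt only when an app demands more confidence than we have.

WEIGHTS = {"face_match": 0.5, "voice_match": 0.3, "continuously_held": 0.2}

def confidence(signals):
    """signals: dict of signal name -> value in [0, 1]."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def must_authenticate(app_protection_level, signals):
    """app_protection_level in [0, 1]; 1.0 means always prompt."""
    return confidence(signals) < app_protection_level

signals = {"face_match": 0.9, "continuously_held": 1.0}
print(must_authenticate(0.3, signals))   # game: low protection -> False
print(must_authenticate(0.9, signals))   # banking app -> True
```

The effect is that low-value apps open without a prompt while high-value ones still demand explicit authentication when confidence is insufficient.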

Available Media

Origin-Bound Certificates: A Fresh Approach to Strong Client Authentication for the Web

Michael Dietz, Rice University; Alexei Czeskis, University of Washington; Dirk Balfanz, Google Inc.; Dan S. Wallach, Rice University

Client authentication on the web has remained in the internet-equivalent of the stone ages for the last two decades. Instead of adopting modern public-key-based authentication mechanisms, we seem to be stuck with passwords and cookies.

In this paper, we propose to break this stalemate by presenting a fresh approach to public-key-based client authentication on the web. We describe a simple TLS extension that allows clients to establish strong authenticated channels with servers and to bind existing authentication tokens like HTTP cookies to such channels. This allows much of the existing infrastructure of the web to remain unchanged, while at the same time strengthening client authentication considerably against a wide range of attacks.

We implemented our system in Google Chrome and Google’s web serving infrastructure, and provide a performance evaluation of this implementation.

Available Media

Data Node Encrypted File System: Efficient Secure Deletion for Flash Memory

Joel Reardon, Srdjan Capkun, and David Basin, ETH Zurich

We propose the Data Node Encrypted File System (DNEFS), which uses on-the-fly encryption and decryption of file system data nodes to efficiently and securely delete data on flash memory systems. DNEFS is a generic modification of existing flash file systems or controllers that enables secure data deletion while preserving the underlying systems’ desirable properties: application-independence, fine-grained data access, wear-levelling, and efficiency.

We describe DNEFS both abstractly and in the context of the flash file system UBIFS. We propose UBIFSec, which integrates DNEFS into UBIFS. We implement UBIFSec by extending UBIFS’s Linux implementation and we integrate UBIFSec in the Android operating system running on a Google Nexus One smartphone. We show that it is efficient and usable; Android OS and applications (including video and audio playback) run normally on top of UBIFSec. To the best of our knowledge, this work presents the first comprehensive and fully-implemented secure deletion solution that works within the specification of flash memory.
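
The core idea can be sketched as follows: each data node is encrypted under its own small key kept in an efficiently erasable key area, so securely deleting a node reduces to purging its key rather than scrubbing the flash blocks that hold the data. This sketch uses a toy SHA-256-based stream cipher purely for illustration; UBIFSec's actual on-flash layout and cryptography differ.

```python
import hashlib
import os

def keystream(key, length):
    """Toy SHA-256 counter-mode keystream (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

class DataNodeStore:
    """Flash-like store: ciphertexts are never erased in place, but
    each node's key lives in a small, easily erasable key area."""

    def __init__(self):
        self.nodes = {}   # node_id -> ciphertext (stays on "flash")
        self.keys = {}    # node_id -> per-node key (key storage area)

    def write(self, node_id, data):
        key = os.urandom(16)
        self.keys[node_id] = key
        self.nodes[node_id] = xor(data, keystream(key, len(data)))

    def read(self, node_id):
        ct = self.nodes[node_id]
        return xor(ct, keystream(self.keys[node_id], len(ct)))

    def secure_delete(self, node_id):
        # Purging the key suffices: the ciphertext remaining on flash
        # is unreadable without it.
        del self.keys[node_id]
```

Note how secure deletion never touches the (expensive-to-erase) node data itself, which is what preserves wear-levelling and efficiency.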

Available Media

Invited Talk

Grand DEFGH

Life as a Target—Welcome to the Club

Speaker: Dickie George, Johns Hopkins Applied Physics Laboratory

As a cryptographer for 40 years, Mr. George recognized that he was a target of foreign intelligence, and lived under those conditions. As the world has changed—communications systems, the Internet, online life (banking, shopping, social life)—the set of targets, the type of information that is sought, and the adversary have all changed significantly. We’ll discuss those changes, how they impact us all, and how today’s cyber professionals need to address this threat to protect government, industry, and individuals.

Dickie George currently serves as the Senior Advisor for Cyber Security at the Johns Hopkins University Applied Physics Laboratory. He provides security advice to leadership and the technical workforce, and currently works on a number of projects in support of the U.S. government. He joined the Lab in 2012, following a 41-year career at NSA, the last 8 years of which he served as Technical Director for the Information Assurance Directorate. At NSA, Dickie was a cryptomathematician and author of over 125 cryptanalytic papers.

Available Media
3:30 p.m.–4:00 p.m. Thursday

Break

Grand Ballroom Foyer

4:00 p.m.–5:30 p.m. Thursday

Privacy Enhancing Technologies and Network Traffic Analysis

Grand ABC

Session Chair: Ian Goldberg, University of Waterloo

Throttling Tor Bandwidth Parasites

Rob Jansen and Paul Syverson, U.S. Naval Research Laboratory; Nicholas Hopper, University of Minnesota

Tor is vulnerable to network congestion and performance problems due to bulk data transfers. A large fraction of the available network capacity is consumed by a small percentage of Tor users, resulting in severe service degradation for the majority. Bulk users continuously drain relays of excess bandwidth, creating new network bottlenecks and exacerbating the effects of existing ones. While this problem may currently be attributed to rational users utilizing the network, it may also be exploited by a relatively low-resource adversary using similar techniques to contribute to a network denial of service (DoS) attack. Degraded service discourages the use of Tor, affecting both Tor’s client diversity and anonymity.

Equipped with mechanisms from communication networks, we design and implement three Tor-specific algorithms that throttle bulk transfers to reduce network congestion and increase network responsiveness. Unlike existing techniques, our algorithms adapt to network dynamics using only information local to a relay. We experiment with full-network deployments of our algorithms under a range of light to heavy network loads. We find that throttling results in significant improvements to web client performance while mitigating the negative effects of bulk transfers. We also analyze how throttling affects anonymity and compare the security of our algorithms under adversarial attack. We find that throttling reduces information leakage compared to unthrottled Tor while improving anonymity against realistic adversaries.
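
A token bucket is the classic communication-networks building block for this kind of throttling. A minimal sketch follows; the rate and burst values are invented, and the paper's algorithms additionally adapt such parameters from relay-local information:

```python
class TokenBucket:
    """Throttle a connection to `rate` bytes/second with bursts of up
    to `burst` bytes. Times are supplied by the caller for testability."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True        # forward the data
        return False           # hold it back until tokens refill
```

A bulk transfer drains the bucket quickly and is then paced at the refill rate, while a bursty web client usually finds the bucket full and is unaffected.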

Available Media

Chimera: A Declarative Language for Streaming Network Traffic Analysis

Kevin Borders, National Security Agency; Jonathan Springer, Reservoir Labs; Matthew Burnside, National Security Agency

Intrusion detection systems play a vital role in network security. Central to these systems is the language used to express policies. Ideally, this language should be powerful, implementation-agnostic, and cross-platform. Unfortunately, today’s popular intrusion detection systems fall short of this goal. Each has their own policy language in which expressing complicated logic requires implementation-specific code. Database systems have adapted SQL to handle streaming data, but have yet to achieve the efficiency and flexibility required for complex intrusion detection tasks.

In this paper, we introduce Chimera, a declarative query language for network traffic processing that bridges the gap between powerful intrusion detection systems and a simple, platform-independent SQL syntax. Chimera extends streaming SQL languages to better handle network traffic by adding structured data types, first-class functions, and dynamic window boundaries. We show how these constructs can be applied to real-world scenarios, such as side-jacking detection and DNS feature extraction. Finally, we describe the implementation and evaluation of a compiler that translates Chimera queries into low-level code for the Bro event language.

Available Media

New Attacks on Timing-based Network Flow Watermarks

Zi Lin and Nicholas Hopper, University of Minnesota

A network flow watermarking scheme attempts to manipulate the statistical properties of a flow of packets to insert a “mark” making it easier to detect the flow after passing through one or more relay hosts. Because an attacker that is willing to tolerate delay can (nearly) always eliminate such marks, recent schemes have concentrated on making the marks “invisible” so that a passive attacker cannot detect the presence of the mark. In this work, we argue that from a system’s perspective, security against passive detection is insufficient for successful traffic analysis. We introduce a stronger, but feasible attack model (a known/chosen flow attacker) and a second security goal (security against copy attacks) and argue that security against both of these attacks is required for successful traffic analysis. We also demonstrate successful attacks against two recent watermarking schemes, RAINBOW and SWIRL, and show how considering these stronger attacks can aid in the design of passive detection attacks against each as well.

Available Media

Invited Talk

Grand DEFGH

Nature is a Language; Can't You Read?

Speaker: Meredith L. Patterson, DIYbiologist and Senior Research Scientist, Red Lambda

Thirty years ago, almost no one could have imagined that computers would one day be so commonplace that enough CPU power to launch an Apollo mission would fit easily into a trouser pocket—except for the handful of visionaries who put Silicon Valley on the map, working from their garages to bring about the dream of personal computing. Garage engineering is still alive and well today, but its boundaries have expanded from silicon to carbon. Across the globe, hundreds of curious hackers have taken up the challenge of bringing bioengineering out of the ivory tower and into the hands of citizen scientists. From "weather maps" that chart the biodiversity of microbes, to open-source lab equipment, to crowdsourced reagent exchanges, all the way to novel techniques for genetically modifying organisms, DIYbiologists are rapidly redefining the landscape of biological research. In this talk, we'll explore how the DIYbio community emerged, its current state of the art, its near-term goals, and its impact on our understanding of the chemical language that encodes the living world around us.

Available Media
6:00 p.m.–7:30 p.m. Thursday

Poster Session and Happy Hour

Grand North Foyer

The poster session is the perfect venue for new or ongoing work. Poster presenters will have the entirety of the evening happy hour to discuss their work, get exposure, and receive feedback from attendees.

August 10, 2012

9:00 a.m.–10:30 a.m. Friday

Web Security

Grand ABC

Session Chair: Christian Kreibich, International Computer Science Institute

On Breaking SAML: Be Whoever You Want to Be

Juraj Somorovsky, Ruhr-University Bochum; Andreas Mayer, Adolf Würth GmbH & Co. KG; Jörg Schwenk, Marco Kampmann, and Meiko Jensen, Ruhr-University Bochum

The Security Assertion Markup Language (SAML) is a widely adopted language for making security statements about subjects. It is a critical component for the development of federated identity deployments and Single Sign-On scenarios. In order to protect integrity and authenticity of the exchanged SAML assertions, the XML Signature standard is applied. However, the signature verification algorithm is much more complex than in traditional signature formats like PKCS#7. The integrity protection can thus be successfully circumvented by application of different XML Signature specific attacks, under a weak adversarial model.

In this paper we describe an in-depth analysis of 14 major SAML frameworks and show that 11 of them, including Salesforce, Shibboleth, and IBM XS40, have critical XML Signature wrapping (XSW) vulnerabilities. Based on our analysis, we developed an automated penetration testing tool for XSW in SAML frameworks. Its feasibility was proven by additional discovery of a new XSW variant. We propose the first framework to analyze such attacks, which is based on the information flow between two components of the Relying Party. Surprisingly, this analysis also yields efficient and practical countermeasures.

Available Media

Clickjacking: Attacks and Defenses

Lin-Shung Huang, Carnegie Mellon University; Alex Moshchuk, Helen J. Wang, and Stuart Schechter, Microsoft Research; Collin Jackson, Carnegie Mellon University

Clickjacking attacks are an emerging threat on the web. In this paper, we design new clickjacking attack variants using existing techniques and demonstrate that existing clickjacking defenses are insufficient. Our attacks show that clickjacking can cause severe damages, including compromising a user’s private webcam, email or other private data, and web surfing anonymity.

We observe the root cause of clickjacking is that an attacker application presents a sensitive UI element of a target application out of context to a user (such as hiding the sensitive UI by making it transparent), and hence the user is tricked to act out of context. To address this root cause, we propose a new defense, InContext, in which web sites (or applications) mark UI elements that are sensitive, and browsers (or OSes) enforce context integrity of user actions on these sensitive UI elements, ensuring that a user sees everything she should see before her action and that the timing of the action corresponds to her intent.

We have conducted user studies on Amazon Mechanical Turk with 2064 participants to evaluate the effectiveness of our attacks and our defense. We show that our attacks have success rates ranging from 43% to 98%, and our InContext defense can be very effective against the clickjacking attacks in which the use of clickjacking is more effective than social engineering.

Available Media

Privilege Separation in HTML5 Applications

Devdatta Akhawe, Prateek Saxena, and Dawn Song, University of California, Berkeley

The standard approach for privilege separation in web applications is to execute application components in different web origins. This limits the practicality of privilege separation since each web origin has financial and administrative cost. In this paper, we propose a new design for achieving effective privilege separation in HTML5 applications that shows how applications can cheaply create an arbitrary number of components. Our approach utilizes standardized abstractions already implemented in modern browsers. We do not advocate any changes to the underlying browser or require learning new high-level languages, which contrasts prior approaches. We empirically show that we can retrofit our design to real-world HTML5 applications (browser extensions and rich client-side applications) and achieve a reduction of 6x to 10000x in TCB for our case studies. Our mechanism requires less than 13 lines of application-specific code changes and considerably improves auditability.

Available Media

Software Security I

Grand DEFGH

Session Chair: Mihai Christodorescu, IBM T.J. Watson Research Center

Fuzzing with Code Fragments

Christian Holler, Mozilla Corporation; Kim Herzig and Andreas Zeller, Saarland University

Fuzz testing is an automated technique providing random data as input to a software system in the hope to expose a vulnerability. In order to be effective, the fuzzed input must be common enough to pass elementary consistency checks; a JavaScript interpreter, for instance, would only accept a semantically valid program. On the other hand, the fuzzed input must be uncommon enough to trigger exceptional behavior, such as a crash of the interpreter. The LangFuzz approach resolves this conflict by using a grammar to randomly generate valid programs; the code fragments, however, partially stem from programs known to have caused invalid behavior before. LangFuzz is an effective tool for security testing: Applied on the Mozilla JavaScript interpreter, it discovered a total of 105 new severe vulnerabilities within three months of operation (and thus became one of the top security bug bounty collectors within this period); applied on the PHP interpreter, it discovered 18 new defects causing crashes.
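
The fragment-recombination idea can be sketched with a toy grammar. Everything below is invented for illustration (the grammar, the fragments, and the splice probability); LangFuzz itself works on the full JavaScript and PHP grammars:

```python
import random
import re

# Toy grammar; LangFuzz uses real language grammars.
GRAMMAR = {
    "<stmt>": ["var x = <expr>;", "if (<expr>) { <stmt> }"],
    "<expr>": ["1", "x", "<expr> + <expr>"],
}

# Fragments harvested from test cases that previously caused failures.
CRASH_FRAGMENTS = {"<stmt>": ["x.__proto__ = x;"], "<expr>": ["(0/0)"]}

def expand(template, rng, depth):
    # Recursively replace every nonterminal in the chosen production.
    return re.sub(r"<\w+>", lambda m: generate(m.group(0), rng, depth + 1), template)

def generate(symbol, rng, depth=0):
    pool = GRAMMAR[symbol]
    if symbol in CRASH_FRAGMENTS and rng.random() < 0.3:
        pool = CRASH_FRAGMENTS[symbol]   # splice in a learned fragment
    elif depth > 4:
        pool = [min(pool, key=len)]      # bottom out with the shortest rule
    return expand(rng.choice(pool), rng, depth)
```

Generated inputs are grammatically valid (they pass the interpreter's parser), while the spliced fragments bias generation toward constructs that have broken the interpreter before.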

Available Media

kGuard: Lightweight Kernel Protection against Return-to-User Attacks

Vasileios P. Kemerlis, Georgios Portokalidis, and Angelos D. Keromytis, Columbia University

Return-to-user (ret2usr) attacks exploit the operating system kernel, enabling local users to hijack privileged execution paths and execute arbitrary code with elevated privileges. Current defenses have proven to be inadequate, as they have been repeatedly circumvented, incur considerable overhead, or rely on extended hypervisors and special hardware features. We present kGuard, a compiler plugin that augments the kernel with compact inline guards, which prevent ret2usr with low performance and space overhead. kGuard can be used with any operating system that features a weak separation between kernel and user space, requires no modifications to the OS, and is applicable to both 32- and 64-bit architectures. Our evaluation demonstrates that Linux kernels compiled with kGuard become impervious to a variety of control-flow hijacking exploits. kGuard exhibits lower overhead than previous work, imposing on average an overhead of 11.4% on system call and I/O latency on x86 OSs, and 10.3% on x86-64. The size of a kGuard-protected kernel grows between 3.5% and 5.6%, due to the inserted checks, while the impact on real-life applications is minimal (≤1%).

Available Media

Enhanced Operating System Security Through Efficient and Fine-grained Address Space Randomization

Cristiano Giuffrida, Anton Kuijsten, and Andrew S. Tanenbaum, Vrije Universiteit Amsterdam

In recent years, the deployment of many application-level countermeasures against memory errors and the increasing number of vulnerabilities discovered in the kernel has fostered a renewed interest in kernel-level exploitation. Unfortunately, no comprehensive and well-established mechanism exists to protect the operating system from arbitrary attacks, due to the relatively new development of the area and the challenges involved.

In this paper, we propose the first design for fine-grained address space randomization (ASR) inside the operating system (OS), providing an efficient and comprehensive countermeasure against classic and emerging attacks, such as return-oriented programming. To motivate our design, we investigate the differences with application-level ASR and find that some of the well-established assumptions in existing solutions are no longer valid inside the OS; above all, perhaps, that information leakage becomes a major concern in the new context. We show that our ASR strategy outperforms state-of-the-art solutions in terms of both performance and security without affecting the software distribution model. Finally, we present the first comprehensive live rerandomization strategy, which we found to be particularly important inside the OS. Experimental results demonstrate that our techniques yield low run-time performance overhead (less than 5% on average on both SPEC and syscall-intensive benchmarks) and limited run-time memory footprint increase (around 15% during the execution of our benchmarks). We believe our techniques can greatly enhance the level of OS security without compromising the performance and reliability of the OS.

Available Media
10:30 a.m.–11:00 a.m. Friday

Break

Grand Ballroom Foyer

11:00 a.m.–12:30 p.m. Friday

Botnets and Web Security

Grand ABC

Session Chair: Niels Provos, Google

From Throw-Away Traffic to Bots: Detecting the Rise of DGA-Based Malware

Manos Antonakakis, Damballa Inc. and Georgia Institute of Technology; Roberto Perdisci, University of Georgia and Georgia Institute of Technology; Yacin Nadji, Georgia Institute of Technology; Nikolaos Vasiloglou and Saeed Abu-Nimeh, Damballa Inc.; Wenke Lee and David Dagon, Georgia Institute of Technology

Many botnet detection systems employ a blacklist of known command and control (C&C) domains to detect bots and block their traffic. Similar to signature-based virus detection, such a botnet detection approach is static because the blacklist is updated only after running an external (and often manual) process of domain discovery. As a response, botmasters have begun employing domain generation algorithms (DGAs) to dynamically produce a large number of random domain names and select a small subset for actual C&C use. That is, a C&C domain is randomly generated and used for a very short period of time, thus rendering detection approaches that rely on static domain lists ineffective. Naturally, if we know how a domain generation algorithm works, we can generate the domains ahead of time and still identify and block botnet C&C traffic. The existing solutions are largely based on reverse engineering of the bot malware executables, which is not always feasible.

In this paper we present a new technique to detect randomly generated domains without reversing. Our insight is that most of the DGA-generated (random) domains that a bot queries would result in Non-Existent Domain (NXDomain) responses, and that bots from the same botnet (with the same DGA algorithm) would generate similar NXDomain traffic. Our approach uses a combination of clustering and classification algorithms. The clustering algorithm clusters domains based on the similarity in the make-ups of domain names as well as the groups of machines that queried these domains. The classification algorithm is used to assign the generated clusters to models of known DGAs. If a cluster cannot be assigned to a known model, then a new model is produced, indicating a new DGA variant or family. We implemented a prototype system and evaluated it on real-world DNS traffic obtained from large ISPs in North America. We report the discovery of twelve DGAs. Half of them are variants of known (botnet) DGAs, and the other half are brand new DGAs that have never been reported before.
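
One ingredient of that intuition, sketched here in isolation (the paper's clustering and classification pipeline is far richer): algorithmically generated labels tend to look statistically different from human-chosen names, for instance in character entropy. The entropy threshold below is invented for illustration:

```python
import math
from collections import Counter

def char_entropy(name):
    """Shannon entropy (bits/char) of the leftmost DNS label."""
    label = name.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_generated(name, threshold=3.0):
    # Threshold invented for illustration; the real system clusters
    # NXDomain traffic rather than thresholding single names.
    return char_entropy(name) > threshold
```

A single-name heuristic like this is noisy on its own; the strength of the approach in the paper comes from grouping many NXDomain responses by structural similarity and by the machines that queried them.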

Available Media

PUBCRAWL: Protecting Users and Businesses from CRAWLers

Gregoire Jacob, University of California, Santa Barbara/Telecom SudParis; Engin Kirda, Northeastern University; Christopher Kruegel and Giovanni Vigna, University of California, Santa Barbara

Web crawlers are automated tools that browse the web to retrieve and analyze information. Although crawlers are useful tools that help users to find content on the web, they may also be malicious. Unfortunately, unauthorized (malicious) crawlers are increasingly becoming a threat for service providers because they typically collect information that attackers can abuse for spamming, phishing, or targeted attacks. In particular, social networking sites are frequent targets of malicious crawling, and there were recent cases of scraped data sold on the black market and used for blackmailing.

In this paper, we introduce PUBCRAWL, a novel approach for the detection and containment of crawlers. Our detection is based on the observation that crawler traffic significantly differs from user traffic, even when many users are hidden behind a single proxy. Moreover, we present the first technique for crawler campaign attribution that discovers synchronized traffic coming from multiple hosts. Finally, we introduce a containment strategy that leverages our detection results to efficiently block crawlers while minimizing the impact on legitimate users. Our experimental results in a large, well-known social networking site (receiving tens of millions of requests per day) demonstrate that PUBCRAWL can distinguish between crawlers and users with high accuracy. We have completed our technology transfer, and the social networking site is currently running PUBCRAWL in production.

Available Media

Enemy of the State: A State-Aware Black-Box Web Vulnerability Scanner

Adam Doupé, Ludovico Cavedon, Christopher Kruegel, and Giovanni Vigna, University of California, Santa Barbara

Black-box web vulnerability scanners are a popular choice for finding security vulnerabilities in web applications in an automated fashion. These tools operate in a point-and-shoot manner, testing any web application—regardless of the server-side language—for common security vulnerabilities. Unfortunately, black-box tools suffer from a number of limitations, particularly when interacting with complex applications that have multiple actions that can change the application’s state. If a vulnerability analysis tool does not take into account changes in the web application’s state, it might overlook vulnerabilities or completely miss entire portions of the web application.

We propose a novel way of inferring the web application’s internal state machine from the outside—that is, by navigating through the web application, observing differences in output, and incrementally producing a model representing the web application’s state.

We utilize the inferred state machine to drive a black-box web application vulnerability scanner. Our scanner traverses a web application’s state machine to find and fuzz user-input vectors and discover security flaws. We implemented our technique in a prototype crawler and linked it to the fuzzing component from an open-source web vulnerability scanner.

We show that our state-aware black-box web vulnerability scanner is able to not only exercise more code of the web application, but also discover vulnerabilities that other vulnerability scanners miss.

Available Media

Invited Talk

Grand DEFGH

Emerging Trends and Their Security Implications

Speaker: Doug Burger, Director, Client and Cloud Applications, Microsoft

We are living in an era of accelerating change. We expect rapid, imminent changes in user interfaces, personal digital services, and the hardware ecosystem. All of these changes will have significant implications for security. These changes include tighter integration of computing with human senses, rapid increases in machines' understanding of the real world, the use of personal data to provide highly individualized services, and a massive change in the hardware ecosystem as we near the end of Moore's Law. I will describe these changes in more detail, and speculate on some of the possible security implications.

Available Media
12:30 p.m.–2:00 p.m. Friday
2:00 p.m.–3:30 p.m. Friday

Mobile Devices

Grand ABC

Session Chair: Will Enck, North Carolina State University

Aurasium: Practical Policy Enforcement for Android Applications

Rubin Xu, Computer Laboratory, University of Cambridge; Hassen Saïdi, Computer Science Laboratory, SRI International; Ross Anderson, Computer Laboratory, University of Cambridge

The increasing popularity of Google’s mobile platform Android makes it the prime target of the latest surge in mobile malware. Most research on enhancing the platform’s security and privacy controls requires extensive modification to the operating system, which has significant usability issues and hinders efforts for widespread adoption. We develop a novel solution called Aurasium that bypasses the need to modify the Android OS while providing much of the security and privacy that users desire. We automatically repackage arbitrary applications to attach user-level sandboxing and policy enforcement code, which closely watches the application’s behavior for security and privacy violations such as attempts to retrieve a user’s sensitive information, send SMS covertly to premium numbers, or access malicious IP addresses. Aurasium can also detect and prevent cases of privilege escalation attacks. Experiments show that we can apply this solution to a large sample of benign and malicious applications with a near 100 percent success rate, without significant performance and space overhead. Aurasium has been tested on three versions of the Android OS, and is freely available.

Available Media

AdSplit: Separating Smartphone Advertising from Applications

Shashi Shekhar, Michael Dietz, and Dan S. Wallach, Rice University

A wide variety of smartphone applications today rely on third-party advertising services, which provide libraries that are linked into the hosting application. This situation is undesirable for both the application author and the advertiser. Advertising libraries require their own permissions, resulting in additional permission requests to users. Likewise, a malicious application could simulate the behavior of the advertising library, forging the user’s interaction and stealing money from the advertiser. This paper describes AdSplit, where we extended Android to allow an application and its advertising to run as separate processes, under separate user-ids, eliminating the need for applications to request permissions on behalf of their advertising libraries, and providing services to validate the legitimacy of clicks, locally and remotely. AdSplit automatically recompiles apps to extract their ad services, and we measure minimal runtime overhead. AdSplit also supports a system resource that allows advertisements to display their content in an embedded HTML widget, without requiring any native code.

Available Media

DroidScope: Seamlessly Reconstructing the OS and Dalvik Semantic Views for Dynamic Android Malware Analysis

Lok Kwong Yan, Syracuse University and Air Force Research Laboratory; Heng Yin, Syracuse University

The prevalence of mobile platforms, the large market share of Android, plus the openness of the Android Market makes it a hot target for malware attacks. Once a malware sample has been identified, it is critical to quickly reveal its malicious intent and inner workings. In this paper we present DroidScope, an Android analysis platform that continues the tradition of virtualization-based malware analysis. Unlike current desktop malware analysis platforms, DroidScope reconstructs both the OS-level and Java-level semantics simultaneously and seamlessly. To facilitate custom analysis, DroidScope exports three tiered APIs that mirror the three levels of an Android device: hardware, OS and Dalvik Virtual Machine. On top of DroidScope, we further developed several analysis tools to collect detailed native and Dalvik instruction traces, profile API-level activity, and track information leakage through both the Java and native components using taint analysis. These tools have proven to be effective in analyzing real world malware samples and incur reasonably low performance overheads.

Available Media

Software Security II

Grand DEFGH

Session Chair: Alex Moshchuk, Microsoft Research

STING: Finding Name Resolution Vulnerabilities in Programs

Hayawardh Vijayakumar, Joshua Schiffman, and Trent Jaeger, The Pennsylvania State University

The process of name resolution, where names are resolved into resource references, is fundamental to computer science, but its use has resulted in several classes of vulnerabilities. These vulnerabilities are difficult for programmers to eliminate because their cause is external to the program: the adversary changes namespace bindings in the system to redirect victim programs to a resource of the adversary’s choosing. Researchers have also found that these attacks are very difficult to prevent systematically. Any successful defense must have knowledge of both the system namespace and the program's intent to eradicate such attacks. As a result, finding and fixing program vulnerabilities to such attacks is our best defense. In this paper, we propose the STING test engine, which finds name resolution vulnerabilities in programs by performing a dynamic analysis of name resolution processing to produce directed test cases whenever an attack may be possible. The key insight is that such name resolution attacks are possible whenever an adversary has write access to a directory shared with the victim, so STING automatically identifies when such directories will be accessed in name resolution to produce test cases that are likely to indicate a true vulnerability if undefended. Using STING, we found 21 previously-unknown vulnerabilities in a variety of Linux programs on Ubuntu and Fedora systems, demonstrating that comprehensive testing for name resolution vulnerabilities is practical.
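
The precondition driving the analysis can be sketched as a simple filesystem check: does any directory traversed during name resolution grant write access to users other than its owner? The helper below is an illustrative approximation, not the paper's tool:

```python
import os
import stat

def adversary_writable_dirs(path):
    """Directories along `path` in which another user could rebind
    names: group- or world-writable without the sticky bit set.
    (Illustrative approximation of the attack precondition only.)"""
    path = os.path.abspath(path)
    risky = []
    prefix = "/"
    for part in [p for p in path.split("/") if p]:
        mode = os.stat(prefix).st_mode
        writable_by_others = mode & (stat.S_IWGRP | stat.S_IWOTH)
        if writable_by_others and not (mode & stat.S_ISVTX):
            risky.append(prefix)
        prefix = os.path.join(prefix, part)
    return risky
```

Real attacks also arise in sticky-bit directories such as /tmp (via symlink and hard link tricks), which is why a directed, dynamic test engine is needed rather than a static permission scan like this one.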

Available Media

Tracking Rootkit Footprints with a Practical Memory Analysis System

Weidong Cui and Marcus Peinado, Microsoft Research; Zhilei Xu, Massachusetts Institute of Technology; Ellick Chan, University of Illinois at Urbana-Champaign

In this paper, we present MAS, a practical memory analysis system for identifying a kernel rootkit’s memory footprint in an infected system. We also present two large-scale studies of applying MAS to 848 real-world Windows kernel crash dumps and 154,768 potential malware samples.

Error propagation and invalid pointers are two key challenges that stop previous pointer-based memory traversal solutions from effectively and efficiently analyzing real-world systems. MAS uses a new memory traversal algorithm to support error correction and stop error propagation. Our enhanced static analysis allows the MAS memory traversal to avoid error-prone operations and provides it with a reliable partial type assignment.

Our experiments show that MAS was able to analyze all memory snapshots quickly, with typical running times between 30 and 160 seconds per snapshot, and with near-perfect accuracy. Our kernel malware study observes that the malware samples we tested hooked 191 different function pointers in 31 different data structures. With MAS, we were able to determine quickly that 95 out of the 848 crash dumps contained kernel rootkits.


Available Media

Tachyon: Tandem Execution for Efficient Live Patch Testing

Matthew Maurer and David Brumley, Carnegie Mellon University

The vast majority of security incidents are caused by exploits against vulnerabilities for which a patch is already available, but that users simply have not installed. Patch installation is often delayed because patches must be tested manually to make sure they do not introduce problems, especially at the enterprise level.

In this paper we propose a new tandem execution approach for automated patch testing. Our approach is based on a patch execution consistency model which maintains that a patch is safe to apply if the executions of the pre- and post-patch programs differ only on attack inputs. Tandem execution runs both the pre- and post-patch programs simultaneously in order to check for execution consistency. We have implemented our techniques in TACHYON, a system for online patch testing in Linux. TACHYON is able to automatically check and verify patches without source access.
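The consistency model above can be sketched in a few lines: run both versions of a program on the same inputs and report every input on which they diverge. This is only an illustration of the idea, not TACHYON's implementation (TACHYON runs the versions in lockstep rather than back to back, and operates on binaries); the stand-in commands below are hypothetical:

```python
import subprocess
import sys

def tandem_check(pre_cmd, post_cmd, inputs):
    """Run pre- and post-patch versions on the same inputs and report
    which inputs make their outputs diverge. Under the execution
    consistency model, a patch is safe if divergences occur only on
    attack inputs; here every divergence is surfaced for triage."""
    divergent = []
    for data in inputs:
        pre = subprocess.run(pre_cmd, input=data, capture_output=True)
        post = subprocess.run(post_cmd, input=data, capture_output=True)
        if (pre.stdout, pre.returncode) != (post.stdout, post.returncode):
            divergent.append(data)
    return divergent

# Stand-ins for the pre- and post-patch binaries: this "patch" upcases
# output, so only inputs containing letters diverge.
pre_cmd = [sys.executable, "-c",
           "import sys; sys.stdout.write(sys.stdin.read())"]
post_cmd = [sys.executable, "-c",
            "import sys; sys.stdout.write(sys.stdin.read().upper())"]
diffs = tandem_check(pre_cmd, post_cmd, [b"abc", b"123"])
print(diffs)  # → [b'abc']  (b"123" is unchanged by .upper())
```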


Available Media
3:30 p.m.–4:00 p.m. Friday

Break

Grand Ballroom Foyer

4:00 p.m.–5:30 p.m. Friday

Being Social

Grand ABC

Session Chair: Jaeyeon Jung, Microsoft Research

Privacy-Preserving Social Plugins

Georgios Kontaxis, Michalis Polychronakis, and Angelos D. Keromytis, Columbia University; Evangelos P. Markatos, FORTH-ICS

The widespread adoption of social plugins, such as Facebook’s Like and Google’s +1 buttons, has raised concerns about their implications to user privacy, as they enable social networking services to track a growing part of their members’ browsing activity. Existing mitigations in the form of browser extensions can prevent social plugins from tracking user visits, but inevitably disable any kind of content personalization, ruining the user experience.

In this paper we propose a novel design for privacy-preserving social plugins that decouples the retrieval of user-specific content from the loading of a social plugin. In contrast to existing solutions, this design preserves the functionality of existing social plugins by delivering the same personalized content, while it protects user privacy by avoiding the transmission of user-identifying information at load time. We have implemented our design in SafeButton, an add-on for Firefox that fully supports seven of the nine social plugins currently provided by Facebook, including the Like button, and, due to API restrictions, partially supports the other two. As privacy-preserving social plugins maintain the functionality of existing social plugins, we envisage that they could be adopted by social networking services themselves for the benefit of their members. To that end, we also present a pure JavaScript design that can be offered transparently as a service without the need to install any browser add-ons.

Available Media

Social Networking with Frientegrity: Privacy and Integrity with an Untrusted Provider

Ariel J. Feldman, Aaron Blankstein, Michael J. Freedman, and Edward W. Felten, Princeton University
Awarded Best Student Paper!  

Today’s social networking services require users to trust the service provider with the confidentiality and integrity of their data. But with their history of data leaks and privacy controversies, these services are not always deserving of this trust. Indeed, a malicious provider could not only violate users’ privacy, it could equivocate and show different users divergent views of the system’s state. Such misbehavior can lead to numerous harms including surreptitious censorship.

In light of these threats, this paper presents Frientegrity, a framework for social networking applications that can be realized with an untrusted service provider. In Frientegrity, a provider observes only encrypted data and cannot deviate from correct execution without being detected. Prior secure social networking systems have either been decentralized, sacrificing the availability and convenience of a centralized provider, or have focused almost entirely on users’ privacy while ignoring the threat of equivocation. On the other hand, existing systems that are robust to equivocation do not scale to the needs of social networking applications, in which users may have hundreds of friends and are mainly interested in the latest updates, not the thousands that may have come before.

To address these challenges, we present a novel method for detecting provider equivocation in which clients collaborate to verify correctness. In addition, we introduce an access control mechanism that offers efficient revocation and scales logarithmically with the number of friends. We present a prototype implementation demonstrating that Frientegrity provides latency and throughput that meet the needs of a realistic workload.
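Logarithmic scaling of revocation is typically achieved with a key tree: each friend holds the keys on their leaf-to-root path, so revoking one friend re-keys only that path rather than re-distributing a key to everyone. This sketch only illustrates the cost argument; Frientegrity's actual access-control construction differs:

```python
import math

def keytree_rekey_cost(num_friends):
    """Keys replaced to revoke one friend in a binary key tree:
    one key per level on the revoked leaf's path to the root."""
    return max(1, math.ceil(math.log2(num_friends)))

def pairwise_rekey_cost(num_friends):
    """Naive pairwise keying: revocation means sending a fresh
    group key to every remaining friend."""
    return num_friends - 1

print(keytree_rekey_cost(1024), pairwise_rekey_cost(1024))  # → 10 1023
```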

Available Media

Efficient and Scalable Socware Detection in Online Social Networks

Md Sazzadur Rahman, Ting-Kai Huang, Harsha V. Madhyastha, and Michalis Faloutsos, University of California, Riverside

Online social networks (OSNs) have become the new vector for cybercrime, and hackers are finding new ways to propagate spam and malware on these platforms, which we refer to as socware. As we show here, socware cannot be identified with existing security mechanisms (e.g., URL blacklists), because it exploits different weaknesses and often has different intentions.

In this paper, we present MyPageKeeper, a Facebook application that we have developed to protect Facebook users from socware. Here, we present results from the perspective of over 12K users who have installed MyPageKeeper and their roughly 2.4 million friends. Our work makes three main contributions. First, to enable protection of users at scale, we design an efficient socware detection method which takes advantage of the social context of posts. We find that our classifier is both accurate (97% of posts flagged by it are indeed socware and it incorrectly flags only 0.005% of benign posts) and efficient (it requires 46 ms on average to classify a post). Second, we show that socware significantly differs from traditional email spam or web-based malware. For example, website blacklists identify only 3% of the posts flagged by MyPageKeeper, while 26% of flagged posts point to malicious apps and pages hosted on Facebook (which no current antivirus or blacklist is designed to detect). Third, we quantify the prevalence of socware by analyzing roughly 40 million posts over four months; 49% of our users were exposed to at least one socware post in this period. Finally, we identify a new type of parasitic behavior, which we refer to as “Like-as-a-Service”, whose goal is to artificially boost the number of “Likes” of a Facebook page.


Available Media

Invited Talk

Grand DEFGH

Securing Early Software Development

Speaker: Riley Eller (Caezar), Security Strategist for Leviathan Security Group

In this talk, I will present a trusted advisor business model for smaller security firms, especially those with a handful of extremely strong contributors plus a larger staff. The model is novel and has been successfully adapted for a series of Seattle-area start-up firms plus one international hospitality brand. Benefits to the client firm, investors, and customers include improvements to product reliability, public relations, and frequency of emergency funding appeals. Benefits to the consultants are equally attractive: stable revenue, low-volume-high-skill work, high-volume-low-skill work, and a teachable system for converting technical expertise into recurring sales. By formalizing what is already often true, the trusted security advisor becomes a specific and profitable job description. I present the idea, the pitch, and then a fast-forward client onboarding process to help the audience seed their own inventive methods for selling security services to software developers.

Mr. Eller’s passion for computing began with early home machines and exploded in the vibrant bulletin board (BBS) days of the late 1980s. He grew up in the emerging network security community, learning from hackers across the globe. As an adult, he became a software engineer working on systems software, wireless routing protocols, graphics subsystems, small business management, criminal justice, cryptography, and compression. Through the DEFCON and BlackHat conference series, and his 16 annual Challenge parties, Caezar met and collaborated with some of the best minds of the last decade. His contributions to security include the first printable-character exploit encoding, the first public discussion of remedies for distributed denial of service, the (patented) first general-purpose fuzzer, a method for trust in decentralized networks (now mirrored by DNSSEC), plus several wireless networking inventions. Today, Riley is the Security Strategist for Leviathan Security Group, where he coaches start-up firms through maturing their secure software development lifecycle. As a trusted advisor to these small firms, Mr. Eller also helps to liaise with investors and major account clients.


Available Media