Workshop Program

All sessions will be held in the Auditorium unless otherwise noted.

August 6, 2012

8:30 a.m.–10:00 a.m. Monday

Testbed Technology for Cyber Security

A Secure Architecture for the Range-Level Command and Control System of a National Cyber Range Testbed

Michael Rosenstein and Frank Corvese, Applied Visions Inc., Secure Decisions Division

In recent years, cyber security researchers have become burdened by the time and cost necessary to instantiate secure testbeds suitable for analyzing new threats or evaluating emerging technologies [1]. To alleviate this, DARPA initiated the National Cyber Range (NCR) program to develop the architecture and software tools needed for a secure, self-contained cyber testing facility. Among NCR’s goals was the development of a range capable of rapid and automated reconfiguration of resources, broad scalability, and support for running simultaneous experiments at different security levels [2].

In this paper we present our architecture for the Range-level Command & Control System (RangeC2) developed as part of the Johns Hopkins University Applied Physics Laboratory’s implementation of the NCR [3]. Our discussion includes the RangeC2’s functional and non-functional requirements, the rationale behind its partitioning into layered subsystems, an analysis of each subsystem’s fundamental mechanisms, and an in-depth look at their processing paradigms and data flows.

To meet the demands of this range, the RangeC2 was required to perform three primary jobs: 1) management of all range resources; 2) management of numerous concurrent experiments; and 3) enforcement of each experiment’s resource security and perimeter isolation. Our discussion of the architecture will show how these requirements were met while overcoming the RangeC2’s most critical challenges.
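
A purely illustrative sketch of the first and third of these jobs (hypothetical code and names, not the actual RangeC2 design): a resource allocator that assigns nodes exclusively to one experiment and permits communication only within that experiment's perimeter.

    # Hypothetical sketch of security-level-aware resource allocation;
    # all names and policy choices are illustrative, not the RangeC2's.

    class AllocationError(Exception):
        pass

    class RangeAllocator:
        def __init__(self, nodes):
            self.free = set(nodes)          # all nodes start unassigned
            self.assigned = {}              # node -> (experiment, level)

        def allocate(self, experiment, level, count):
            """Grant `count` free nodes exclusively to one experiment."""
            if len(self.free) < count:
                raise AllocationError("insufficient free resources")
            granted = [self.free.pop() for _ in range(count)]
            for node in granted:
                self.assigned[node] = (experiment, level)
            return granted

        def may_communicate(self, a, b):
            """Allow traffic only within one experiment's perimeter."""
            return (a in self.assigned and
                    self.assigned[a] == self.assigned.get(b))

    alloc = RangeAllocator("node%d" % i for i in range(8))
    exp1 = alloc.allocate("exp-1", "SECRET", 3)
    exp2 = alloc.allocate("exp-2", "UNCLASSIFIED", 2)
    assert alloc.may_communicate(exp1[0], exp1[1])
    assert not alloc.may_communicate(exp1[0], exp2[0])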

Beyond Disk Imaging for Preserving User State in Network Testbeds

Jelena Mirkovic, Abdulla Alwabel, and Ted Faber, USC Information Sciences Institute

Many network testbeds today allow users to create their own disk images as a way of saving experimental state between allocations. We examine the effect of this practice on testbed operations. We find that disk imaging is very popular among both research and class users. Excessive disk image creation makes OS upgrades and patches time-consuming, leading over time to experiments that use old and vulnerable images. Since older images are not supported on new testbed hardware, this practice also hurts users by reducing their chance of successful resource allocation. Finally, disk images are usually large, requiring excessive storage space on testbeds.

We then propose and evaluate three alternatives to disk imaging. We find that each approach significantly reduces storage requirements and produces a list of OS image customizations that may help testbed users upgrade their images to newer OS versions. While applying such a list would still be a largely manual process, we believe our results show promise and identify the need for further research in this area.
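
One way to read "a list of OS image customizations" concretely is as a manifest diff against the base image; the sketch below (our hypothetical illustration, not the authors' tool) assumes both images are mounted as directory trees.

    # Hypothetical sketch: record a user's customizations as a diff of
    # file-content hashes against the base OS image, rather than storing
    # the full disk image. Assumes both images are mounted as directories.
    import hashlib
    import os

    def manifest(root):
        """Map each relative file path under root to a content hash."""
        result = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                with open(path, "rb") as f:
                    result[rel] = hashlib.sha256(f.read()).hexdigest()
        return result

    def customizations(base_root, user_root):
        base, user = manifest(base_root), manifest(user_root)
        added   = [p for p in user if p not in base]
        changed = [p for p in user if p in base and user[p] != base[p]]
        removed = [p for p in base if p not in user]
        return added, changed, removed

    # Only the added/changed files and a deletion list need storing,
    # together with a pointer to the (shared, patchable) base image.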

Towards a Framework for Evaluating BGP Security

Olaf Maennel and Iain Phillips, Loughborough University; Debbie Perouli, Purdue University; Randy Bush, Internet Initiative Japan; Rob Austein, Dragon Research Labs; Askar Jaboldinov, Loughborough University

Security and performance evaluation of Internet protocols can be greatly aided by emulation in realistic deployment scenarios. We describe our implementation of such methods, which uses high-level abstractions to bring simplicity to a virtualized test-lab.

We argue that current test-labs have not adequately captured the challenges of realistic deployment, partly because their design is too static. To achieve more flexibility and to allow the experimenter to easily deploy many alternative scenarios, we need abstractions that allow auto-configuration and auto-deployment of real router and server code in a multi-AS infrastructure. We need to be able to generate scenarios for multi-party players in a fully isolated emulated test-lab and deploy the network using virtualized routers, switches, and servers.

In this paper, our abstractions are specifically designed to evaluate the BGP security framework currently being documented by the IETF SIDR working group. We capture the relevant aspects of the SIDR security proposals and allow experimenters to evaluate the technology in topologies of real router and server code. We believe such methods are also useful for teaching newcomers and operators, as they allow them to gain experience in a sandbox before deployment. They allow security experts to set up controlled experiments at various levels of complexity and to concentrate on discovering weaknesses instead of spending time on tedious configuration tasks. Finally, they allow router vendors and implementers to test their code and to perform scalability evaluations.
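
As a toy illustration of auto-configuration from a high-level topology (hypothetical format and template, not the paper's actual abstractions), one can expand an AS-level link list into per-router eBGP stanzas:

    # Hypothetical sketch: expand a high-level AS topology into per-router
    # eBGP neighbor stanzas. The input format and config template are
    # illustrative only.
    links = [
        # (as_a, router_a, addr_a, as_b, router_b, addr_b)
        (65001, "r1", "10.0.0.1", 65002, "r2", "10.0.0.2"),
        (65002, "r2", "10.0.1.1", 65003, "r3", "10.0.1.2"),
    ]

    def bgp_config(asn, router):
        lines = ["router bgp %d" % asn]
        for a_as, a_rtr, a_ip, b_as, b_rtr, b_ip in links:
            if (asn, router) == (a_as, a_rtr):
                lines.append(" neighbor %s remote-as %d" % (b_ip, b_as))
            elif (asn, router) == (b_as, b_rtr):
                lines.append(" neighbor %s remote-as %d" % (a_ip, a_as))
        return "\n".join(lines)

    print(bgp_config(65002, "r2"))
    # router bgp 65002
    #  neighbor 10.0.0.1 remote-as 65001
    #  neighbor 10.0.1.2 remote-as 65003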

10:00 a.m.–10:30 a.m. Monday

Break

Grand Ballroom Foyer

10:30 a.m.–Noon Monday

Malware and Attacks

Analyzing Resiliency of the Smart Grid Communication Architectures under Cyber Attack

Anas AlMajali, Arun Viswanathan, and Clifford Neuman, USC/Information Sciences Institute

Smart grids are susceptible to cyber-attack as a result of the new communication, control, and computation techniques employed in the grid. In this paper, we characterize and analyze the resiliency of a smart grid communication architecture, specifically an RF mesh-based architecture, under cyber attack. We analyze the resiliency of the communication architecture by studying the performance of high-level smart grid functions, such as metering and demand response, that depend on communication. Disrupting the operation of these functions impacts the operational resiliency of the smart grid. Our analysis shows that an attacker needs to compromise only a small fraction of meters to undermine the communication resiliency of the smart grid. We discuss the implications of our results for critical smart grid functions and for the overall security of the smart grid.
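
A rough feel for this kind of result can be had with a percolation-style simulation (our hypothetical sketch, not the authors' model): remove a random fraction of meters from an RF mesh and measure how many surviving meters still reach the collector.

    # Hypothetical sketch: degrade an RF mesh by "compromising" a random
    # fraction of meters and measure reachability to the collector.
    # Requires networkx; topology and parameters are illustrative.
    import random
    import networkx as nx

    def surviving_fraction(n_meters, radius, compromised_frac, trials=20):
        reachable = 0.0
        for _ in range(trials):
            g = nx.random_geometric_graph(n_meters + 1, radius)
            collector = 0                      # node 0 acts as collector
            meters = list(range(1, n_meters + 1))
            down = set(random.sample(meters, int(compromised_frac * n_meters)))
            g.remove_nodes_from(down)
            alive = [m for m in meters if m not in down]
            ok = sum(1 for m in alive if nx.has_path(g, m, collector))
            reachable += ok / max(len(alive), 1)
        return reachable / trials

    for frac in (0.0, 0.05, 0.10, 0.20):
        print(frac, surviving_fraction(200, 0.12, frac))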

Virtual Machine Introspection in a Hybrid Honeypot Architecture

Tamas K. Lengyel, Justin Neumann, and Steve Maresca, University of Connecticut; Bryan D. Payne, Nebula, Inc.; Aggelos Kiayias, University of Connecticut

With the recent advent of effective and practical virtual machine introspection tools, we revisit the use of hybrid honeypots as a means to implement automated malware collection and analysis. We introduce VMI-Honeymon, a high-interaction honeypot monitor which uses virtual machine memory introspection on Xen. VMI-Honeymon remains transparent to the monitored virtual machine and bypasses reliance on the untrusted guest kernel by utilizing memory scans for state reconstruction. VMI-Honeymon builds on open-source introspection and forensics tools that provide a rich set of information about intrusion and infection processes while enabling the automatic capture of the associated malware binaries. Our experiments show that using VMI-Honeymon in a hybrid setup expands the range of malware captures and is effective in capturing both known and unclassified malware samples.
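
To illustrate state reconstruction by memory scanning in the simplest possible terms (a hypothetical example; VMI-Honeymon itself builds on Xen introspection tooling), one can search a raw guest memory dump for a known structure signature and recover nearby fields:

    # Hypothetical sketch: scan a raw guest memory dump for a known byte
    # signature and recover an adjacent name field, without trusting the
    # guest kernel. Signature and offsets are illustrative, not a real
    # kernel layout.
    import re

    SIGNATURE = b"Proc"       # illustrative structure marker
    NAME_OFFSET = 0x10        # illustrative offset of the name field
    NAME_LEN = 16

    def scan_dump(path):
        with open(path, "rb") as f:
            dump = f.read()
        hits = []
        for m in re.finditer(re.escape(SIGNATURE), dump):
            start = m.start() + NAME_OFFSET
            raw = dump[start:start + NAME_LEN]
            name = raw.split(b"\x00", 1)[0].decode("ascii", "replace")
            hits.append((m.start(), name))
        return hits

    for offset, name in scan_dump("guest-memory.dump"):
        print(hex(offset), name)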

Do Malware Reports Expedite Cleanup? An Experimental Study

Marie Vasek and Tyler Moore, Southern Methodist University

Web-based malware is pervasive. Miscreants compromise insecure hosts or even set up dedicated servers to distribute malware to unsuspecting users. This scourge is fought mainly through the voluntary action of private actors who detect and report infections to affected site owners, hosting providers, and registrars. In this paper we describe an experiment to assess whether sending reports to affected parties makes a measurable difference in cleaning up malware. Using community reports of malware submitted to StopBadware over two months in Fall 2011, we find evidence that detailed notices are immediately effective: 32% of malware-distributing websites are cleaned within one day of sending a notice, compared to just 13% of sites not receiving a notice. The improved cleanup rate holds over longer periods, too: 62% of websites receiving a detailed notice were cleaned up after 16 days, compared to 45% of websites not receiving a notice. Including details describing the compromise turns out to be essential for a notice to work: sending reports with minimal descriptions of the malware was roughly as effective as not sending reports at all. Furthermore, we present evidence that sending multiple notices from two sources is not helpful; only the first transmitted notice makes a difference.
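
The underlying comparison is a standard two-proportion test; as a sketch (with placeholder counts, not the study's data), the difference between notified and non-notified cleanup rates can be checked like this:

    # Hypothetical sketch: chi-squared test of cleanup rates for notified
    # vs. non-notified sites. Counts are placeholders, NOT the study's data.
    from scipy.stats import chi2_contingency

    #                cleaned, not cleaned
    notified     = [32, 68]   # placeholder: 32% of 100 sites
    not_notified = [13, 87]   # placeholder: 13% of 100 sites

    chi2, p, dof, expected = chi2_contingency([notified, not_notified])
    print("chi2=%.2f p=%.4f" % (chi2, p))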

Noon–1:30 p.m. Monday

Workshop Luncheon

Grand EFGH

1:30 p.m.–3:00 p.m. Monday

Anonymity and Privacy

Conducting an Ethical Study of Web Traffic

John F. Duncan and L. Jean Camp, Indiana University

We conducted a study of student web browsing habits at Indiana University's Bloomington campus, in which we examined the web page requests of over 1,000 students during a period of two months. In this paper, we discuss the details of the study's development and implementation from the point of view of ethical design. Concerns with stakeholder privacy, the quality of study data collection, human subjects research protocols, and unexpected data anomalies are presented in order to illustrate the many difficulties and ethical pitfalls confronting network researchers even at this small scale. Successes and failures in meeting the principles of ethical design are highlighted. A secondary contribution is the evolution of the instruments that were developed through the human subjects process. Finally, we discuss the impact of the Menlo Report (DHS-2011-0074) and similar documents on the future directions of network and security research.
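
A common safeguard in studies of this kind is pseudonymizing identifiers at collection time; a minimal sketch of the idea (our illustration, not the authors' instrument) keys a one-way HMAC so raw identities never enter the dataset:

    # Hypothetical sketch: replace user identifiers in request logs with a
    # keyed one-way hash, so a user's requests remain linkable without
    # revealing the user. Key handling here is illustrative only.
    import hashlib
    import hmac

    SECRET_KEY = b"per-study key, kept off the analysis machines"

    def pseudonym(user_id):
        digest = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

    record = {"user": pseudonym("alice@campus.example"), "host": "example.org"}
    print(record)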

Methodically Modeling the Tor Network

Rob Jansen, U.S. Naval Research Laboratory; Kevin Bauer, University of Waterloo; Nicholas Hopper, University of Minnesota; Roger Dingledine, The Tor Project

Live Tor network experiments are difficult due to Tor’s distributed nature and the privacy requirements of its client base. Alternative experimentation approaches, such as simulation and emulation, must make choices about how to model various aspects of the Internet and Tor that are not possible or not desirable to duplicate or implement directly. This paper methodically models the Tor network by exploring and justifying every modeling choice required to produce accurate Tor experimentation environments. We validate our model using two state-of-the-art Tor experimentation tools and measurements from the live Tor network. We find that our model enables experiments that characterize Tor’s load and performance with reasonable accuracy.
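
One recurring modeling choice in such environments is scaling the live relay population down while preserving its bandwidth distribution; a hedged sketch of one way to do that (illustrative, not the paper's exact procedure):

    # Hypothetical sketch: sample a scaled-down relay set whose bandwidth
    # distribution tracks the live network's, a typical step when building
    # a small Tor test network. Input data here is synthetic.
    import random

    def sample_relays(relays, scale, seed=0):
        """relays: list of (fingerprint, bandwidth); 0 < scale <= 1."""
        rng = random.Random(seed)
        k = max(1, int(len(relays) * scale))
        weights = [bw for _, bw in relays]
        # Weighted sampling (with replacement, for simplicity) biases the
        # sample toward high-bandwidth relays; a real tool would also match
        # relay roles such as guard and exit, and deduplicate.
        return rng.choices(relays, weights=weights, k=k)

    rng = random.Random(1)
    live = [("relay%d" % i, 100 * rng.paretovariate(2)) for i in range(5000)]
    model = sample_relays(live, scale=0.02)
    print(len(model), "relays in the scaled-down model")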

Collaborative Red Teaming for Anonymity System Evaluation

Sandy Clark, University of Pennsylvania; Chris Wacek, Georgetown University; Matt Blaze and Boon Thau Loo, University of Pennsylvania; Micah Sherr and Clay Shields, Georgetown University; Jonathan Smith, University of Pennsylvania

This paper describes our experiences as researchers and developers during red teaming exercises of the SAFEST anonymity system. We argue that properly evaluating an anonymity system, particularly one that makes use of topological information and diverse relay selection strategies, as SAFEST does, presents unique challenges that are not addressed by traditional red teaming techniques. We present our efforts toward meeting these challenges and discuss the advantages of a collaborative red teaming paradigm in which developers play a supporting role during the evaluation process.

3:00 p.m.–3:30 p.m. Monday

Break

Grand Ballroom Foyer

3:30 p.m.–5:00 p.m. Monday

Games and Studies in Academic Environments

Students Who Don’t Understand Information Flow Should Be Eaten: An Experience Paper

Roya Ensafi, Mike Jacobi, and Jedidiah R. Crandall, University of New Mexico

Information flow is still relevant, from browser privacy to side-channel attacks on cryptography. However, many of the seminal ideas come from an era when multi-level secure systems were the main subject of study. Students have a hard time relating the material to today’s familiar commodity systems.

We describe our experiences developing and utilizing an online version of the game Werewolves of Miller’s Hollow (a variant of Mafia). To avoid being eaten, students must exploit inference channels on a Linux system to discover “werewolves” among a population of “townspeople.” Because the werewolves must secretly discuss and vote about who they want to eat at night, they are forced to have some amount of keystroke and network activity in their remote shells at this time. In each instance of the game the werewolves are chosen at random from among the townspeople, creating an interesting dynamic where students must think about information flow from both perspectives and keep adapting their techniques and strategies throughout the semester.
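
One classic inference channel of the kind students can exploit (a hypothetical example, not necessarily one used in the game) is that terminal activity updates the timestamps on /dev/pts entries, which any local user can stat:

    # Hypothetical example of an inference channel on a shared Linux host:
    # access times on pseudo-terminal devices advance with terminal
    # activity, so polling them reveals which sessions are currently busy.
    import os
    import time

    def active_ptys(window_seconds=30):
        now = time.time()
        active = []
        for name in os.listdir("/dev/pts"):
            path = os.path.join("/dev/pts", name)
            try:
                st = os.stat(path)
            except OSError:
                continue
            if now - st.st_atime < window_seconds:
                active.append(path)
        return active

    print("recently active terminals:", active_ptys())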

This game has engendered a great deal of enthusiasm among our students, and we have witnessed many interesting attacks that we did not anticipate. We plan to release the game under an open source software license.

Disturbed Playing: Another Kind of Educational Security Games

Sebastian Koch, Joerg Schneider, and Jan Nordholz, Technische Universitaet Berlin

Games have a long tradition in teaching IT security, ranging from international capture-the-flag competitions played by multiple teams to educational simulation games in which individual students can get a feeling for the effects of security decisions. All these games have in common that the game's main goal is maintaining security. In this paper, we propose another kind of educational security game, one whose game goal is unrelated to IT security. During the game session, however, more and more attacks on the underlying infrastructure gradually disturb the game play. Such a scenario is very close to the reality of an IT security expert, for whom establishing security is just a necessary requirement for reaching the company's goals. By preparing and analyzing the game sessions, the students learn how to develop a security policy for a simplified scenario. Additionally, the students learn to decide when to apply technical security measures, when to establish emergency plans, and which risks cannot be covered economically.

As an example of such a disturbed-playing game, we present our distributed air traffic control scenario. The game play is disturbed by coordinated attacks on the integrity and availability of the underlying network, i.e., all student teams experience the same failures at the same state of the game. Besides presenting the technical aspects of the setup, we also discuss the didactic approach and the experience gained over the past several years.
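
Coordinated disturbances of this kind can be scripted with standard Linux traffic-control tooling; a minimal sketch (hypothetical schedule and interface names, not the course's actual infrastructure):

    # Hypothetical sketch: at fixed points in the game, apply identical
    # netem impairments to every team's uplink so all teams experience the
    # same failure at the same game state. Names and timings illustrative.
    import subprocess
    import time

    SCHEDULE = [
        # (seconds into the game, netem arguments)
        (300, ["loss", "30%"]),             # packet loss hits integrity
        (600, ["delay", "500ms", "50ms"]),  # heavy delay and jitter
        (900, ["loss", "100%"]),            # full outage
    ]
    TEAM_IFACES = ["team1-eth0", "team2-eth0"]  # illustrative interfaces

    start = time.time()
    for at, args in SCHEDULE:
        time.sleep(max(0, at - (time.time() - start)))
        for iface in TEAM_IFACES:
            subprocess.run(["tc", "qdisc", "replace", "dev", iface,
                            "root", "netem", *args], check=True)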

Learning from Early Attempts to Measure Information Security Performance

Jing Zhang, University of Michigan; Robin Berthier, University of Illinois at Urbana-Champaign; Will Rhee and Michael Bailey, University of Michigan; Partha Pal, BBN Technologies; Farnam Jahanian, University of Michigan; William H. Sanders, University of Illinois at Urbana-Champaign

The rapid evolution of threat ecosystems and the shifting focus of adversarial actions complicate efforts to assure security of an organization’s computer networks. Efforts to build a rigorous science of security, one consisting of sound and reproducible empirical evaluations, start with measures of these threats, their impacts, and the factors that influence both attackers and victims. In this study, we present a careful examination of the issue of account compromise at two large academic institutions. In particular, we evaluate different hypotheses that capture common perceptions about factors influencing victims (e.g., demographics, location, behavior) and about the effectiveness of mitigation efforts (e.g., policy, education). While we present specific and sometimes surprising results of this analysis at our institutions, our goal is to highlight the need for similar in-depth studies elsewhere.
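
Hypotheses of this form are often tested by regressing compromise outcomes on candidate factors; a hedged sketch with placeholder features and synthetic labels (not the study's data or methods):

    # Hypothetical sketch: logistic regression of "account was compromised"
    # on candidate factors. Features and labels are synthetic placeholders,
    # NOT the study's dataset.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 2, n),    # e.g., completed security training (0/1)
        rng.integers(18, 70, n),  # e.g., age
        rng.integers(0, 2, n),    # e.g., logs in from off campus (0/1)
    ])
    y = rng.integers(0, 2, n)     # placeholder outcome labels

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("per-factor coefficients:", model.coef_)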
