Workshop Program

All sessions will be held in Harbor B unless otherwise noted.

The full papers published by USENIX for the workshop are available for download as an archive or individually below. Copyright to the individual works is retained by the author(s).

Monday, August 18, 2014

8:30 a.m.–9:00 a.m. Monday

Continental Breakfast

Harbor Foyer

9:00 a.m.–11:00 a.m. Monday

Novel Approaches to Exploit Testing

Session Chair: Terry V. Benzel, USC Information Sciences Institute (ISI)

TESTREX: a Testbed for Repeatable Exploits

Stanislav Dashevskyi and Daniel Ricardo dos Santos, University of Trento and Fondazione Bruno Kessler; Fabio Massacci, University of Trento; Antonino Sabetta, SAP Labs

Web applications are the target of many known exploits and also a fertile ground for the discovery of security vulnerabilities. Those applications may be exploitable not only because of vulnerabilities in their source code, but also because of the environments in which they are deployed and run. Execution environments usually consist of application servers, databases, and other supporting applications. In order to test whether known exploits can be reproduced in different settings, better understand their effects, and facilitate the discovery of new vulnerabilities, we need to have a reliable testbed. In this paper, we present TESTREX, a testbed for repeatable exploits, whose main features are: packing and running applications with their environments; injecting exploits and monitoring their success; and generating security reports. We also provide a corpus of example applications, taken from related works or implemented by us.

The code is available on the web at https://securitylab.disi.unitn.it/doku.php?id=software

Available Media

Safe and Automated Live Malware Experimentation on Public Testbeds

Abdulla Alwabel, Hao Shi, Genevieve Bartlett, and Jelena Mirkovic, USC/Information Sciences Institute

In this paper, we advocate for publicly accessible live malware experimentation testbeds. We introduce new advancements for high-fidelity transparent emulation and fine-grain automatic containment that make such experimentation safe and useful to researchers, and we propose a complete, extensible live-malware experimentation framework. Our framework, aided by our new technologies, facilitates a qualitative leap from current experimentation practices. It enables specific, detailed and quantitative understanding of risk, and safe, fully automated experimentation by novice users, with maximum utility to the researcher. We present preliminary results that demonstrate effectiveness of our technologies and map the path forward for public live-malware experimentation.

Available Media

Large-Scale Evaluation of a Vulnerability Analysis Framework

Nathan S. Evans, Azzedine Benameur, and Matthew C. Elder, Symantec Research Labs

Ensuring that exploitable vulnerabilities do not exist in a piece of software written using type-unsafe languages (e.g., C/C++) is still a challenging, largely unsolved problem. Current commercial security tools are improving but still have shortcomings, including limited detection rates for certain vulnerability classes and high false-positive rates (which require a security expert’s knowledge to analyze). To address this, there is a great deal of ongoing research in software vulnerability detection and mitigation as well as in experimentation and evaluation of the associated software security tools. We present the second-generation prototype of the MINESTRONE architecture along with a large-scale evaluation conducted under the IARPA STONESOUP program. This second evaluation includes improvements in the scale and realism of the test suite with real-world test programs up to 200+ KLOC. This paper presents three main contributions. First, we show that the MINESTRONE framework remains a useful tool for evaluating real-world software for security vulnerabilities. Second, we enhance the existing tools to provide detection of previously omitted vulnerabilities. Finally, we provide an analysis of the test corpus and give lessons learned from the test and evaluation.

Available Media

Simulating Malicious Insiders in Real Host-Monitored User Data

Kurt Wallnau, Brian Lindauer, and Michael Theis, Carnegie Mellon University; Robert Durst, Terrance Champion, Eric Renouf, and Christian Petersen, Skaion Corp.

Our task is to produce test data for a research program developing a new generation of insider threat detection technologies. Test data is created by injecting fictional malicious activity into a background of real user activity. We rely on fictional narratives to specify threats that simulate realistic social complexity, with “drama as data” as a central organizing metaphor. Test cases are scripted as episodes of a fictional crime series, and compiled into time-series data of fictional characters. Users are selected from background to perform the role of fictional characters that best match their real-world roles and activities. Fictional activity is blended into the activity of real users in the cast. The cast and unmodified background users perform dramas in test windows: performances are test cases. Performances by different casts of users, or by the same cast of users in different test windows, constitute distinct test cases.

Available Media

11:00 a.m.–11:30 a.m. Monday

Break with Refreshments

Harbor Foyer

11:30 a.m.–12:30 p.m. Monday

Panel

Cybersecurity Experimentation of the Future (CEF)

Moderator: David Balenson, SRI International

Panelists: Stephen Schwab, USC Information Sciences Institute (ISI); Eric Eide, University of Utah; Laura Tinnel, SRI International

Available Media

12:30 p.m.–2:00 p.m. Monday

Luncheon for Workshop Attendees


Harbor GH

2:00 p.m.–3:30 p.m. Monday

Metrics for Quantitative Security Evaluation

Session Chair: Eric Eide, University of Utah

Effective Entropy: Security-Centric Metric for Memory Randomization Techniques

William Herlands, Thomas Hobson, and Paula J. Donovan, MIT Lincoln Laboratory

User space memory randomization techniques are an emerging field of cyber defensive technology which attempts to protect computing systems by randomizing the layout of memory. Quantitative metrics are needed to evaluate their effectiveness at securing systems against modern adversaries and to compare between randomization technologies. We introduce Effective Entropy, a measure of entropy in user space memory which quantitatively considers an adversary’s ability to leverage low entropy regions of memory via absolute and dynamic inter-section connections. Effective Entropy is indicative of adversary workload and enables comparison between different randomization techniques. Using Effective Entropy, we present a comparison of static Address Space Layout Randomization (ASLR), Position Independent Executable (PIE) ASLR, and a theoretical fine grain randomization technique.

Available Media

DACSA: A Decoupled Architecture for Cloud Security Analysis

Jason Gionta, North Carolina State University; Ahmed Azab, Samsung Electronics Co., Ltd.; William Enck and Peng Ning, North Carolina State University; Xiaolan Zhang, Google Inc.

Monitoring virtual machine execution from the hypervisor provides new opportunities for evaluating cloud security. Unfortunately, traditional hypervisor based monitoring techniques tightly couple monitoring with internal VM operations and as a result 1) impose unacceptably high overhead to both guest and host environments and 2) do not scale. Towards addressing this problem, we present DACSA, a decoupled “Out-of-VM” cloud analysis architecture for cyber testing. DACSA leverages guest VMs that act as sensors to capture security centric information for analysis. Guest VMs and host environments incur minimal impact. We measure DACSA’s impact to VMs at 0-6% and host impact at 0-3% which is only incurred during state acquisition. As a result, DACSA can enable production environments as a testbed for security analysis.

Available Media

A Metric for the Evaluation and Comparison of Keylogger Performance

Tobias Fiebig, Janis Danisevskis, and Marta Piekarska, Technische Universität Berlin

In the field of IT security, the development of Proof of Concept (PoC) implementations is a commonly accepted method of determining the exploitability of an identified weakness. Most security issues provide a rather straightforward method of asserting a PoC’s efficiency: it either works or it does not. Data gathering and exfiltration techniques, however, usually remain in a position where their viability has to be empirically verified. One of these cases is mobile device keyloggers, which only recently have started to exploit side channels to infer heuristic information about a user’s input. With this introduction of side channels exploiting heuristic information, the performance of a keylogger may no longer be described as “it works and gathered what was typed.” Instead, the viability of the keylogger has to be assessed based on various typing speeds, user input styles, and many more metrics, as documented in this paper. The authors of this document provide a survey of the required metrics and features. Furthermore, they have developed a framework to assess the performance of a keylogger. This paper provides documentation on how such a study can be conducted, while the required source code is shared online.

Available Media

3:30 p.m.–4:00 p.m. Monday

Break with Refreshments

Harbor Foyer

4:00 p.m.–5:30 p.m. Monday

Panel: Human Engagement Challenges in Cyber Testing and Training

Moderator: Chris Kanich, University of Illinois at Chicago

Panelists: Jose Fernandez, École Polytechnique de Montréal; Stefan Boesen, Dartmouth College; Richard Weiss, The Evergreen State College; Melissa Danforth, California State University, Bakersfield

Computer Security Clinical Trials: Lessons Learned from a 4-month Pilot Study

Fanny Lalonde Lévesque and José M. Fernandez, École Polytechnique de Montréal

In order for the field of computer security to progress, we need to know the true value of different security technologies and understand how they work in practice. It is important to evaluate their effectiveness as they are being used in an ecologically valid environment. To this end, we postulate that security products could be evaluated by conducting computer security clinical trials. To show the feasibility of such an approach, we performed a four-month proof-of-concept study with 50 users that aimed to evaluate an anti-malware product. In this paper, we present the study we performed and provide lessons learned and recommendations on the challenges, limitations, and considerations of conducting computer security clinical trials.

Available Media

EDURange: Meeting the Pedagogical Challenges of Student Participation in Cybertraining Environments

Stefan Boesen and Richard Weiss, The Evergreen State College; James Sullivan and Michael E. Locasto, University of Calgary; Jens Mache and Erik Nilsen, Lewis and Clark College

This paper reflects on the challenges that arose and the lessons learned when we used hands-on cyber-operations exercises in our courses. After exploring a range of exercises and platforms (and having discovered their limitations), we designed and built an environment for hosting such exercises called EDURange.

These limitations fall into two categories: technical and pedagogical. One of the main pedagogical issues was that most existing exercises were not aimed at teaching analysis skills (i.e., a set of practices that support the ability to achieve understanding of complex systems). On the other hand, one of the main practical issues with existing cyber-training environments involves scalability limitations imposed by the inherent resource constraints of existing testbeds. A third techno-pedagogical issue was that scenarios were not dynamic. An exercise that is always the same has limited utility in that there is little incentive for students to repeat it, and, with time, the solutions can be found on the Internet. EDURange allows instructors to configure aspects of the scenarios to repeatedly create new variations of the exercises. EDURange is designed especially for the needs of teaching faculty. Each of the scenarios we have implemented is designed specifically to nurture the development of analysis skills in students as a complement to both theoretical security concepts and specific software tools.

Available Media

Four-Week Summer Program in Cyber Security for High School Students: Practice and Experience Report

Melissa Danforth and Charles Lam, California State University, Bakersfield

Cyber security education and outreach is a national priority. It is critical to encourage high school students to pursue studies in cyber security and related fields. High school outreach is a fundamental component of a cohesive cyber security education program. Most high school outreach programs in cyber security focus on short-term events such as a capture the flag contest or the CyberPatriot competition. While a competitive event is engaging for high school students, it does not give a comprehensive overview of cyber security education and careers.

We explored the use of a four-week, hands-on, intensive summer program for engaging and encouraging high school students to pursue cyber security education and careers. The program brings high school students and high school teachers onto the university campus to interact with university professors and university students.

Available Media