All sessions will be held in Grand Ballroom D unless otherwise noted.
Papers are available for download below to registered attendees now and to everyone beginning Monday, August 12, 2019. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author(s).
Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)
Monday, August 12
8:00 am–9:00 am
Continental Breakfast
Grand Ballroom Foyer
9:10 am–10:25 am
Cyberphysical and Embedded Testbeds and Techniques
Session Chair: Eric Eide, University of Utah
Design and Implementation of a Cyber Physical Testbed for Security Training
Paul Pfister, Mathew L. Wymore, Doug Jacobson, and Daji Qiao, Iowa State University
Long Experience Paper
This paper describes a CPS (Cyber-Physical System) extension to ISEAGE, our Internet event simulator at Iowa State University, for use in Cyber Defense Competitions (CDCs). The CPS extension consists of a virtual control system that interfaces with CPS devices, a human-machine interface (HMI) to access and control the devices, a virtual world that simulates the physical effects of the devices, and a backend that supports the use of the CPS component in CDCs. These components communicate with each other using the Open Platform Communications (OPC) standard. We developed a CPS-CDC scenario in which participants are tasked with defending two CPS networks representing water and power utilities. To enhance the experience for participants, we also 3D-printed a model of the virtual city served by these utilities; the 3D city reflects the state of the competition via LEDs that report the availability of services. The design of our CPS-CDC is highly modular, supporting any number of different CPS devices and systems, and can be adapted to a wide set of possible CPS scenarios.
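As an illustrative aside: the components above communicate over OPC, and an HMI-style poll of a device value over OPC UA might look like the following minimal sketch. It assumes the python-opcua package; the endpoint URL and node ID are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of an HMI polling a CPS device value over OPC UA.
# Assumes the python-opcua package; the endpoint and node ID below
# are hypothetical placeholders, not values from the paper.
import time
from opcua import Client

client = Client("opc.tcp://localhost:4840")  # hypothetical OPC UA server
client.connect()
try:
    pump_state = client.get_node("ns=2;i=1001")  # hypothetical node ID
    for _ in range(10):
        print("pump state:", pump_state.get_value())
        time.sleep(1.0)  # poll once per second
finally:
    client.disconnect()
```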
Implementation of Programmable CPS Testbed for Anomaly Detection
Hyeok-Ki Shin, Woomyo Lee, Jeong-Han Yun, and HyoungChun Kim, The Affiliated Institute of ETRI
Long Preliminary Work Paper
A large number of studies have provided datasets for CPS security research, but these datasets see little actual use, and it is difficult to objectively compare and analyze research results based on different testbeds or datasets. Our goal is to create public datasets for CPS security researchers working on anomaly detection. It is challenging for individuals to repeatedly collect long-term datasets for a large number of scenarios, and manual collection invites mistakes and inaccurate information; the collection process must therefore be convenient and automated. For this purpose, we constructed a testbed in which three physical control systems (a GE turbine, an Emerson boiler, and a FESTO water treatment system) can be combined with each other through a dSPACE Hardware-in-the-Loop (HIL) simulator. We have built an environment that can automatically control each sensor and control point remotely. Using this environment, it is possible to collect datasets while repeatedly running a large number of benign/malicious scenarios over long periods with minimal human effort. We will develop and release CPS datasets using this testbed in the future.
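As an illustrative aside: an automated collection loop of the kind this abstract describes might look like the sketch below. The set_point and read_sensors callables are hypothetical stand-ins for the testbed's remote-control interface, not part of the authors' system.

```python
# Sketch of an automated scenario driver: hold each setpoint for a
# fixed interval while logging all sensor readings to CSV. The
# set_point() and read_sensors() callables are hypothetical stand-ins
# for a testbed's remote-control interface.
import csv
import time

def run_scenario(setpoints, hold_s, log_path, set_point, read_sensors):
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "sensor", "value"])
        t0 = time.time()
        for target in setpoints:
            set_point(target)                      # drive the process
            hold_until = time.time() + hold_s
            while time.time() < hold_until:
                now = time.time() - t0
                for name, value in read_sensors().items():
                    writer.writerow([now, name, value])
                time.sleep(1.0)                    # 1 Hz sampling
```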
Triton: A Software-Reconfigurable Federated Avionics Testbed
Sam Crow and Brown Farinholt, UC San Diego; Brian Johannesmeyer, VU Amsterdam; Karl Koscher, University of Washington; Stephen Checkoway, Oberlin College; Stefan Savage, Aaron Schulman, and Alex C. Snoeren, UC San Diego; Kirill Levchenko, University of Illinois
Long Preliminary Work Paper
This paper describes the Triton federated-avionics security testbed that supports testing real aircraft electronic systems for security vulnerabilities. Because modern aircraft are complex systems of systems, the Triton testbed allows multiple systems to be instantiated for analysis in order to observe the aggregate behavior of multiple aircraft systems and identify their potential impact on flight safety. We describe two attack scenarios that motivated the design of the Triton testbed: ACARS message spoofing and the software update process for aircraft systems. The testbed allows us to analyze both scenarios to determine whether adversarial interference in their expected operation could cause harm. This paper does not describe any vulnerabilities in real aircraft systems; instead, it describes the design of the Triton testbed and our experiences using it.
One of the key features of the Triton testbed is the ability to mix simulated, emulated, and physical electronic systems as necessary for a particular experiment or analysis task. A physical system may interact with a simulated component or a system whose software is running in an emulator. To facilitate rapid reconfigurability, Triton is also entirely software reconfigurable: all wiring between components is virtual and can be changed without physical access to components. A prototype of the Triton testbed is used at two universities to evaluate the security of aircraft systems.
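As an illustrative aside (our toy model, not Triton's implementation): fully software-reconfigurable wiring can be pictured as a forwarding table between named component ports, so "rewiring" is just a table update.

```python
# Toy model of software-reconfigurable "virtual wiring": a bus that
# forwards frames between named component ports according to a wiring
# table. Our illustration of the idea, not Triton's implementation.
class VirtualBus:
    def __init__(self):
        self.wiring = {}    # source port -> set of destination ports
        self.handlers = {}  # port -> callable invoked on delivery

    def attach(self, port, handler):
        self.handlers[port] = handler

    def connect(self, src, dst):
        self.wiring.setdefault(src, set()).add(dst)

    def disconnect(self, src, dst):
        self.wiring.get(src, set()).discard(dst)

    def send(self, src, frame):
        for dst in self.wiring.get(src, ()):
            self.handlers[dst](frame)

bus = VirtualBus()
bus.attach("fms.acars_in", lambda f: print("FMS received:", f))
bus.connect("radio.acars_out", "fms.acars_in")  # rewire in software
bus.send("radio.acars_out", b"hypothetical ACARS frame")
```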
CAERUS: Chronoscopic Assessment Engine for Recovering Undocumented Specifications
Adam Seitz, Adam Satar, and Brian Burke, Rose-Hulman Institute of Technology; Lok Yan, Air Force Research Laboratory, Rome, NY, USA; Zachary Estrada, Rose-Hulman Institute of Technology
Short Preliminary Work Paper
A significant feature of embedded systems, in particular legacy systems, is their sensitivity to signal timing. Any modifications (e.g., security protections) to legacy systems could affect the timing of critical control signals. Some timing properties are well known (e.g., baud rates for communication), but others are not well specified or understood. We present CAERUS, a hardware/software framework for recovering the undocumented timing properties of embedded systems. CAERUS provides a record/replay mechanism with signal-mutation capabilities and is built on commodity and open-source components.
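As an illustrative aside: the core of a record/replay mechanism with timing mutation can be sketched in a few lines; recorded events are (time offset, level) pairs, and replay re-emits them with bounded jitter so timing sensitivity can be probed. This is our illustration of the idea, not CAERUS's implementation.

```python
# Sketch of signal record/replay with timing mutation: re-emit
# recorded (time-offset, level) edges with bounded random jitter.
# Our illustration of the idea, not CAERUS's implementation.
import random
import time

def replay(events, emit, jitter_s=0.0):
    """events: list of (t_offset_s, level); emit: callable(level)."""
    t0 = time.monotonic()
    for t_offset, level in events:
        mutated = t_offset + random.uniform(-jitter_s, jitter_s)
        while time.monotonic() - t0 < mutated:
            pass  # busy-wait until the (mutated) edge time
        emit(level)

recorded = [(0.000, 1), (0.013, 0), (0.021, 1)]  # hypothetical capture
replay(recorded, emit=lambda lvl: print(time.monotonic(), lvl),
       jitter_s=0.002)
```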
10:25 am–10:55 am
Break with Refreshments
Grand Ballroom Foyer
10:55 am–12:10 pm
Data and Metrics
Session Chair: Elissa Redmiles, Microsoft Research and Princeton University
Is Less Really More? Towards Better Metrics for Measuring Security Improvements Realized Through Software Debloating
Michael D. Brown and Santosh Pande, Georgia Institute of Technology
Long Research Paper
Nearly all modern software suffers from bloat that negatively impacts its performance and security. To combat this problem, several automated techniques have been proposed to debloat software. A key metric used in these works to demonstrate improved security is code reuse gadget count reduction. The use of this metric is based on the prevailing idea that reducing the number of gadgets available in a software package reduces its attack surface and makes mounting a gadget-based code reuse exploit such as return-oriented programming (ROP) more difficult for an attacker. In this paper, we challenge this idea and show through a variety of realistic debloating scenarios the flaws inherent to the gadget count reduction metric. Specifically, we demonstrate that software debloating can achieve high gadget count reduction rates, yet fail to limit an attacker’s ability to construct an exploit. Worse yet, in some scenarios high gadget count reduction rates conceal instances in which software debloating makes security worse by introducing new quality gadgets. To address these issues, we propose new metrics based on quality rather than quantity for assessing the security impact of software debloating. We show that these metrics can be efficiently calculated with our Gadget Set Analyzer tool. Finally, we demonstrate the utility of these metrics through a realistic debloating case study.
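As an illustrative aside (a toy example, not the authors' Gadget Set Analyzer): the abstract's central point is easy to see if gadgets are compared as sets rather than counted, since a large count reduction can coexist with newly introduced gadgets.

```python
# Why gadget *count* reduction can mislead: compare the gadget sets
# before and after debloating. The gadget strings are hypothetical.
before = {"pop rdi; ret", "pop rsi; ret", "mov [rdi], rsi; ret", "ret"}
after = {"pop rdi; ret", "syscall; ret"}  # smaller set, but partly new

reduction = 1 - len(after) / len(before)
introduced = after - before  # gadgets that did not exist before

print(f"gadget count reduction: {reduction:.0%}")  # looks like progress
print(f"newly introduced gadgets: {introduced}")   # may aid an attacker
```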
A Data-Driven Reflection on 36 Years of Security and Privacy Research
Aniqua Baset and Tamara Denning, University of Utah
Long Preliminary Work Paper
Meta-research---research about research---allows us, as a community, to examine trends in our research and make informed decisions regarding the course of our future research activities. Additionally, overviews of past research are particularly useful for researchers or conferences new to the field. In this work we use topic modeling to identify topics within the field of security and privacy research using the publications of the IEEE Symposium on Security & Privacy (1980-2015), the ACM Conference on Computer and Communications Security (1993-2015), the USENIX Security Symposium (1993-2015), and the Network and Distributed System Security Symposium (1997-2015). We analyze and present the data from the perspectives of topic trends and authorship. We believe our work serves to contextualize the academic field of computer security and privacy research via one of the first data-driven analyses. An interactive visualization of the topics and corresponding publications is available at https://secprivmeta.net.
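As an illustrative aside: a minimal topic-modeling pipeline in the spirit of this work, using gensim's LDA implementation; the four tokenized "documents" are a toy stand-in for decades of S&P, CCS, USENIX Security, and NDSS abstracts.

```python
# Minimal LDA topic-modeling sketch with gensim. The toy corpus
# stands in for tokenized paper abstracts from the four venues.
from gensim import corpora, models

docs = [
    ["cache", "side", "channel", "attack", "timing"],
    ["phishing", "user", "study", "email", "browser"],
    ["cache", "timing", "aes", "key", "recovery"],
    ["user", "privacy", "survey", "behavior"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id, terms in lda.print_topics():
    print(topic_id, terms)
```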
Lessons from Using the I-Corps Methodology to Understand Cyber Threat Intelligence Sharing
Josiah Dykstra, Matt Fante, Paul Donahue, Dawn Varva, Linda Wilk, and Amanda Johnson, U.S. Department of Defense
Long Experience Paper
Cybersecurity researchers and practitioners continually propose products and services to secure and protect against cyberthreats. Even when backed by solid cybersecurity science, these offerings are sometimes misaligned with customers’ practical needs. The Innovation Corps (I-Corps) methodology attempts to help innovators, researchers, and practitioners maximize their success through deliberate customer discovery. The National Security Agency (NSA) has adopted I-Corps for internal innovation and optimization. In February 2019, NSA Cybersecurity Operations embarked on a study using this methodology to explore cyber threat intelligence sharing. Information sharing is a foundational practice in cybersecurity. The NSA also shares cyber indicators with authorized partners, and sought to understand how partners consumed and valued the information to better tailor it to their needs. After more than 60 customer discovery problem interviews with over 20 partners, six primary themes emerged. We describe our experiences using the I-Corps methodology to study and optimize internal processes, and lessons learned from applying it to information sharing. These insights may inform future applications of I-Corps to other areas of cybersecurity research, practice, and commercialization.
Percentages, Probabilities and Professions of Performance
Jim Alves-Foss, Center for Secure and Dependable Systems, University of Idaho
Short Experience Paper
Experimental cybersecurity publications should provide readers with a reliable report of the experimental methods, the dataset(s) used, and a full analysis of the results, so that readers can fully understand the capabilities and limitations of the experiment and compare the results to those of similar tools or processes. This paper works through an example of examining experimental results in a few different ways, in an attempt to better understand the underlying processes, and we encourage other authors to do the same. We conclude with some basic recommendations.
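As an illustrative aside in the paper's spirit: a raw percentage hides how little a small sample can support, and a confidence interval makes the uncertainty explicit. A minimal sketch using the Wilson score interval:

```python
# A percentage means little without its uncertainty: Wilson score
# interval for k successes in n trials (z = 1.96 for ~95% coverage).
from math import sqrt

def wilson_interval(k, n, z=1.96):
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# "90% detection" means very different things at different sample sizes:
for n in (10, 100, 1000):
    lo, hi = wilson_interval(round(0.9 * n), n)
    print(f"n={n}: 90% detection, 95% CI = [{lo:.2f}, {hi:.2f}]")
```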
12:10 pm–1:30 pm
Monday Luncheon
Terra Courtyard
1:30 pm–2:45 pm
Usability, Effects, and Impacts
Session Chair: Heather Crawford, Florida Institute of Technology
The Impact of Secure Transport Protocols on Phishing Efficacy
Zane Ma, Joshua Reynolds, Joseph Dickinson, Kaishen Wang, Taylor Judd, Joseph D. Barnes, Joshua Mason, and Michael Bailey, University of Illinois at Urbana-Champaign
Long Extended Work Paper
Secure transport protocols have become widespread in recent years, primarily due to growing adoption of HTTPS and SMTP over TLS. Worryingly, prior user studies have shown that users often do not understand the security that is provided by these protocols and may assume protections that do not exist. This study investigates how the security protocol knowledge gap impacts user behavior by performing a phishing experiment on 266 users that A/B tests the effects of HTTP/HTTPS and SMTP/SMTP+TLS on phishing susceptibility. Secure email transport had minimal effect, while HTTPS increased the click-through rate of email phishing links (72.0% HTTPS, 60.0% HTTP) and the credential-entry rate of phishing sites (58.0% HTTPS, 55.6% HTTP). However, our results are merely suggestive and do not rise to the level of statistical significance (p = 0.17 click-through, p = 0.31 credential-entry). To better understand the factors that affect credential-entry, we categorized differences in browser presentation of HTTP/HTTPS and correlated participant susceptibility with browser URL display features. We administered a follow-up survey for phishing victims, which was designed to provide qualitative insights for observed outcomes, but it did not yield meaningful results. Overall, this study is a suggestive look at the behavioral impact of secure transport protocols and can serve as a basis for future larger-scale studies.
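As an illustrative aside: significance for a 2x2 A/B outcome like this is commonly checked with Fisher's exact test. The counts below are illustrative reconstructions consistent with the reported rates, not the paper's raw data (the abstract reports rates, not per-arm sample sizes).

```python
# Fisher's exact test on a 2x2 phishing A/B outcome. The counts are
# illustrative reconstructions (~72% of 132 vs. ~60% of 134), NOT the
# paper's raw data.
from scipy.stats import fisher_exact

table = [[95, 37],   # HTTPS arm: clicked, did not click (hypothetical)
         [80, 54]]   # HTTP arm: clicked, did not click (hypothetical)
oddsratio, p = fisher_exact(table)
print(f"odds ratio = {oddsratio:.2f}, p = {p:.2f}")  # significant only if p < 0.05
```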
Evaluating the Long-term Effects of Parameters on the Characteristics of the Tranco Top Sites Ranking
Victor Le Pochat, Tom Van Goethem, and Wouter Joosen, imec-DistriNet, KU Leuven
Long Extended Work Paper
Although researchers often use top websites rankings for web measurements, recent studies have shown that, due to the inherent properties of these rankings and their susceptibility to manipulation, they can have a large and unknown influence on research results and conclusions. In response, we provide Tranco, a research-oriented approach for aggregating these rankings transparently and reproducibly.
We analyze the long-term properties of the Tranco ranking and determine whether it contains a balanced set of domains. We compute how well Tranco captures websites that are responsive, regularly visited and benign. Through one year of rankings, we also examine how the default parameters of Tranco create a stable, robust and comprehensive ranking.
Through our evaluation, we provide an understanding of the characteristics of Tranco that are important for research and of the impact of parameters on the ranking composition. This informs researchers who want to use Tranco in a sound and reproducible manner.
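As an illustrative aside: one simple stability measure of the kind such an evaluation needs is the day-over-day Jaccard overlap of two list snapshots. The sketch below assumes two locally saved "rank,domain" CSV files; daily lists are published at https://tranco-list.eu.

```python
# Day-over-day stability of a top-sites ranking as Jaccard overlap.
# Assumes two locally saved "rank,domain" CSV snapshots; the file
# names are hypothetical.
import csv

def load_domains(path, top_n=100_000):
    with open(path, newline="") as f:
        return {row[1] for _, row in zip(range(top_n), csv.reader(f))}

day1 = load_domains("tranco_day1.csv")  # hypothetical local snapshot
day2 = load_domains("tranco_day2.csv")
jaccard = len(day1 & day2) / len(day1 | day2)
print(f"day-over-day Jaccard overlap of top 100k: {jaccard:.3f}")
```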
Comparative Measurement of Cache Configurations’ Impacts on Cache Timing Side-Channel Attacks
Xiaodong Yu, Ya Xiao, Kirk Cameron, and Danfeng (Daphne) Yao, Department of Computer Science, Virginia Tech
Long Research Paper
Time-driven and access-driven attacks are the two dominant types of timing-based cache side-channel attacks. Although access-driven attacks have been popular in recent years, investigating time-driven attacks is still worth the effort, because, in contrast to access-driven attacks, time-driven attacks are independent of the attacker's cache-access privileges.
Although cache configurations can affect the performance of time-driven attacks, it is unclear how different cache parameters influence the attacks' success rates. This question remains open because comparative measurements are extremely difficult to conduct: configurable caches are unavailable in existing CPU products.
In this paper, we use the GEM5 platform to measure the impacts of different cache parameters, including private cache size and associativity, shared cache size and associativity, cacheline size, replacement policy, and clusivity. To make time-driven attacks comparable, we define the equivalent key length (EKL) to describe the attacks' success rates. Key findings from the measurement results include: (i) the private cache has a key effect on the attacks' success rates; (ii) changing the shared cache has a trivial effect on the success rates, but adding neighbor processes can make the effect significant; (iii) the Random replacement policy leads to the highest success rates, while LRU/LFU lead to the lowest; and (iv) the exclusive policy makes the attacks harder to succeed compared to the inclusive policy. Finally, we leverage these findings to provide suggestions to attackers and defenders, as well as to future system designers.
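As an illustrative aside: the raw signal a time-driven attack exploits is that total execution time varies with secret-dependent cache behavior. A Bernstein-style collection loop, sketched below with the pycryptodome package, buckets average AES encryption latency by one plaintext byte; it illustrates the attack class, not the paper's GEM5 methodology.

```python
# Time-driven measurement sketch: bucket average AES encryption
# latency by the first plaintext byte, the raw signal a Bernstein-
# style timing attack correlates against. Assumes pycryptodome;
# illustration only, not the paper's GEM5 setup.
import os
from time import perf_counter_ns
from Crypto.Cipher import AES

cipher = AES.new(os.urandom(16), AES.MODE_ECB)
totals, counts = [0] * 256, [1e-9] * 256  # tiny epsilon avoids /0

for _ in range(200_000):
    pt = os.urandom(16)
    b = pt[0]
    t0 = perf_counter_ns()
    cipher.encrypt(pt)
    totals[b] += perf_counter_ns() - t0
    counts[b] += 1

avg = [totals[b] / counts[b] for b in range(256)]
print("slowest first-byte value:", max(range(256), key=avg.__getitem__))
```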
An Assessment of the Usability of Cybercrime Datasets
Ildiko Pete and Yi Ting Chua, University of Cambridge
Short Preliminary Work Paper
Cybersecurity datasets play a vital role in cybersecurity research. Following the identification of potential cybersecurity datasets, researchers access and manipulate data in the selected datasets. This work aims to identify potential usability issues associated with dataset access and data manipulation through a case study of the data sharing process at the Cambridge Cybercrime Centre (CCC). We collect survey responses from current users of the datasets offered by the CCC, and apply a thematic analysis approach to identify obstacles to the uptake of these datasets and areas of improvement in the data sharing process. The identified themes suggest that users' level of technological competence, including previous experience with other datasets, facilitates the uptake of the CCC's datasets. Additionally, users' experiences with different stages of the data sharing process, such as downloading and setting up the datasets, highlight areas for improvement. We conclude that addressing the identified issues would facilitate cybersecurity dataset adoption in the wider research community.
2:45 pm–2:55 pm
Short Break
2:55 pm–4:00 pm
Problems and Approaches
Session Chair: David Balenson, SRI International
Automated Attack Discovery in Data Plane Systems
Qiao Kang, Jiarong Xing, and Ang Chen, Rice University
Short Preliminary Work Paper
Recently, researchers have developed a wide range of distributed systems that rely on programmable data planes in emerging switch hardware. Unlike traditional SDN switches, these new switches can be reconfigured to support user-defined protocols, customized packet processing, and sophisticated state. However, despite their popularity, one aspect that has received very little attention is their security implications.
This paper describes our ongoing investigation of a new class of attacks on these systems, which we call sensitivity attacks. We found that an attacker can generate malicious traffic patterns to "flip" the expected behaviors of a data plane system. We propose an approach for discovering attack vectors in a given data plane system and generating patches, both in an automated manner, and we present a set of preliminary experiments to demonstrate the feasibility of this approach.
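As an illustrative aside: a black-box version of such attack discovery is a fuzzing loop that replays crafted traffic patterns and watches for a behavioral flip. In the sketch below, send_pattern and observe_behavior are hypothetical stand-ins for a concrete traffic generator and telemetry reader; the paper's actual approach is not reproduced here.

```python
# Black-box sketch of "sensitivity" probing: replay random traffic
# patterns and report any that flip the system's observable behavior.
# send_pattern() and observe_behavior() are hypothetical stand-ins.
import random

def fuzz_for_flip(send_pattern, observe_behavior, trials=1000, seed=0):
    rng = random.Random(seed)
    baseline = observe_behavior()
    for _ in range(trials):
        pattern = [rng.choice(["SYN", "ACK", "RST"])  # toy pattern space
                   for _ in range(rng.randint(1, 32))]
        send_pattern(pattern)
        if observe_behavior() != baseline:
            return pattern  # candidate attack vector for patching
    return None
```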
Applications and Challenges in Securing Time
Fatima M. Anwar and Mani Srivastava, UCLA
Short Preliminary Work Paper
In this paper, we establish the importance of trusted time for the safe and correct operation of various applications. There are, however, challenges in securing time against hardware timer manipulation, software attacks, and malicious network delays on current systems. To secure time, we explore the timing capabilities of trusted execution technologies that put their root of trust in hardware. A key concern is that these technologies do not protect time integrity and are susceptible to various timing attacks by a malicious operating system and an untrusted network. We argue that it is essential to safeguard time-based primitives across all layers of a time stack – the hardware timers, platform software, and network time packets. This paper provides a detailed examination of vulnerabilities in current time services, followed by a set of requirements for building a secure time architecture.
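As an illustrative aside: the weakest form of the cross-layer checking the authors call for is simply comparing clocks across layers. The sketch below checks the local clock against network time using the ntplib package; per the abstract's own caveat, an untrusted network can delay or forge replies, so this is a sanity check, not a root of trust.

```python
# Sanity-check the local clock against network time. Assumes the
# ntplib package; an untrusted network can delay or forge replies,
# so this is NOT a root of trust.
import time
import ntplib

resp = ntplib.NTPClient().request("pool.ntp.org", version=3)
skew = abs(time.time() - resp.tx_time)
print(f"local vs. NTP skew: {skew:.3f}s")
if skew > 5.0:  # arbitrary illustrative threshold
    print("warning: local clock disagrees with network time")
```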
IDAPro for IoT Malware analysis?
Sri Shaila G, Ahmad Darki, Michalis Faloutsos, Nael Abu-Ghazaleh, and Manu Sridharan, University of California, Riverside
Long Research Paper
Defending against the threat of IoT malware will require new techniques and tools. An important security capability, one that precedes a number of security analyses, is the ability to reverse engineer IoT malware binaries effectively. A key question is whether PC-oriented disassemblers can be effective on IoT malware, given the differences in the malware programs and the processors that support them. In this paper, we develop a systematic approach and a tool for evaluating the effectiveness of disassemblers on IoT malware binaries. The key components of the approach are: (a) we find the source code for 20 real-world malware programs; (b) we compile them into a test set of 240 binaries using various compiler optimization options and device architectures, considering both stripped and unstripped versions of the binaries; and (c) we establish the ground truth for all these binaries for six disassembly accuracy metrics, such as the percentage of correctly disassembled instructions and the accuracy of the control flow graph. Overall, we find that IDA Pro performs well on unstripped binaries, with precision and recall of over 85% for all the metrics. However, IDA Pro's performance deteriorates significantly on stripped binaries, mainly because the recall of identifying function starts drops to around 60% for both platforms. The results for stripped ARM and MIPS binaries are similar to those for stripped x86 binaries in [1]. Interestingly, we find that most compiler optimization options, except -O3 on the MIPS architecture, have no noticeable effect on accuracy. We view our approach as an important capability for assessing and improving reverse engineering tools that focus on IoT malware.
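As an illustrative aside: several of the accuracy metrics mentioned reduce to precision and recall over sets of instruction-start (or function-start) addresses, computable directly from ground truth. The address sets below are hypothetical.

```python
# Precision/recall over instruction-start addresses, given ground
# truth and a disassembler's output. The address sets are hypothetical.
truth = {0x1000, 0x1004, 0x1008, 0x100C, 0x1010}  # ground-truth starts
found = {0x1000, 0x1004, 0x1009, 0x100C}          # disassembler output

tp = len(truth & found)
precision = tp / len(found)  # how much of the output is correct
recall = tp / len(truth)     # how much of the truth was recovered
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```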
Lessons Learned from 10k Experiments to Compare Virtual and Physical Testbeds
Jonathan Crussell, Thomas M. Kroeger, David Kavaler, Aaron Brown, and Cynthia Phillips, Sandia National Laboratories
Short Experience Paper
Virtual testbeds are a core component of cyber experimentation as they allow for fast and relatively inexpensive modeling of computer systems. Unlike simulations, virtual testbeds run real software on virtual hardware which allows them to capture unknown or complex behaviors. However, virtualization is known to increase latency and decrease throughput. Could these and other artifacts from virtualization undermine the experiments that we wish to run?
For the past three years, we have attempted to quantify where and how virtual testbeds differ from their physical counterparts to address this concern. While performance differences have been widely studied, we aim to uncover behavioral differences. We have run over 10,000 experiments and processed over half a petabyte of data. Complete details of our methodology and our experimental results from applying that methodology are published in previous work. In this paper, we describe our lessons learned in the process of constructing and instrumenting both physical and virtual testbeds and analyzing the results from each.
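As an illustrative aside: one standard way to test for behavioral (distribution-level) rather than mean-level differences between platforms is a two-sample Kolmogorov-Smirnov test. The latency samples below are synthetic placeholders, not the authors' data.

```python
# Compare a metric's distribution on physical vs. virtual testbeds
# with a two-sample KS test. The samples are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
physical = rng.normal(loc=0.50, scale=0.05, size=5000)  # ms, synthetic
virtual = rng.normal(loc=0.55, scale=0.08, size=5000)   # ms, synthetic

stat, p = ks_2samp(physical, virtual)
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
# A tiny p-value flags a distributional difference, i.e., a potential
# virtualization artifact beyond a simple mean shift.
```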
4:00 pm–4:30 pm
Break with Refreshments
Grand Ballroom Foyer
4:30 pm–5:30 pm
Testbeds and Frameworks
Session Chair: Jelena Mirkovic, USC Information Sciences Institute (ISI)
A Multi-level Fidelity Microgrid Testbed Model for Cybersecurity Experimentation
Aditya Ashok, Siddharth Sridhar, Tamara Becejac, Theora Rice, Matt Engels, Scott Harpool, Mark Rice, and Thomas Edgar, Pacific Northwest National Laboratory
Long Experience Paper
When experimenting with cybersecurity technologies for industrial control systems, it is often difficult to develop a realistic, self-contained model that provides an ability to easily measure the effects of cyber behavior on the associated physical system. To address this challenge, we have created and instantiated a microgrid cyber-physical model, where both the power distribution and the individual loads are under the control and authority of one entity. This enables cybersecurity experimentation where attacks against the physical system (grid and buildings) can be measured and defended from a single entity's infrastructure. To achieve the appropriate levels of fidelity for cybersecurity effects, our microgrid model integrates multiple levels of simulation, hardware-in-the-loop, and virtualization. In this paper, we present how we designed and instantiated this test case model in a testbed infrastructure, our efforts to validate its operation, and an exemplary multistage attack scenario to showcase the model's utility.
Proteus: A DLT-Agnostic Emulation and Analysis Framework
Russell Van Dam, Thien-Nam Dinh, Christopher Cordi, Gregory Jacobus, Nicholas Pattengale, and Steven Elliott, Sandia National Laboratories
Long Research Paper
This paper presents Proteus, a framework for conducting rapid, emulation-based analysis of distributed ledger technologies (DLTs) using FIREWHEEL, an orchestration tool that assists a user in building, controlling, observing, and analyzing realistic experiments of distributed systems. Proteus is designed to support any DLT that has some form of a "transaction" and which operates on a peer-to-peer network layer. Proteus provides a framework for an investigator to set up a network of nodes, execute rich agent-driven behaviors, and extract run-time observations. Proteus relies on common features of DLTs to define agent-driven scenarios in a DLT-agnostic way allowing for those scenarios to be executed against different DLTs. We demonstrate the utility of using Proteus by executing a 51% attack on an emulated Ethereum network containing 2000 nodes.
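As an illustrative aside on the demonstrated scenario (background math, not Proteus output): the reason 51% is the critical threshold is Nakamoto's catch-up analysis, in which an attacker with a minority of hash power succeeds only with exponentially small probability.

```python
# Attacker catch-up probability after z confirmations, from the
# Bitcoin whitepaper's analysis. Background math for the 51% attack
# scenario; not output from Proteus.
from math import exp, factorial

def attacker_success(q, z):
    """q: attacker's share of hash power; z: confirmations waited."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually wins
    lam = z * q / p
    s = 1.0
    for k in range(z + 1):
        poisson = lam ** k * exp(-lam) / factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

for q in (0.10, 0.30, 0.51):
    print(f"q={q:.2f}: P(success after 6 conf) = {attacker_success(q, 6):.4f}")
```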
The DComp Testbed
Ryan Goodfellow, Stephen Schwab, Erik Kline, Lincoln Thurlow, and Geoff Lawler, Information Sciences Institute
Long Experience Paper
The DComp Testbed effort has built a large-scale testbed, combining customized nodes and commodity switches with modular software to launch the Merge open source testbed ecosystem. Adopting EVPN routing, DCompTB employs a flexible and highly adaptable strategy to provision network emulation and infrastructure services on a per-experiment basis. Leveraging a clean separation of the experiment creation process into realization at the Merge portal and materialization on the DCompTB site, the testbed implementation embraces modularity throughout. This enables a well-defined orchestration system and an array of reusable modular tools to perform all essential functions of the DCompTB. Future work will evaluate the robustness, performance and maintainability of this testbed design as it becomes heavily used by research teams to evaluate opportunistic edge computing prototypes.
5:45 pm–6:45 pm
Monday Happy Hour
Terra Courtyard
Sponsored by Carnegie Mellon University Privacy Engineering
Mingle with other attendees while enjoying snacks and beverages. Attendees of all co-located events taking place on Monday are welcome.