Enigma 2020 Conference Program

All sessions will be held in the Grand Ballroom unless otherwise noted.

Monday, January 27, 2020

7:30 am–8:45 am

Continental Breakfast

Grand Ballroom Foyer

8:45 am–9:00 am

Opening Remarks, Day 1

Program Co-Chairs: Ben Adida, VotingWorks, and Daniela Oliveira, University of Florida

9:00 am–10:15 am

Panel

Encrypted Messaging

Monday, 9:00 am–10:15 am

Moderator: Jon Callas, Senior Technology Fellow, ACLU

Panelists: Riana Pfefferkorn, Associate Director of Surveillance and Cybersecurity, Stanford Center for Internet and Society; Daniel J. Weitzner, Founding Director, MIT Internet Policy Research Initiative; Matt Blaze, Georgetown University

In this panel, our four experts will discuss the background of the "Crypto Wars" of the past and how we got to our current situation; how this Crypto War is different from the last one(s); the international issues and where the present threats to encryption are coming from; and how we, as a community of experts, might measure the problem and see what other mitigations might help before we go shooting ourselves in the face.

10:15 am–10:45 am

Break with Refreshments

Grand Ballroom Foyer

10:45 am–12:15 pm

Other People's Code

Session Chair: Aanchal Gupta, Facebook

Securing the Software Supply Chain

Monday, 10:45 am–11:15 am

Filippo Valsorda, Google

Modern software development relies increasingly on code reuse in the form of third-party dependencies from the Open Source ecosystem. Although each programming language has its own tooling and culture, they all encourage a widespread model of adoption without detailed review, and of eager updates to new versions.

This transitive trust in dependency authors has led to a string of high-profile availability issues and attacks: the recent rest-client Ruby gem compromise, the similar event-stream Node package compromise, the infamous left-pad incident, and many more. These episodes share patterns that we can learn from as an industry: they involve either attackers compromising developer credentials and uploading new, compromised versions, or the ecosystem losing access to the contents of existing versions.

The new Go checksum database—deployed in 2019—was designed to secure the Go modules ecosystem without requiring any extra work by module authors, like extra key management. It provides a centralized log for the checksums of all versions of all public modules. It then deploys the same technology as Certificate Transparency to keep this central authority accountable. It does not introduce any new accounts that can be compromised, and it enables third-party auditors to offer new version notifications to authors. Finally, it's designed to be easily cacheable, enabling a tradeoff in resources and privacy, from simple proxies all the way to full mirrors that don't leak any information about what modules are in use.
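
As a concrete illustration of the lookup mechanism (not material from the talk), the minimal Go sketch below fetches the signed checksum record for one public module version from sum.golang.org. The module and version are arbitrary examples, and the real go command does considerably more, including verifying the transparency-log inclusion proof against the signed tree head.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the public checksum database for the record of one module
	// version. (Illustrative only: the real `go` command also verifies
	// the transparency-log proofs and caches results locally.)
	mod, ver := "golang.org/x/text", "v0.3.0" // arbitrary example
	resp, err := http.Get("https://sum.golang.org/lookup/" + mod + "@" + ver)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The response contains the record ID, the go.sum-style hash lines,
	// and a signed tree head that auditors can check.
	fmt.Printf("%s", body)
}
```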

This talk will look at the high-level design of the checksum database and how it can be applied to other software package ecosystems to help secure the software supply chain.

Filippo Valsorda, Google

Filippo Valsorda (@FiloSottile) is a cryptography engineer on the Go team at Google. He acts as primary security coordinator for the Go Project and owns the Go cryptography standard libraries. Since joining the team, he introduced TLS 1.3 support in the Go standard library and co-designed the Go module authentication system, the Go Checksum Database. Previously at Cloudflare, he developed its experimental TLS 1.3 stack and kicked DNSSEC until it became something deployable.

Third-Party Integrations: Friend or Foe?

Monday, 11:15 am–11:45 am

Sarah Harvey, Square Inc

Microservice architecture is becoming increasingly common with the democratization of cloud computing power, and more and more organizations are realizing that it's often simply easier to pay for a particular service than to build it from scratch. The result is that many large organizations often have to grapple with hundreds if not thousands of such third-party integrations. However, performing risk analysis about these interactions—especially when it relates to the sharing of data—can be extremely time-consuming if not impossible.

In this talk, we will briefly cover typical third-party integration flows within an organization, from request to implementation. We will identify common gaps in security visibility and access, and discuss various solutions along with the degree of efficacy we have measured for each. We will argue that it is through these improvements that you will be able to build not just a more holistic but also a more consistent risk map of your organization's assets.

The aim of this talk is to show that the boring, grueling work in security is just as important as exciting 0-days! We hope to also show that there are still new exciting metrics and incident response systems you can derive from these processes.

Sarah Harvey, Square Inc

Sarah is a software engineer on a privacy engineering team at Square. Her background includes 4+ years of industry experience in security/privacy infrastructure design and engineering and 4 years of academic privacy research. She has a variety of event organizing and speaking experience; highlights include speaking at and co-organizing BSidesSF 2019, organizing and presenting a 300+ person CTF workshop at Grace Hopper, and giving a series of funny lightning talks on infrastructure security and privacy challenges.

She also has given talks as a hologram, and in general never takes herself seriously.

She can be followed for cats and tech humor on Twitter: @worldwise001.

Stack Overflow: A Story of Two Security Tales

Monday, 11:45 am–12:15 pm

Felix Fischer, Technical University of Munich

Stack Overflow helps software developers from all over the world get their daily programming tasks done. Knowledge and source code shared via this platform shape digital services and applications that are used by billions of people every day. The tremendous impact Stack Overflow has had on today's software urges us and many other researchers to investigate to what extent information security is part of the discussions on Stack Overflow, what the biggest security problems are, and how developers solve them.

Our results tell a story of two tales. In the first tale, Stack Overflow seems to be the source of all evil. It's responsible for unintentionally marketing and distributing severe software vulnerabilities we traced in high-profile applications installed by billions of people. It's been demonstrated that these vulnerabilities would allow practical attacks and theft of credentials, credit cards, and other private data. The second tale tells a complete opposite story, where Stack Overflow becomes one of the most usable and effective tools in helping developers get security right. The moral of both stories is that it only takes small design tweaks to get from one to the other.

We are deeply convinced that these kinds of modifications could have an enormous positive effect on software security in general, due to the pervasive use of Stack Overflow. Therefore, we want to highlight the most important results from usable security research of recent years to set the ball rolling. These include the major security problems identified, the impact they had on real-world applications, and how we modified Stack Overflow to effectively help people develop secure software.

Felix Fischer, Technical University of Munich

Felix Fischer is a Research Associate and PhD student of Jens Grossklags at the Chair of Cyber Trust at Technical University of Munich. He studies the interaction of people with information security and privacy technologies. His most recent publications focus on software engineers struggling with getting cryptography right and explore machine learning as a tool for usable security and privacy. His work has frequently been published at top-tier venues for security and privacy research, such as IEEE S&P, ACM CCS, and USENIX Security.

12:15 pm–1:30 pm

Lunch

The Atrium
Sponsored by Salesforce

1:30 pm–3:30 pm

Fundamentals and Infrastructure

Session Chair: Ryan Nakamoto, Facebook

Catch Me If You Can!—Detecting Sandbox Evasion Techniques

Monday, 1:30 pm–2:00 pm

Francis Guibernau, Deloitte

Just like great escape artists captivate an innocent audience with perfectly measured and planned escapes, these extraordinary illusionists of the cyber world also aim for the same goal. Using meticulous, innovative maneuvers and their specially crafted malware pieces, they are able to analyze their surroundings to detect and evade sandbox environments. At this point, they can choose to conceal their real behavior to carry out their grand finale without being detected. But, how can we see beyond the surface? How can we harden our sandbox systems in order to prevent such evasion techniques?

In this talk, we are going to reveal the techniques used by these attackers to evade sandboxes and avoid being analyzed. We will walk you through the different approaches malware takes in order to achieve this and remain undetected. Additionally, we will show you unique malware samples to examine how they implement these techniques. Finally, we will demonstrate how, thanks to the use of MITRE ATT&CK Framework, we are able to document these techniques and improve our detection and analysis systems.

Francis Guibernau, Deloitte

Francis is a Security Research Analyst on Deloitte Argentina's Cyber Threat Intelligence (CTI) team, specializing in tracking APT groups' activities worldwide by analyzing their tools, tactics, and techniques with the help of the MITRE ATT&CK Framework. He's currently finishing his studies in Information Systems Engineering at the Universidad Tecnológica Nacional (UTN).

BeyondProd: The Origin of Cloud-Native Security at Google

Monday, 2:00 pm–2:30 pm

Brandon Baker, Google

Containers and microservices are increasingly being used to deploy applications, and with good reason, given their portability, simple scalability and lower management burden. In changing from an architecture based on monolithic applications to one using distributed microservices, known as a "cloud-native" architecture, there are changes not only to operations but also to security.

Where BeyondCorp states that user trust should be dependent on characteristics like the context-aware state of devices and not the ability to connect to the corp network, BeyondProd states that service trust should be dependent on characteristics like code provenance and service identity, not the location in the production network, such as IP or hostname identity.

Just like the security model evolved beyond the castle walls with BeyondCorp, BeyondProd proposes a cloud-native security architecture that assumes no trust between services, provides isolation between multi-tenant workloads, verifiably enforces what applications are deployed, automates vulnerability management, and applies strong access controls to critical data. These principles led Google to innovate several new systems in order to meet these requirements.
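
To make "service identity, not network location" concrete, here is a minimal Go sketch of a server that requires a client certificate and authorizes a request based on the verified identity it carries rather than on the caller's IP address. The CA file, certificate files, and service name are invented for the example; Google's production infrastructure uses its own mechanisms rather than this exact pattern.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Trust only our internal CA for client (service) certificates.
	// "internal-ca.pem" and the expected peer name are hypothetical.
	caPEM, err := os.ReadFile("internal-ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	mux := http.NewServeMux()
	mux.HandleFunc("/payments", func(w http.ResponseWriter, r *http.Request) {
		// Authorize on the cryptographically verified service identity
		// in the client certificate, never on r.RemoteAddr.
		if len(r.TLS.PeerCertificates) == 0 ||
			r.TLS.PeerCertificates[0].Subject.CommonName != "checkout-frontend" {
			http.Error(w, "unknown service identity", http.StatusForbidden)
			return
		}
		fmt.Fprintln(w, "ok")
	})

	srv := &http.Server{
		Addr:    ":8443",
		Handler: mux,
		TLSConfig: &tls.Config{
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  pool,
		},
	}
	panic(srv.ListenAndServeTLS("server.pem", "server.key"))
}
```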

In this talk, we will cover what a cloud-native architecture is, and why it's different from a security point of view; design principles for security in a cloud-native world; how Google addressed these requirements and the internal tools used as part of this architecture; and how your organization might approach the same requirements. You'll come away with a better understanding of how to think about cloud-native security, and more capably decide what tools you might need to secure your infrastructure.

Brandon Baker, Google

Brandon Baker is Tech Lead for Cloud Security at Google, where he is responsible for security strategy and technical direction for the Google Cloud Platform. Brandon started the Cloud Security team at Google in 2010, building core security features to protect Google's Cloud users and infrastructure from compromise. Since the discovery of Spectre/Meltdown in July 2017, Brandon has also worked to address CPU side-channel issues from the Cloud perspective.

Brandon has specialized in virtualization, operating system, cloud, and CPU security for over 20 years, at companies including Google, Microsoft, Digex, and the U.S. Department of Defense. Brandon has also contributed to Trusted Computing research, standards bodies, and developments across the industry. He currently resides in Redmond, WA and enjoys hiking and photographing the beautiful mountains and coasts of Washington state. Brandon holds a B.Sc. degree in Computer Science from Texas A&M University.

Bringing Usable Crypto to 7 Million Developers

Monday, 2:30 pm–3:00 pm

Kenn White, MongoDB

Most databases in use today have an implicit central trust model—the idea being that system operators have full privilege to access and manage the information being processed in order to perform their work. This poses a problem in at least two particular cases: one, when the workload contains highly sensitive or confidential information, and two, when data are being processed and stored on third-party infrastructure such as a public cloud provider. In a central (or server-side) trust model, a live database breach or leak from publicly-exposed backups or logs can be catastrophic. One approach to protect both data-at-rest and data-in-use is client-side end-to-end encryption, in which sensitive data are encrypted at the application level before ever being sent to the server. Unfortunately, for mature modern databases, few options for native client-side encryption have existed for developers, particularly in the open-source world.
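
The core idea of client-side encryption can be sketched briefly: the application encrypts a sensitive field before it is ever sent to the server, so the database stores only ciphertext. The Go sketch below uses standard-library AES-GCM and is a conceptual illustration only, not MongoDB's actual client-side field level encryption API, which additionally handles key vaults, key management, and deterministic encryption for queryable fields.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptField seals one sensitive value under a key that only the
// application holds; the database stores nonce||ciphertext and can
// never recover the plaintext.
func encryptField(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // in practice, fetched from a KMS, not all-zero
	ct, err := encryptField(key, []byte("123-45-6789"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("store this in the document: %x\n", ct)
}
```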

This talk will present lessons learned from nearly two years of engineering work spanning every major programming language, hardware platform, and operating system, to bring simple, usable authenticated encryption as a first-class citizen to the most widely deployed NoSQL database in the world. Insights from simple use cases of small stand-alone servers to some of the most demanding global distributed mission systems will be discussed. We'll review promising emerging cryptography and discuss the practical impact to developers and system designers.

Kenn White, MongoDB

Kenneth White is a security engineer whose work focuses on networks and global systems. He is co-founder and Director of the Open Crypto Audit Project and led formal security reviews on TrueCrypt and OpenSSL. He currently leads applied encryption engineering in MongoDB's global product group. He has directed R&D and security Ops in organizations ranging from startups to nonprofits to defense agencies to the Fortune 50. His work on applied signal analysis has been published in the Proceedings of the National Academy of Sciences. His work on network security and forensics has been cited by the Wall Street Journal, Reuters, Wired, and the BBC. He tweets about security, privacy, cryptography, and biscuits: @kennwhite.

Pre-Authentication Messages as a Common Root Cause of Cell Network Attacks

Monday, 3:00 pm–3:30 pm

Yomna Nasser

What do almost all recent cell network attacks that affect mobile user privacy have in common? They exploit the fact that cell phones have no way of authenticating towers during the initial connection bootstrapping phase. This includes everything from older IMSI catcher-style attacks to the newer spoofing attacks against the Presidential Alerts emergency broadcast system.

In this talk, we'll cover the distinct types of attacks that pre-authentication messages used in cell connection bootstrapping enable, how this ended up being such a prevalent issue, some of the efforts underway to try and fix this, and why this is ultimately such a hard problem to solve.

Yomna Nasser

Yomna is a research engineer whose focus is cell network security. She is a Technology Fellow at EFF, was previously a core contributor to Certbot and a research fellow at Harvard Law, and has a degree in mathematics from the University of Waterloo.

3:30 pm–4:00 pm

Break with Refreshments

Grand Ballroom Foyer

4:00 pm–5:30 pm

Emerging Topics

Session Chair: Munish Walther-Puri, Presearch Strategy

Virtual Reality Brings Real Risks: Are We Ready?

Monday, 4:00 pm–4:30 pm

Kavya Pearlman, XR Safety Initiative

New technologies inevitably bring along new risks. Virtual Reality (VR) is one such technology: it is slowly creeping into our daily digital lives, yet not much attention has been paid to the risks it brings along. As the industry looks toward mass adoption of Virtual Reality, with an expected $40 billion market size and over 200 million active users by the year 2020, cyberattacks on this new domain have already begun making headlines. Kavya Pearlman, founder of the XR Safety Initiative, is busy building processes and standards and finding novel cyberattacks to stay ahead of the bad guys who are coming for this rising new domain of Virtual Reality.

Kavya Pearlman, XR Safety Initiative

Well known as the "Cyber Guardian," Kavya Pearlman is an award-winning cybersecurity professional with a deep interest in immersive and emerging technologies. Kavya is the founder of the non-profit XR Safety Initiative (XRSI). XRSI is the very first global effort that promotes privacy, security, and ethics and develops standards and guidelines for Virtual Reality, Augmented Reality, and Mixed Reality (VR/AR/MR), collectively known as XR.

Kavya has advised Facebook on third-party security risks during the 2016 US presidential election. As a Global Cybersecurity Strategist, she currently advises Wallarm, a global security company whose artificial-intelligence-powered application security platform protects hundreds of customers across e-commerce, fintech, health-tech, and SaaS.

Kavya is constantly exploring new technologies to solve current cybersecurity challenges. She has been named one of the top cybersecurity influencers for two consecutive years (2018–2019) by IFSEC Global. Kavya has won many awards for her work and contributions to the security community, including 40 under 40 Top Business Executives 2019 by the San Francisco Business Times, Rising Star of the Year 2019 by the Women in IT Award Series, and Minority CISO of the Year 2018 by ICMCP. For her work with the XR Safety Initiative, the Middle East CISO Council recently awarded her the CISO 100 Women Security Leader award in Dubai.

Kavya holds a master's degree in network security from DePaul University, Chicago, and many prestigious Information Security certifications, including CISM (Certified Information Security Manager) from ISACA, PCI-DSS-ISA (Internal Security Assessor), and PCIP from the Payment Card Industry Security Standards Council. Kavya is truly passionate about her work and inspires many around the world, including women and underrepresented communities in security and emerging technologies. Kavya gives back to the tech community by mentoring women through the "Million Women Mentor" program, serves on the board of directors of the non-profit "Minorities in Cybersecurity," and is an advisory board member for "CISO Council North America."

What Does It Mean for Machine Learning to Be Trustworthy?

Monday, 4:30 pm–5:00 pm

Nicolas Papernot, University of Toronto and Vector Institute

The attack surface of machine learning is large: training data can be poisoned, predictions manipulated using adversarial examples, models exploited to reveal sensitive information contained in training data, etc. This is in large part due to the absence of security considerations in the design of ML algorithms. Yet adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy.

Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires that we have a solid understanding as to what we expect legitimate model behavior to look like.

In this talk, we lay the basis of a framework that fosters trust in deployed ML algorithms. The approach uncovers the influence of training data on test-time predictions, which helps identify not only poison in training data but also adversarial examples or queries that could result in a leak of private information. Beyond the immediate implications for security and privacy, we demonstrate how this helps interpret and cast some light on the model's internal behavior. We conclude by asking what data representations need to be extracted at training time to enable trustworthy machine learning.
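
One simplified instance of tracing predictions back to training data is a nearest-neighbor conformity check: compare a test input against its k closest training examples and treat disagreement between their labels and the model's prediction as a warning sign of poisoning, an outlier, or a potential adversarial example. The Go sketch below is a toy version of that idea on raw features; the actual research operates on learned representations.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

type example struct {
	features []float64
	label    int
}

// conformity returns the fraction of the k nearest training examples
// that agree with the predicted label; low values flag predictions
// that are poorly supported by the training data.
func conformity(train []example, x []float64, predicted, k int) float64 {
	dists := make([]struct {
		d     float64
		label int
	}, len(train))
	for i, t := range train {
		var s float64
		for j := range x {
			diff := x[j] - t.features[j]
			s += diff * diff
		}
		dists[i].d = math.Sqrt(s)
		dists[i].label = t.label
	}
	sort.Slice(dists, func(a, b int) bool { return dists[a].d < dists[b].d })
	agree := 0
	for _, n := range dists[:k] {
		if n.label == predicted {
			agree++
		}
	}
	return float64(agree) / float64(k)
}

func main() {
	train := []example{
		{[]float64{0.1, 0.2}, 0}, {[]float64{0.2, 0.1}, 0},
		{[]float64{0.9, 0.8}, 1}, {[]float64{0.8, 0.9}, 1},
	}
	// A test point deep in class-1 territory but predicted as class 0
	// gets a low conformity score: a signal not to trust the prediction.
	fmt.Println(conformity(train, []float64{0.85, 0.85}, 0, 3))
}
```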

Nicolas Papernot, University of Toronto and Vector Institute

Nicolas Papernot is an Assistant Professor of Electrical and Computer Engineering at the University of Toronto and Canada CIFAR AI Chair at the Vector Institute. His research interests span the security and privacy of machine learning. Nicolas received a best paper award at ICLR 2017. He is also the co-author of CleverHans, an open-source library widely adopted in the technical community to benchmark machine learning in adversarial settings, and TF Privacy, an open-source library for training differentially private models. He serves on the program committees of several conferences including ACM CCS, IEEE S&P, and USENIX Security. He earned his Ph.D. at the Pennsylvania State University, working with Professor Patrick McDaniel and supported by a Google Ph.D. Fellowship. Upon graduating, he spent a year as a research scientist at Google Brain.

How to Build Realistic Machine Learning Systems for Security?

Monday, 5:00 pm–5:30 pm

Sadia Afroz, ICSI, Avast

Given the existence of adversarial attacks and fairness biases, one might ask whether machine learning is useful for security at all. In this talk, we will discuss how to build robust machine learning systems to defend against real-world attacks, focusing on machine learning-based malware detectors. We address the necessity of considering ROC curves where the false positive rates need to lie well below 1%. Achieving this in the presence of a polluted ground truth set, where 10–30% of data is unlabeled and 2–5% of labels are incorrect, is a true challenge. When a dynamic model is built, testing it against a repository of malware is impossible, since most malware is ephemeral and may no longer exhibit the malicious property. Finally, we discuss how to model realistic adversaries for adversarial attacks and defenses.
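
To ground the ROC discussion: each detection threshold yields one (false positive rate, true positive rate) point on the curve, and a usable malware detector must keep the first coordinate well below 1%. A minimal Go sketch of computing such an operating point from scored, labeled samples (the data here are invented):

```go
package main

import "fmt"

// operatingPoint computes the true- and false-positive rates of a
// detector at one score threshold: a single point on its ROC curve.
func operatingPoint(scores []float64, malicious []bool, threshold float64) (tpr, fpr float64) {
	var tp, fp, pos, neg float64
	for i, s := range scores {
		if malicious[i] {
			pos++
			if s >= threshold {
				tp++
			}
		} else {
			neg++
			if s >= threshold {
				fp++
			}
		}
	}
	return tp / pos, fp / neg
}

func main() {
	// Hypothetical detector scores and (possibly noisy) ground truth.
	scores := []float64{0.95, 0.90, 0.70, 0.40, 0.30, 0.10}
	labels := []bool{true, true, false, true, false, false}
	tpr, fpr := operatingPoint(scores, labels, 0.8)
	fmt.Printf("TPR=%.2f FPR=%.2f\n", tpr, fpr) // sweep thresholds to trace the curve
}
```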

Sadia Afroz, ICSI, Avast

Sadia Afroz is a research scientist at the International Computer Science Institute (ICSI) and Avast Software. Her work focuses on anti-censorship, anonymity, and adversarial learning. Her work on adversarial authorship attribution received the 2013 Privacy Enhancing Technologies (PET) Award, the best student paper award at the 2012 Privacy Enhancing Technologies Symposium (PETS), and the 2014 ACM SIGSAC dissertation award (runner-up). More about her research can be found at http://www1.icsi.berkeley.edu/~sadia/.

5:30 pm–7:00 pm

Conference Reception

The Atrium
Sponsored by Google

Tuesday, January 28, 2020

8:00 am–8:55 am

Continental Breakfast

Grand Ballroom Foyer

8:55 am–9:00 am

Opening Remarks, Day 2

Program Co-Chairs: Ben Adida, VotingWorks, and Daniela Oliveira, University of Florida

9:00 am–10:15 am

Panel

Browser Privacy: Opportunities and Tradeoffs

Tuesday, 9:00 am–10:15 am

Moderator: Dr. Lea Kissner, Humu

Panelists: Justin Schuh, Google; Tanvi Vyas, Mozilla; Yan Zhu, Brave; Eric Lawrence, Microsoft

In this lively panel, four browser privacy experts representing Brave, Edge, Firefox, and Chrome share their products' approaches toward enhancing privacy and describe their engineering efforts to avoid unintended consequences.

Justin Schuh, Google

Justin Schuh is an Engineering Director on Google Chrome Trust & Safety and has been a member of the Chrome team for more than a decade. He's been working on security, privacy, and anti-abuse for 25 years across the public and private sectors, and co-authored The Art of Software Security Assessment.

Tanvi Vyas, Mozilla

Tanvi Vyas is a Principal Engineer at Mozilla where she is advocating for a more private web for all users. In this role, she leads the vision and development of privacy features in Firefox to provide protection against online tracking coupled with a good user experience. She is known for leading Firefox’s Enhanced Tracking Protection and creating an identity segregation system in Firefox called Containers. Most recently, Tanvi took a co-chair position in the new W3C Privacy Community Group, where she will help develop privacy-focused web standards and APIs. Before immersing herself in privacy, Tanvi focused on web application security as a Firefox engineer and as a Paranoid at Yahoo.

Yan Zhu, Brave

Yan is Brave's CISO and has been doing security engineering there for the last 4 years. In the past, she has also worked on privacy-enhancing browser extensions at the Electronic Frontier Foundation and served on the W3C Technical Architecture Group.

Eric Lawrence, Microsoft

Eric Lawrence is a Program Manager on the Microsoft Edge team working on Networking and Privacy features. He’s been working on websites and web browsers since 1999 and has had previous stints on the Security teams for Internet Explorer and Google Chrome.

10:15 am–10:45 am

Break with Refreshments

Grand Ballroom Foyer

10:45 am–12:15 pm

An Alternative Lens

Session Chair: Amit Elazari, University of California, Berkeley

Data as a Social Science: Cultural Elements in Privacy and Security

Tuesday, 10:45 am–11:15 am

Annalisa Nash Fernandez, Intercultural Strategist

Privacy and security are cultural constructs. We process and interpret them differently depending on our cultural framework. As technology yields unprecedented access across borders, frameworks designed for a few markets are ultimately deployed globally. Yet we still face linguistic and cultural barriers. Explore the geo-cultural dimensions of privacy, security, and communication that frame the global data frontier, unlocking global innovation in products and on multicultural teams. Understand the cultural values associated with approaches to trust, timing, change management, and data privacy and security. This engaging and informative presentation decodes how cultural differences present themselves as challenges on multicultural teams and in cross-border business transactions, yet provide the opportunity for innovation and global excellence.

Annalisa Nash Fernandez, Intercultural Strategist

Annalisa Nash Fernandez is a specialist in world cultures, focusing on cultural elements in technology and business strategy. An experienced corporate strategic planning director who worked globally as an expatriate executive based in emerging markets, she bridges her dual background as a sociolinguist to navigate the cultural elements in digital communication, privacy, artificial intelligence, and the digital economy. Her expert quotes are featured widely, including by CIO magazine and the BBC, and her articles are published in trade journals and in leading media. Annalisa held various roles at Philip Morris International and Kraft Foods, based in São Paulo, Brazil, and investment banks, including Bankers Trust, based in New York City and Santiago, Chile. In her freelance consulting career she is a linguist for Transperfect, an intercultural strategist for multinational companies, a speaker at global conferences, and a pro bono interpreter and advocate. Annalisa holds an M.A. in language and translation from the University of Wisconsin, and a B.S. in international finance from Georgetown University.

All Security Is Good(s): Design Guidance for Economics

Tuesday, 11:15 am–11:45 am

L Jean Camp, Indiana University

Why don't people use security, protect their data, or adopt privacy-enhancing technologies? Is it that people don't care? Or people don't understand security and privacy? Is it a question of usability? Or is it a combination of all three? Individuals may rationally choose not to invest in security to benefit others, may underestimate their own risks, and may simultaneously find solutions to be unusable.

The solution to the lack of adoption of security (and the corresponding privacy paradox) depends upon the research thread one follows. For a classic economist, privacy means a less efficient market. Given that market efficiency is contingent on more information, individuals are rationally unconcerned; the value from information sharing outweighs the costs of privacy loss. Thus, the solution is to ensure that the value of the information being transacted is realized by the individual.

Economics of security is often empirical and analytical, addressing the cost of crime and amounts of business. Economics of security is also focused on incentive-aligned design where the person investing in security obtains the benefit. Earlier work addressed the conversion of economic information into goods; for example, creating markets for vulnerabilities.

In this presentation, I focus on the economic component of failures of adoption and acceptability in security. I will provide references to the research that addresses these dimensions in-depth. I will include specific examples of both successes and failures.

L Jean Camp, Indiana University

Jean Camp is a Professor at the School of Informatics and Computing at Indiana University. She joined Indiana after eight years at Harvard's Kennedy School, where her courses were also listed in Harvard Law, Harvard Business, and the Engineering Systems Division of MIT. She spent the year after earning her doctorate from Carnegie Mellon as a Senior Member of the Technical Staff at Sandia National Laboratories. She began her career as an engineer at Catawba Nuclear Station and with an MSEE from the University of North Carolina at Charlotte. Her research focuses on the intersection of human and technical trust, leveraging economic models and human-centered design to create safe, secure systems. She has authored more than two hundred publications, with peer-reviewed publications on security and privacy at every layer of the OSI model. She has alumni in the private, public, and nonprofit sectors. She is a Fellow of the Institute of Electrical and Electronics Engineers, as well as a Fellow of the American Association for the Advancement of Science.

Platform Data Privacy & Security Strategies for Public Interest Research

Tuesday, 11:45 am–12:15 pm

Steven Buccini, Aspen Institute Tech Policy Hub

Our presentation outlines several state-of-the-art technical strategies to enable data access for public interest research while complying with privacy regulations like the EU General Data Protection Regulation. Platforms often hold large-scale, high-quality datasets that researchers cannot compile on their own. While GDPR contains exemptions intended to allow platforms to share data with third-party researchers, regulatory "gray zones" that exist within the law—including the concept of data "anonymity," the role and obligation of so-called "data controllers" in public interest research, and the standards for informed consent—are hindering the sharing of substantive datasets. We examine technical strategies being considered to deal with these ambiguities while maintaining user privacy and control, discuss where these strategies are useful and where they fall short, and consider what challenges still need to be solved. Finally, we propose a set of potential industry standards, both technical and philosophical, that companies, researchers, and users around the world can employ to ensure the privacy and security of data for public interest research.

Steven Buccini, Aspen Institute Tech Policy Hub

Steven Buccini is a fellow at the Aspen Institute Tech Policy Hub, where he investigated GDPR-compliant data sharing partnerships, fought to make North Carolina's voting machines more secure, and worked to protect seniors online. Previously, Steven worked as a software engineer for several companies based in the Bay Area before moving back to his hometown to run for the North Carolina House of Representatives. He earned his bachelor's degree in Electrical Engineering and Computer Science from UC Berkeley. He has sampled every BBQ spot in San Francisco and holds very strong positions in the never-ending debate on the merits of Eastern- vs. Western-style Carolina pulled pork BBQ.

12:15 pm–1:30 pm

Lunch

The Atrium
Sponsored by Ethyca

1:30 pm–3:30 pm

Privacy Engineering

Session Chair: Heather Adkins, Google

How Anonymous Is My Anonymized Data?

Tuesday, 1:30 pm–2:00 pm

Matt Bishop, Department of Computer Science, University of California, Davis

Data anonymization focuses on hiding specific fields of records. Adversaries, however, view the records as a collection of fields and see what they can glean from the unanonymized fields that will impart information about the anonymized fields. In reality, the problem is one of relationships—which relationships can be exploited to reveal anonymized information. There is always some external information that enables the relationships to be uncovered. This talk examines the question of relationships and their role in anonymizing and deanonymizing data, and treats this as a problem of risk—can the adversaries characterize that external data and find it?
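
A toy example of such a relationship: a dataset with names removed can still be joined to an external, identified dataset on shared quasi-identifier fields, the classic combination being ZIP code, birth date, and sex. The Go sketch below uses invented data to show the join.

```go
package main

import "fmt"

// A quasi-identifier shared by both datasets: enough, in combination,
// to link records even though neither dataset names the other's
// sensitive fields.
type quasiID struct {
	zip   string
	birth string
	sex   string
}

func main() {
	// "Anonymized" medical records: names stripped, diagnosis kept.
	anonymized := map[quasiID]string{
		{"02138", "1945-07-02", "F"}: "hypertension",
		{"94110", "1980-01-15", "M"}: "diabetes",
	}
	// External, identified dataset (e.g., a public voter roll).
	voterRoll := map[quasiID]string{
		{"02138", "1945-07-02", "F"}: "Jane Doe",
	}
	// The join on the shared fields deanonymizes the record.
	for q, name := range voterRoll {
		if diagnosis, ok := anonymized[q]; ok {
			fmt.Printf("%s -> %s\n", name, diagnosis)
		}
	}
}
```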

Matt Bishop, Department of Computer Science, University of California, Davis

Matt Bishop received his Ph.D. from Purdue University in 1984 and is a Professor of Computer Science at the University of California at Davis. His main research area is the analysis of vulnerabilities in computer systems, and he works on data sanitization, elections and e-voting systems, policy, formal modeling, the insider threat, and computer and information security education. He co-led the California Top-to-Bottom Review of electronic voting systems certified in California, and also co-led the Joint Task Force that developed the ACM/IEEE/ASIS SIGSAC/IFIP WG10.8 Cybersecurity Curricular Guidelines. The second edition of his textbook, "Computer Security: Art and Science," was published in November 2018 by Addison-Wesley Professional. Among other topics, he teaches programming and computer security.

Stop Failing. Start Building for Humanity.

Tuesday, 2:00 pm–2:30 pm

Dr. Lea Kissner, Humu

We live in a world of failure and I think we're all heartily sick of it. The systems we build hurt people, both when they work as designed and when they break. Some of those failures are because of bugs, some because of design flaws, but so many of our failures are because we didn't build for the complex spectrum which constitutes actual humanity. People are not all the same. They don't have the same desires or needs or threats. We, as security and privacy professionals, are not living up to our ethical obligations when we fail to build with respect for humans. It requires more comfort with ambiguity. It requires putting aside purity. But this is how we can be truly effective.

Lea Kissner, Humu

Lea is the Chief Privacy Officer of Humu. She works to build respect for users into everything that Humu does, such as product design, privacy-enhancing infrastructure, application security, and novel research into both theoretical and practical aspects of privacy. She was previously the Global Lead of Privacy Technology at Google, working for over a decade on projects including logs anonymization, infrastructure security, privacy infrastructure, and privacy engineering. She earned a Ph.D. in computer science (with a focus on cryptography) at Carnegie Mellon University and a B.S. in electrical engineering and computer science from UC Berkeley.

Privacy at Speed: Privacy by Design for Agile Development at Uber

Tuesday, 2:30 pm–3:00 pm

Dr. Engin Bozdag, Uber

The concept of privacy by design (PbD) is more than 20 years old and a common element in both regulatory and technical discussions. While many strategies for privacy by design focus on product development with a traditional waterfall-style methodology, today's agile development process does not follow the historically clear-cut and distinct design, planning, implementation, and release phases. Many privacy risk mitigation strategies were created for the waterfall-style methodology and focus on the planning phase; the implementation phase then consists of taking the planned actions in the hope that they are enough to avoid the identified risks.

In an agile methodology, software is released in an iterative and feedback-driven fashion, which emphasizes short development cycles, continuous testing, user-centricity, and greater simplicity of design. Agile programming practices allow developers across services to continuously tweak, remove, or add new features using "build-measure-learn" feedback loops. This includes experimental features, minimum viable products, and alpha releases. While agility requires quick software development sprints, privacy analysis is usually a slow and time-consuming activity. In addition, technical privacy assessments are based on the architectural description of the system, but in agile development, there is often no grand design upfront and the documentation is limited. It might be possible to assess the privacy readiness of each feature, but when these features are combined, there is no guarantee that the service itself or the entire supply chain that underlies it fulfills all the privacy requirements. This is due to the modular, microservice-oriented architectures favored in current-day software ecosystems.

In this talk, we will demonstrate an approach to technical privacy where privacy by design is applied in a hyper-connected service environment. We will walk through some of the principles coming from GDPR, industry standards such as ISO29100 and Data Protection Authority guidelines. We will also demonstrate how those principles can be applied to a complex agile environment.

Engin Bozdag, Uber

Engin is a senior privacy architect at Uber and leads the technical privacy review process to ensure privacy is embedded into products and services as early as possible. Prior to Uber, Engin worked for health tech leader Philips and led their technical GDPR implementation program. Engin holds a Ph.D. degree in algorithmic bias and technology ethics and an M.S. in software engineering both from Delft University of Technology, the leading technical university of the Netherlands and one of the leading engineering schools in the world. Engin is a member of the ISO/PC 317 Working Group working to create a global standard on Privacy by Design. Engin is also affiliated with 4TU Centre for Ethics & Technology (the major research center in the Netherlands on technology ethics) and also a regular guest lecturer for Delft University of Technology.

The Browser Privacy Arms Race: Which Browsers Actually Protect Your Privacy?

Tuesday, 3:00 pm–3:30 pm

Andrés Arrieta, Electronic Frontier Foundation

Web browsers are finally starting to take privacy seriously. Almost every major browser has now announced a privacy initiative, but which ones are serious and which ones are snake oil? Are any of the alternative browsers like Brave or Tor Browser serious contenders? Do browser privacy protections on desktop differ from mobile? In this talk, we'll look at a high-level overview of the technical details behind the major browsers' privacy pushes, and cut through the techno-jargon to see which browsers are actually trying to protect your privacy, and which are just pretending.

Andrés Arrieta, Electronic Frontier Foundation

Andrés is Director of Consumer Privacy Engineering for the Electronic Frontier Foundation, where he oversees projects like blocking trackers online when you browse, pushing policy for better privacy, and helping in digital forensics investigations.

A telecom and electronics engineer, he previously worked for telecommunications companies and mobile operators, developing projects from the radio and core networks to IT systems. Having seen the state of privacy in the digital world from those experiences, he joined the EFF to help develop tools that address these issues and to push for better legislation that protects us all and considers marginalized communities.

3:30 pm–4:00 pm

Break with Refreshments

Grand Ballroom Foyer

4:00 pm–5:30 pm

Vulnerable Populations

Session Chair: Alex Smolen, Clever

Public Records in the Digital Age: Can They Save Lives?

Tuesday, 4:00 pm–4:30 pm

Kathryn Kosmides, Founder, CEO of Garbo.io

We want to open the door for a conversation on what exactly a public record is in today's digital age and how such records can be used to prevent crimes, with an emphasis on gender-based crimes like sexual assault, domestic violence, and sexual harassment.

Kathryn Kosmides, Founder, CEO of Garbo.io

Kathryn Kosmides is the founder and CEO of Garbo.io, a nonprofit that provides access to data that prevents domestic violence, sexual assault, and other crimes against vulnerable populations while holding systems and individuals accountable.

Eyes in Your Child's Bedroom: Exploiting Child Data Risks with Smart Toys

Tuesday, 4:30 pm–5:00 pm

Sanchari Das, Ph.D. Candidate and Information Security Engineer

The Internet of Things (IoT) is a phenomenon that has penetrated the global market in virtually all devices capable of connecting to the internet. Smart toys are one such emerging device category, offering the traditional toy experience along with internet-enabled features such as playing and interacting with one's child. Worldwide smart toy sales reached $5 billion in 2017 and are expected to exceed $15 billion by 2022. Though useful, exposure to the internet also brings exposure to risks and vulnerabilities. Due to a lack of common knowledge of IoT functionality, home IoT devices pose a serious concern for users across the world. The risks are especially concerning for parents seeking to protect their families' privacy and security.

Our research investigates smart toy vulnerabilities by performing penetration testing on toy products, presents a summary of the risks and vulnerabilities, and provides users with employable mitigation practices to secure the private spaces, data, and members of their home. A smart toy was selected as a demonstration model due to its popularity among younger audiences, its brand trust among parents, and its design decisions that make it an overpowered and under-protected target. Acting as attackers, we were able to gain root access to the device; gain access to take pictures and record videos; create 30 GB of hidden storage space; and add software for remote control of the device, or any other Android-based application for port scanning, emailing, or other network attacks. Additionally, we changed gameplay to inappropriate games intended to steal credit card data or other sensitive data through the child owner, who is told it is all a game. All attacks function without the user knowing that their device has been compromised. As a defense, we have developed a user-education threat model for home-based self-mitigation and offer actionable recommendations to the manufacturer to make the device safer through two software update options and one physical modification.

Sanchari Das, Ph.D. Candidate and Information Security Engineer

Sanchari Das is a Ph.D. Candidate in the School of Informatics, Computing, and Engineering at Indiana University Bloomington. A security track researcher, her research interests include multi-factor authentication, usable security and privacy, user experience, social media research, third party privacy, user risk perception, online harassment, risk communication, and human-computer interaction.

Currently working for American Express as an Information Security Engineer and Project Manager for the Identity and Access Management Team (Identity Services), she has also taken on the role of Global Privacy Adviser at XRSI.org. She has presented her research at several conferences, including RSA, BlackHat, Financial Cryptography, HAISA, SOUPS, and SM&S.

She has received dual Masters degrees from Jadavpur University, Kolkata, India (Computer Applications) and Indiana University Bloomington (MS in Informatics). She received her Bachelors from The Heritage Academy, Kolkata, India and was a Gold-medalist in her cohort.

She has also previously worked at Infosys Limited and HCL Technologies.

Next-Generation SecureDrop: Protecting Journalists from Malware

Tuesday, 5:00 pm–5:30 pm

Jennifer Helsby, Freedom of the Press Foundation

SecureDrop is a whistleblowing platform originally created in 2012 for journalists to accept leaked documents from anonymous sources. It is currently in use by dozens of news organizations, including NBC News, The Washington Post, and The New York Times. The goals of the project are to (1) protect the identity of sources and (2) provide a secure environment for journalists to read documents and respond to sources. This talk is about a new Qubes OS-based (Xen) workstation for journalists and other users who need to open potentially malicious documents. The threat of journalists opening malware submitted through a SecureDrop server is handled via compartmentalization, i.e., opening each potentially malicious document in a separate VM. As journalists increasingly face attacks—including those we've observed attempting to phish people through SecureDrop—this can make it significantly safer for them to work with source materials.

Jennifer Helsby, Freedom of the Press Foundation

Jennifer Helsby (@redshiftzero) has been Lead Developer of SecureDrop at Freedom of the Press Foundation (FPF) since 2017. Prior to joining FPF, she was a postdoctoral researcher at the Center for Data Science and Public Policy at the University of Chicago. Jennifer is also a co-founding member of Lucy Parsons Labs, a non-profit that focuses on police accountability and surveillance oversight.

5:30 pm–7:00 pm

Conference Reception

The Atrium
Sponsored by Netflix

Wednesday, January 29, 2020

8:00 am–8:55 am

Continental Breakfast

Grand Ballroom Foyer

8:55 am–9:00 am

Opening Remarks, Day 3

9:00 am–10:15 am

Panel

Disinformation

Wednesday, 9:00 am–10:15 am

Moderator: Andrea Limbago, Virtru

Panelists: Renee DiResta, New Knowledge and Data for Democracy; Melanie Ensign, Uber

When in Doubt: The State of Informational Trust and Integrity
It is increasingly difficult to assess information and data integrity. While troll farms garner most of the attention, there is a global proliferation of actors leveraging disinformation tactics to achieve a wide variety of objectives. This panel will analyze the current landscape of actors, tactics, and targets, dispelling many of the growing misperceptions about disinformation while exploring the implications of the real and prominent threat to trust and data integrity. Through analysis of several campaigns, we will also offer insights and tips on how to spot disinformation, and discuss how the security and privacy community can play a role in revitalizing data integrity and subsequently the political, economic, and geopolitical systems which rely on it.

10:15 am–10:45 am

Break with Refreshments

Grand Ballroom Foyer

10:45 am–12:45 pm

Governing

Session Chair: Caroline Wong, Cobalt

Trustworthy Elections

Wednesday, 10:45 am–11:15 am

Joey Dodds, Galois and Free & Fair

It is discussed in the media every day. It is the focus of congressional investigations and DEF CON villages. It is a core concern of our nation. Our democracy must be resistant to adversarial influences, both domestic and foreign.

Hundreds of millions vote on outdated computers that no cybersecurity professional trusts. Tens of millions vote with no paper ballot record. Government agencies responsible for the correctness and security of election computing systems—primarily the Election Assistance Commission (EAC) and National Institute of Standards and Technology (NIST)—are under-resourced. Elected officials and electoral officials already have their plates full with IT challenges such as database management and ransomware attacks. The costs of recertification make voting system vendors hesitant to make significant changes to their products, especially if they don't see universal demand across their customer base.

These groups understand that things can be better, but they need help.

This talk will explain in plain language (i) how we got to where we are today in elections in the USA, (ii) the aspects of the elections systems landscape that make change difficult, and (iii) practical actions we can take to break this cycle.

We will describe what we are doing in the Microsoft ElectionGuard project and in the DARPA SSITH project to help create a new generation of trustworthy election technologies.

Joey Dodds, Galois and Free & Fair

Dr. Joey Dodds is a Principal Researcher at Galois and the co-founder of Free & Fair. Joey is leading the ElectionGuard project and is one of the core experts in the world on matters of trustworthy election technologies.

Internet Infrastructure Security: A Casualty of Laissez-Faire and Multistakeholderism?

Wednesday, 11:15 am–11:45 am

Laurin B. Weissinger, Yale University

It is time to reckon with the security implications of the laissez-faire approach that has dominated Internet regulation. Since the late 1980s, this US-led, hands-off approach has facilitated unprecedented technical innovation. Competition and technological progress have driven down the price of resources like hosting and domains. While cheaper prices do benefit everyday users, near-general availability and low prices have the unintended consequence of enabling the inevitable elements of the human condition that are often kept in check by law and regulations. In short, laissez-faire governance was reasonable for infrastructures used by a small group of expert users but now comes at the cost of real harm and threats to individuals, organizations, and society at large.

In this talk, we focus on the multi-stakeholder approach to governance of Internet domain names and addresses that in part results from this laissez-faire approach. While technically open to all, meaningful participation in multi-stakeholder fora like ICANN and standard-setting bodies has always required time and money. Naturally, large vested interests like corporations will be heavily involved in, and often try to steer, governance and policymaking concerning the processes on which their operating environment and profit margins depend. Less profit-driven stakeholders, including academics and independent researchers, consumer protection agencies and advocacy organizations, as well as civil society in general, have fewer resources and are thus less able to have their interests represented or to exert an equivalent impact on policy. Recently, tensions among key actors have risen, along with familiar but escalating criticism by both insiders and outsiders regarding the imbalanced representation of stakeholders, volunteer burnout, slow progress, high cost, and unscalable results of policy development.

Due to the technically open but heavily stratified nature of internet governance, goals like public security and safety have often been neglected, and their proponents struggle to tackle these issues through existing policy avenues. Furthermore, independent researchers or public interest bodies have difficulties when trying to comprehensively study end-user security, or the relationships between policy, organizational arrangements, pricing, costs, and abuse.

In the short term, we must recognize that the current lack of data and access undermines our understanding of the status quo, and thus inhibits possible preparations for a more secure "cyberfuture." In the medium term, we argue that these fora will have to be reorganized to provide a stronger voice to consumer protection interests, and the independent experts and researchers that support them. In the long term, we need the regulatory function—or at least some form of oversight—to be (financially) independent from the industry it regulates.

Laurin B. Weissinger, Yale University

Laurin Weissinger is a Lecturer in Law and the Cybersecurity Fellow at Yale Law School. He works on the problem of trust assurance in cybersecurity, covering both technical and socio-political questions, as well as cooperation in international and organizational cybersecurity. Laurin received his D.Phil. from the University of Oxford in 2018 and has over 15 years of work experience in IT. Much of his recent work focuses on policy questions related to internet security. He serves as a vice chair on ICANN's second Security, Stability, and Resiliency of the Domain Name System (SSR2) Review Team.

The State of the Stalkerware

Wednesday, 11:45 am–12:15 pm

Eva Galperin, Electronic Frontier Foundation

Last spring, EFF's Eva Galperin challenged anti-virus companies to take action against stalkerware, a class of software that can be covertly installed on a device and used to spy on communications. Marketing of this software is often aimed at abusive spouses, who use it as a tool to harass, control, and spy on their victims. This talk will cover the latest developments in combating stalkerware, including efforts by security companies, tech giants, academics, and the people who are doing the difficult work of supporting victims of domestic abuse as they try to escape their abusers.

Cybercrime: Getting beyond Analog Cops and Digital Robbers

Wednesday, 12:15 pm–12:45 pm

Mieke Eoyang

Mieke Eoyang

As the Vice President for Third Way's National Security Program, Mieke Eoyang is committed to closing the credibility gap between Democrats and Republicans on security issues and crafting a national security strategy that is both tough and smart. She works on every major national security issue—from the details of military personnel policy to electronic surveillance laws—while still making time to mentor the next generation of women in national security. It's a lot to manage, but Mieke thrives on chaos—and on connecting people and ideas.

Mieke had a long career on Capitol Hill, most recently serving as Chief of Staff to Representative Anna Eshoo (D-CA). Prior to that, she was the Defense Policy Advisor to Senator Kennedy, the Subcommittee Staff Director on the House Permanent Select Committee on Intelligence, and a Professional Staff Member on the House Armed Services Committee. Mieke began her career as a legislative assistant in the office of Representative Pat Schroeder (D-CO), where she handled the congresswoman's Armed Services and Foreign Policy work.

Originally from Monterey, California, Mieke earned her J.D. at the University of California, Hastings College of the Law and graduated from Wellesley College. She frequently appears on MSNBC, and her analysis is often solicited by The Wall Street Journal, POLITICO, Associated Press, and other media outlets. Her writing has appeared in numerous media outlets, including The Washington Post, Roll Call, and Forbes.

12:45 pm–2:00 pm

Lunch

The Atrium

2:00 pm–3:30 pm

Preparing and Responding

Session Chair: Joe Calandrino, Federal Trade Commission

Adventures with Cybercrime Toolkits: Insights for Pragmatic Defense

Wednesday, 2:00 pm–2:30 pm

Birhanu Eshete, University of Michigan, Dearborn

When it comes to improving the state of defense in the cybercrime arms race, the all-too-common advice is to be more proactive than reactive. However, close examination of the modus operandi of cybercriminals reveals a great deal of pragmatism and adaptability to defensive moves. Among other blindspots, the exploitable opportunities pursued by cybercriminals typically stem from flaws in the design, implementation, configuration, and deployment of systems. In essence, cybercriminals monetize these blindspots to tilt the arms race in their favor.

Using a multi-faceted analysis of pre-packaged cybercrime tools called exploit kits, this talk argues and illustrates that defenders should be equally pragmatic and adaptive, turning the weakest links of cybercriminals into concrete opportunities to counter cybercrime. We use the exploit kit phenomenon to highlight how defenders could combine reactive, proactive, and offensive strategies toward pragmatic defense.

On the reactive front, we describe how seemingly simple yet identifying configuration and deployment artifacts are used to identify active exploit kits in the wild. On the offensive side, we illustrate how access to exploit kit source code is leveraged toward automated infiltration and legally authorized takedown of live exploit kits. On the proactive front, we highlight how lessons learned from the reactive and offensive strategies are combined toward real-time threat detection. The talk leaves the audience with key takeaways on pragmatic defense strategies in the face of an adaptive cybercriminal with motives and means.
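
To give a flavor of the reactive strategy, identifying a live kit can be as simple as probing a suspect server for deployment artifacts that a kit family is known to leave exposed. In the Go sketch below, the paths and the host are invented placeholders, not real indicators; actual fingerprints come from analyzing kit source code and live deployments.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Hypothetical deployment artifacts for an invented kit family;
// real fingerprints are derived from leaked kit source code.
var fingerprints = []string{"/panel/login.php", "/stats/config.dat"}

// looksLikeKit probes a suspect host for known configuration and
// deployment artifacts left reachable by careless operators.
func looksLikeKit(base string) bool {
	client := &http.Client{Timeout: 5 * time.Second}
	hits := 0
	for _, path := range fingerprints {
		resp, err := client.Get(base + path)
		if err != nil {
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			hits++
		}
	}
	return hits == len(fingerprints) // require all artifacts to match
}

func main() {
	fmt.Println(looksLikeKit("http://suspect.example")) // placeholder host
}
```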

Birhanu Eshete, University of Michigan, Dearborn

Birhanu Eshete is an Assistant Professor of Computer Science at the University of Michigan, Dearborn, where he leads the Data-Driven Security and Privacy Lab. Prior to that, he was a Postdoctoral Researcher in the Systems and Internet Security Lab at the University of Illinois at Chicago. His research focuses on cybercrime analysis, cyber threat intelligence, and adversarial machine learning. His work on automated exploit generation received the distinguished paper award at the 2018 USENIX Security Symposium. The same work was one of the finalists in the 2018 NYU Applied Research Competition across the United States and Canada. Birhanu holds a Ph.D. degree in Computer Science from the University of Trento, and M.S. and B.S. in Computer Science from Addis Ababa University.

Reservist Model: Distributed Approach to Scaling Incident Response

Wednesday, 2:30 pm–3:00 pm

Swathi Joshi, Netflix

Scaling incident response is inherently hard. Incidents happen in waves and have sporadic surges. In 2018, we witnessed this first hand with a "December to Remember," where on average each responder had to deal with multiple incidents across different time zones. In an ideal world you have a large Incident Response team on standby, but hiring enough to match the occasional surge is expensive and impractical. How do you manage the demand without adding a massive headcount?

In this talk, I will describe how we have approached this problem at Netflix: a complex environment with a small incident response team and growing needs. I will delve into how we created the Reservist Program, a pool of auxiliary Crisis Managers that supplement our security incident response function. At the end of the talk, the audience will be equipped to build their own program with simple steps.

Swathi Joshi, Netflix

Swathi Joshi leads Netflix's Detection and Response team which focuses on managing the inevitable security incidents that arise and building detection pipelines to minimize risk to Netflix. Prior to Netflix, she was an Engagement Manager and Escalations Manager at Mandiant/FireEye, helping companies defend against Advanced Persistent Threats (APT). Swathi was born in Mangalore, India. She received her Master's degree in Information Security and Assurance from George Mason University and sits on the board of https://sdie.org.

The Abuse Uncertainty Principle, and Other Lessons Learned from Measuring Abuse on the Internet

Wednesday, 3:00 pm–3:30 pm

David Freeman, Facebook

Fighting spam, phishing, and other forms of abuse on the internet is typically seen as a detection problem: find signals that will identify the bad guys and then use these signals to block them. In this talk, I argue that the most difficult part of fighting abuse is not detecting and blocking the bad guys—it's figuring out whether they're there in the first place. What's the "background level" of spam and fake accounts? How can we figure out what our detection systems are missing? Which abuse problem is the most important one to work on right now?

In this talk, I will show how good measurement of abuse unlocks both prioritization of work and analysis of impact. I will present several approaches that Facebook's integrity teams have used to measure and prioritize their problems, including user reports, human labeling, and automated labeling, and offer scenarios in which each of these should and shouldn't be used.
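
One of those measurement approaches, human labeling of a uniformly random sample, turns into a prevalence estimate with standard binomial statistics. The Go sketch below computes an estimated abuse rate and an approximate 95% confidence interval from invented sample numbers.

```go
package main

import (
	"fmt"
	"math"
)

// prevalence estimates the background rate of abuse from a uniformly
// random sample of content that human reviewers labeled, along with
// an approximate 95% confidence interval (normal approximation).
func prevalence(labeledBad, sampleSize int) (p, lo, hi float64) {
	p = float64(labeledBad) / float64(sampleSize)
	margin := 1.96 * math.Sqrt(p*(1-p)/float64(sampleSize))
	return p, math.Max(0, p-margin), math.Min(1, p+margin)
}

func main() {
	// Invented example: reviewers flagged 37 of 5,000 sampled items.
	p, lo, hi := prevalence(37, 5000)
	fmt.Printf("estimated abuse rate: %.2f%% (95%% CI %.2f%%-%.2f%%)\n",
		p*100, lo*100, hi*100)
}
```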

I will also introduce the "Abuse Uncertainty Principle" which says that you can use a metric for measurement or detection, but not both. The Uncertainty Principle implies that measurement is never a finished project, but I will offer strategies for ensuring that your metrics are good enough to inform key decisions. Armed with these tools, you can go back to your product and find out how much abuse it's attracting, how good you are at stopping it, and where you need to invest next.

David Freeman, Facebook

David Freeman is a research scientist/engineer at Facebook working on integrity problems, with a particular focus on fake engagement, scraping, and automation detection. He previously led anti-abuse engineering and data science teams at LinkedIn. He is an author, presenter, and organizer at international conferences on machine learning and security, such as Enigma, NDSS, WWW, and AISec, and has written (with Clarence Chio) a book on Machine Learning and Security published by O'Reilly. He holds a Ph.D. in mathematics from UC Berkeley and did postdoctoral research in cryptography and security at CWI and Stanford University.

3:30 pm

Closing Remarks

Program Co-Chairs: Ben Adida, VotingWorks, and Daniela Oliveira, University of Florida