Enigma 2022 Conference Program

All the times listed below are in Pacific Standard Time (PST).

Tuesday, February 1, 2022

7:30 am–9:00 am

Continental Breakfast

9:00 am–9:15 am

Opening Remarks, Day 1

Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter

9:15 am–10:45 am

Disinformation

Session Chair: Kate McKinley, Woven Planet

#Protect2020: An After Action Report

Tuesday, 9:15 am–9:45 am

Chris Krebs, Founding Partner, KS Group

Chris Krebs, Founding Partner, KS Group

Chris Krebs served as the first Director of the U.S. Cybersecurity and Infrastructure Security Agency. With a long career as a cyber-policy expert in the private and public sector, Chris builds coalitions to address today's and tomorrow's challenges.

Around the World in 500 Days of Pandemic Misinformation

Tuesday, 9:45 am–10:15 am

Patrick Gage Kelley, Google

The Covid-19 pandemic has given us a unique opportunity to investigate how misinformation narratives spread and evolve around the world. Throughout 2020 and 2021, we conducted regular surveys of over 50,000 people from a dozen countries about their self-reported exposure to pandemic-related misinformation and their belief in those narratives. This large-scale, longitudinal measurement provides a unique lens to understand how misinformation narratives resonate throughout the world, how belief in these narratives evolves over time, and how misinformation ultimately affects personal health decisions such as vaccination. In this talk, we will share the key insights gleaned throughout this study that in turn help inform efforts to fight multiple types of misinformation.

Patrick Gage Kelley, Google

Patrick Gage Kelley is a Trust & Safety researcher at Google focusing on questions of security, privacy, and anti-abuse. He has worked on projects on the use and design of standardized, user-friendly privacy displays, passwords, location-sharing, mobile apps, encryption, and technology ethics. Patrick’s work on redesigning privacy policies in the style of nutrition labels was included in the 2009 Annual Privacy Papers for Policymakers event on Capitol Hill. Most recently, Apple and Google revived this work with their App Privacy Labels. Previously, he was a professor of Computer Science at the University of New Mexico and faculty at the UNM ARTSLab and received his Ph.D. from Carnegie Mellon University working with the Mobile Commerce Lab and the CyLab Usable Privacy and Security (CUPS) Lab. He was an early researcher at Wombat Security Technologies, now a part of Proofpoint, and has also been at NYU, Intel Labs, and the National Security Agency.

Can the Fight against Disinformation Really Scale?

Tuesday, 10:15 am–10:45 am

Gillian "Gus" Andrews, Theorem Media and Front Line Defenders

The past few years have seen a surge of interest and funding in fighting disinformation. Rumors and conspiracy theories have disrupted democratic processes from Brazil to India to the halls of Congress in the United States; they have hobbled the fight against COVID. Many proposed solutions hinge either on "fact-checking" or on using AI to identify and defuse disinformation on a large scale.

We can try to scale the fight against disinformation with machine learning. But what is it that we are trying to scale? Are we certain that hearts and minds can meaningfully be changed at scale? What would that effort look like?

This talk will challenge a key assumption currently made in fighting disinformation: that "trustworthiness" is a property of information, not of the people who spread it, and that trust is a human quality that can be generated at scale. Dr. Andrews will lay out findings from science and technology studies, neurocognitive development, and "new literacies" research to point to best practices and new approaches to the disinformation problem.

Gillian "Gus" Andrews, Theorem Media and Front Line Defenders

Dr. Gillian "Gus" Andrews is a public educator, writer, and researcher whose work focuses on information literacy, security, and privacy. She is known on the cybersecurity speaking circuit for posing thought-provoking questions about the human side of technology. Dr. Andrews’s policy research on cybersecurity has informed work at the US State Department and the Electronic Frontier Foundation, and she has been a regularly invited speaker to cadets and staff at the Army Cyber Institute at West Point. Her book, Keep Calm and Log On (MIT Press), offers practical everyday advice for protecting privacy and security online, as well as avoiding disinformation.

10:45 am–11:15 am

Break with Refreshments

11:15 am–12:45 pm

Humans Are Hard

Session Chair: Antonela Debiasi, The Tor Project

Thinking Slow: Exposing Influence as a Hallmark of Cyber Social Engineering and Human-Targeted Deception

Tuesday, 11:15 am–11:45 am

Mirela Silva, University of Florida

Use of influence tactics (persuasion, emotional appeals, gain/loss framing) is key in many human interactions, including advertisements, written requests, and news articles. However, these tactics have also been used and abused for cyber social engineering and human-targeted attacks, such as phishing, disinformation, and deceptive ads. In this emerging deceptive and abusive online ecosystem, important research questions emerge: Does deceptive material online leverage influence disproportionately, compared to innocuous, neutral texts? Can machine learning methods accurately expose the influence in text as part of user interventions that prevent users from being deceived by triggering their more analytical thinking mode? In this talk, I present my research on Lumen (a learning-based framework that exposes influence cues in texts) and Potentiam (a newly developed dataset of 3K texts comprising disinformation, phishing, hyperpartisan news, and mainstream news). Potentiam was labeled by multiple annotators following a carefully designed qualitative methodology. Evaluation of Lumen in comparison to other learning models showed that Lumen and LSTM presented the best F1-micro scores, but Lumen yielded better interpretability. Our results highlight the promise of ML to expose influence cues in text, towards the goal of application in automatic labeling tools to improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.

Mirela Silva, University of Florida

Mirela Silva is an NSF SFS scholar and a PhD candidate at the University of Florida under the tutelage of Dr. Daniela Oliveira. Her research interests include interdisciplinary computer privacy, focused heavily on the intersection of cyber deception and cyber abuse through the lens of phishing susceptibility, disinformation, online advertising, and marginalized communities. Over the course of her doctoral studies, she has helped organize behavioral experimental studies and analyzed the nuances of the human-centric role of both phishing and disinformation.

Burnout and PCSD: Placing Teams at Risk

Tuesday, 11:45 am–12:15 pm

Chloé Messdaghi, Cybersecurity Disruption Consultant and Researcher

The pandemic has changed and transformed us in ways we are still trying to discover. The effects have caused incredible burnout among colleagues and strained personal relationships, and have in many ways impacted managers, teams, and company structures and policies. It is not just burnout: another, deeper issue is becoming prevalent, Post-COVID Stress Disorder (PCSD). As an industry, we need to be aware of the seriousness of burnout and recognize the role we play in mental health. This talk discusses burnout, what it means for security and the well-being of companies, and solutions to support one another as we proceed into a new post-pandemic era.

Chloé Messdaghi, Cybersecurity Disruption Consultant and Researcher

Chloé Messdaghi is a changemaker who focuses on innovating the tech and information security sectors to meet today's and tomorrow's demands. For over 10 years, she has provided impactful solutions that empower organizations, products, and people to stand out from the crowd. Her work has earned her many distinctions, including being listed as one of Business Insider's 50 Power Players of Cybersecurity. Chloé is a trusted source for national and sector reporters and editors, and her research, op-eds, and commentary have been featured in numerous outlets, from Forbes and Business Insider to Bloomberg and TechRepublic. She is a seasoned public speaker at major conferences, conventions, forums, and corporate events organized by industry associations and Fortune 500 companies. She serves or has served on several advisory groups, boards of directors, and nonprofit boards of trustees. Outside of her work, she continues to roll up her sleeves for equity and rights as the co-founder of Hacking is NOT a Crime and We Open Tech. She also co-runs the Open Tech Pledge project to help increase the representation of marginalized persons in leadership positions. She writes the weekly Ask Chloé advice column on Security Boulevard and hosts ITSP Magazine's The Changemaking Podcast. She holds a Master of Science from The University of Edinburgh and a BA in International Relations from the University of California, Davis, as well as executive education certificates from Wharton and Cornell. Learn more: http://www.chloemessdaghi.com

Leveraging Human Factors to Stop Dangerous IoT

Tuesday, 12:15 pm–12:45 pm

Dr. Sanchari Das, University of Denver

Even the largest enterprise can be subverted with a small device quietly tunneling through the network boundaries. One way to mitigate the damage is to purchase higher-quality IoT devices, increasing security before installation. In this work, we evaluated the purchase of a few devices that appear relatively harmless but create significant risk. Any workplace may have a small crockpot show up in the break room, or an employee with a fitness tracker. These may offer access to all Bluetooth Low Energy (BLE) devices, or real-time audio surveillance. Alternative models of the same devices, without the corresponding risk, show the value of careful IoT selection. Yet an employee cannot be expected to understand the security risk of IoT devices. To address this understanding and motivation gap, we present a security-enhancing interaction that provides an effective, acceptable, usable framing for non-technical people making IoT purchase decisions. The interface design nudges users to make risk-averse choices by integrating psychological factors into the presentation of the options. Participants using this purchasing interaction consistently avoided low-security, high-risk IoT products, even paying more than twice as much ($17.95 versus $6.99) to select a secure smart device over alternatives. We detail how the nudges were designed, and why they are effective. Specifically, our Amazon store wrapper integrated positive framing, risk communication, and the endowment effect in one interaction design. The result is a system that significantly changes human decision-making, making security the default choice. This work was a collaboration between Prof. Sanchari Das at the University of Denver and Shakthidhar Gopavaram and Prof. L. Jean Camp at Indiana University Bloomington.

Sanchari Das, University of Denver

Dr. Sanchari Das is an Assistant Professor in the Department of Computer Science in the Ritchie School of Engineering and Computer Science at the University of Denver. Her research lab, the Inclusive Security and Privacy-focused Innovative Research in Information Technology (INSPIRIT) Lab, focuses on computer security, privacy, education, human-computer interaction, social computing, accessibility, and the sustainability of new-age technologies. She received her Ph.D. from Indiana University Bloomington under the supervision of Dr. L. Jean Camp. Her dissertation focused on understanding users' risk mental models to help in secure decision-making for authentication technologies. She has also worked on projects related to social media privacy, privacy policies, the economics of security, IoT device security, electronic waste security, the security of AR/VR/MR devices, and others. Additionally, she works as a User Experience Consultant for secure technologies at Parity Technologies and has worked as a Global Privacy Adviser at XRSI.org. She completed a Master's in Security Informatics from Indiana University Bloomington, a Master's in Computer Applications from Jadavpur University, and a Bachelor's in Computer Applications from The Heritage Academy. Previously, she worked as a security and software engineer for American Express, Infosys Technologies, and HCL Technologies. Her work has been published in several top-tier academic venues, including CHI, FC, and SOUPS. She has also presented at several security conferences, including BlackHat, RSA, BSides, Enigma (2019), and others. These works have received media coverage in CNET, The Register, VentureBeat, PC Magazine, Irongeek, and other venues.

12:45 pm–2:00 pm

Lunch

2:00 pm–3:30 pm

Hate and Encryption

Session Chair: Jon Callas, The Electronic Frontier Foundation

You Can’t Always Get What You Want / But You Get What You Need: Moderating E2EE Content

Tuesday, 2:00 pm–2:30 pm

Mallory Knodel, Center for Democracy & Technology

End-to-end encryption (E2EE) is an application of cryptography in online communications systems between endpoints. E2EE systems are unique in providing features of confidentiality, integrity, and authenticity for users, yet these strong privacy and free expression guarantees create tension with legitimate needs for information controls. This talk proposes formal, feature- and requirement-based, and user-centric definitions of end-to-end encryption that in aggregate are able to confront these tensions. Any improvements to E2EE should therefore strive to maximise the system's unique properties (confidentiality, integrity, authenticity) and its security and privacy goals, while balancing user experience through enhanced usability and availability. Concrete proposals for E2EE improvements were analysed in this way, and the results will be presented. Improving mechanisms for user reporting and using existing metadata for platform abuse analysis are the most likely to preserve privacy and security guarantees for end users while also improving user experience. Both provide effective tools that can detect significant amounts of different types of problematic content on E2EE services, including abusive and harassing messages, spam, mis- and disinformation, and CSAM. Future research to improve these tools should measure efficacy for users while preserving E2EE systems' unique guarantees.

Mallory Knodel, Center for Democracy & Technology

Mallory Knodel is the CTO at the Center for Democracy & Technology in Washington, DC. She is the co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force, co-chair of the Stay Home Meet Only Online working group of the IETF and an advisor to the Freedom Online Coalition. Mallory takes a human rights, people-centred approach to technology implementation and cybersecurity policy advocacy. Originally from the US, she has worked with grassroots organisations around the world. She has used free software throughout her professional career and considers herself a public interest technologist. She holds a B.S. in Physics and Mathematics and an M.A. in Science Education.

Content-Oblivious Trust and Safety Techniques: Results from a Survey of Online Service Providers

Tuesday, 2:30 pm–3:00 pm

Riana Pfefferkorn, Stanford Internet Observatory

In pressuring online service providers to better police harmful content on their services, regulators tend to focus on trust and safety techniques, such as automated systems for scanning or filtering content on a service, that depend on the provider's capability to access the contents of users' files and communications at will. I call these techniques content-dependent. The focus on content analysis overlooks the prevalence and utility of what I call content-oblivious techniques: ones that do not rely on guaranteed at-will access to content, such as metadata-based tools and users' reports flagging abuse which the provider did not (or could not) detect on its own.

This talk presents the results of a survey about the trust and safety techniques employed by a group of online service providers that collectively serve billions of users. The survey finds that abuse-reporting features are used by more providers than other techniques such as metadata-based abuse detection or automated systems for scanning content, but that the providers' abuse-reporting tools do not consistently cover the various types of abuse that users may encounter on their services, a gap I recommend they rectify. Finally, despite strong consensus among participating providers that automated content scanning is the most useful means of detecting child sex abuse imagery, they do not consider it to be nearly so useful for other kinds of abuse.

These results indicate that content-dependent techniques are not a silver bullet against abuse. They also indicate that the marginal impact on providers' anti-abuse efforts of end-to-end encryption (which, controversially, stymies providers' ability to access user content at will) can be expected to vary by abuse type. These findings have implications for policy debates over the regulation of online service providers' anti-abuse obligations and their use of end-to-end encryption.

Riana Pfefferkorn, Stanford Internet Observatory

Riana Pfefferkorn (she/her) is a Research Scholar at the Stanford Internet Observatory. She investigates governments' policies and practices for forcing decryption and/or influencing the security design of online platforms and services, devices, and products, both via technical means and through the courts and legislatures. A recovering lawyer, Riana also studies novel forms of electronic surveillance and data access by U.S. law enforcement and their impact on civil liberties.

Rethinking "Security" in an Era of Online Hate and Harassment

Tuesday, 3:00 pm–3:30 pm

Kurt Thomas, Google

While most security and anti-abuse protections narrowly focus on for-profit cybercrime today, we show how hate and harassment have grown and transformed the day-to-day threats experienced by Internet users. We provide a breakdown of the different classes of threats (such as coordinated mobs posting toxic content, anonymous peers breaking into a target's account to leak personal photos, or intimate partner violence involving tracking and surveillance) and map these to traditional security or anti-abuse principles where existing solutions might help. We also provide prevalence estimates for each class of attack based on survey results from 22 countries and 50,000 participants. We find over 48% of people have experienced hate and harassment online, with a higher incidence rate among young people (18–24), LGBTQ+ individuals, and active social media users. We also highlight current gaps in protections, such as toxic comment classification, where differing personal interpretations of what constitutes hate and harassment result in uneven protections across users, especially at-risk populations. Our goal with this talk is to raise awareness of the changing abuse landscape online and to highlight the vital role that security practitioners and engineers can play in addressing these threats.

Kurt Thomas, Google

Kurt Thomas is a research scientist working at Google on the Security and Anti-Abuse Research team. His recent work focuses on mitigating online hate and harassment, personalizing security to individual users, automatically preventing account hijacking, and leveraging black market threat intelligence. His research has been covered in the New York Times, Wall Street Journal, WIRED, Bloomberg, and CNN. His work has been recognized by the IRTF Applied Networking Research Prize, Facebook Internet Defense Prize, and multiple Distinguished Paper Awards from IEEE Security & Privacy, USENIX Security, and the ACM CHI Conference on Human Factors in Computing Systems. Kurt completed his PhD in computer science at UC Berkeley in 2013.

3:30 pm–4:00 pm

Break with Refreshments

4:00 pm–5:00 pm

Make Attacks Hard

Session Chair: Swathi Joshi, Oracle

Detection Is Not Enough: Attack Recovery for Safe and Robust Autonomous Robotic Vehicles

Tuesday, 4:00 pm–4:30 pm

Pritam Dash, University of British Columbia

Autonomous Robotic Vehicles (RVs) such as drones and rovers rely extensively on sensor measurements to perceive their physical states and the environment. For example, GPS provides geographic position information, a gyroscope measures angular velocities, and an accelerometer measures linear accelerations. Attacks such as sensor tampering and spoofing can feed erroneous sensor measurements through external means that may deviate RVs from their course and result in mission failures. Attacks such as GPS spoofing have been performed against military drones and marine navigation systems. Prior work in the security of autonomous RVs mainly focuses on attack detection. However, detection alone is not enough, because it does not prevent adverse consequences such as drastic deviation and/or crashes. The key question of how to respond once an attack is detected in an RV remains unanswered.

In this talk, we present two novel frameworks that provide a safe response to attacks and allow RVs to continue their mission despite the malicious intervention. The first technique uses a Feed-Forward Controller (FFC) that runs in tandem with the RV's primary controller and monitors it. When an attack is detected, the FFC takes over to recover the RV. The second technique identifies and isolates the sensor(s) under attack, preventing the corrupted measurements from affecting the actuator signals. It then uses historic states to estimate the RV's current state and ensures stable operation even under attack.

Pritam Dash, University of British Columbia

Pritam Dash is a Ph.D. student in Electrical and Computer Engineering at the University of British Columbia (UBC), Canada. Pritam's research focuses on the safety and security of autonomous systems: analyzing vulnerabilities in sensing and perception modules, control systems, and AI techniques, and mitigating them to ensure safety in autonomous systems. Pritam received his master's degree in Electrical and Computer Engineering, also from UBC. Before joining UBC, Pritam worked at IAIK, Graz University of Technology, on projects related to identity management, privacy, and end-to-end confidentiality in cloud systems.

Teaching an Old Dog New Tricks: Reusing Security Solutions in Novel Domains

Tuesday, 4:30 pm–5:00 pm

Graham Bleaney, Meta

The security industry has spent decades building up tooling and knowledge on how to detect flaws in software that lead to vulnerabilities. To detect a breadth of vulnerabilities, these tools are built to identify general patterns such as data flowing from a source to a sink. These generalized patterns also map to problems in domains as diverse as performance, compliance, privacy, and data abuse. In this talk, I'll present a series of case studies to show how Meta engineers have applied our security tools to detect and prevent implementation flaws in domains such as these.

I'll go deep on a case study showing how static taint flow analysis—a tool Meta first deployed for security purposes—helped us make sure we weren't storing or misusing user locations when we launched Instagram Threads. Then, to show that that case study was not an isolated example, I'll more quickly walk through a half dozen additional examples where tools from our Product Security team have been used to check for implementation flaws in other domains. Finally, we'll discuss the limitations of this approach, stemming from the tools themselves, differing organizational structures, and the ever-present need for defense in depth.

By the end of this talk, you should walk away brimming with ideas on new applications for your organization’s existing security tooling.

Graham Bleaney, Meta

Graham (@GrahamBleaney) is a Security Engineer at Meta. He focuses on keeping Instagram and other Python codebases secure and private through a mix of reviews, trainings, secure frameworks, and static analysis. He has previously spoken publicly about his work at PyCon 2021 and DEF CON 28.

5:00 pm–6:30 pm

Conference Reception

Sponsored by Google

Wednesday, February 2, 2022

7:30 am–9:00 am

Continental Breakfast

9:00 am–9:05 am

Opening Remarks, Day 2

Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter

9:05 am–10:35 am

Panel

Sex Work, Tech, and Surveillance

Wednesday, 9:05 am–10:35 am

Moderator: Elissa M. Redmiles, Max Planck Institute for Software Systems

Panelists: Kendra Albert, Harvard Law School; Kate D'Adamo, Reframe Health and Justice; Angela Jones, State University of New York

In this panel, four experts will discuss the influence of technology and policy on the livelihoods and wellbeing of sex workers. We will discuss the ever-changing landscape of regulation, efforts to remove sex and sex workers from the internet, and the role of digital security and privacy and of the experts who develop technologies to preserve them.

Elissa M. Redmiles, Max Planck Institute for Software Systems

Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She has additionally served as a consultant and researcher at multiple institutions, including Microsoft Research, Facebook, the World Bank, the Center for Democracy and Technology, and the University of Zurich. Dr. Redmiles uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as the New York Times, Scientific American, Rolling Stone, Wired, Business Insider, and CNET and has been recognized with multiple Distinguished Paper Awards at USENIX Security and research awards from Facebook as well as the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland.

Kendra Albert, Harvard Law School

Kendra Albert is a clinical instructor at the Cyberlaw Clinic at Harvard Law School, where they teach students to practice technology law by working with pro bono clients. Their practice areas include freedom of expression, computer security, and intellectual property law. Kendra also publishes on gender, adversarial machine learning, and power. They hold a law degree from Harvard Law School and serve on the board of the ACLU of Massachusetts and the Tor Project. They also are a legal advisor for Hacking // Hustling, a collective of sex workers, survivors, and accomplices working at the intersection of tech and social justice to interrupt state surveillance and violence facilitated by technology.

Kate D'Adamo, Reframe Health and Justice

Kate D‘Adamo is a sex worker rights advocate with a focus on economic justice, anti-policing and incarceration and public health. Previously, she was the National Policy Advocate at the Sex Workers Project and a community organizer and advocate with the Sex Workers Outreach Project and Sex Workers Action New York. Kate has held roles developing programming, developing trainings and technical assistance, providing peer-led interventions to harm, offering service provision, and advancing political advocacy to support the rights and well-being of people engaged in the sex trade, including victims of trafficking.

Angela Jones, State University of New York

Angela Jones is Professor of Sociology at Farmingdale State College, State University of New York. Jones's research interests include African American political thought and protest, race, gender, sexuality, sex work, feminist theory, and queer methodologies and theory. Jones is the author of Camming: Money, Power, and Pleasure in the Sex Industry (NYU Press, 2020) and African American Civil Rights: Early Activism and the Niagara Movement (Praeger, 2011). She is a co-editor of the three-volume After Marriage Equality book series (Routledge, 2018). Jones has also edited two other anthologies: The Modern African American Political Thought Reader: From David Walker to Barack Obama (Routledge, 2012), and A Critical Inquiry into Queer Utopias (Palgrave, 2013). Jones is the author of two forthcoming reference books: African American Activism and Political Engagement: An Encyclopedia of Empowerment and Black Lives Matter: A Reference Handbook (ABC-CLIO). She is also the author of numerous scholarly articles, which have been published in peer-reviewed journals such as Gender & Society, Signs: Journal of Women in Culture and Society, Sexualities, and Porn Studies. She also writes for public audiences and has published articles in venues such as Contexts (digital), The Conversation, the Nevada Independent, Peepshow Magazine, PopMatters, and Salon.

10:35 am–11:05 am

Break with Refreshments

11:05 am–12:05 pm

Fairness and Inclusion

Session Chair: Kendra Albert, Harvard University

Crypto for the People (part 2)

Wednesday, 11:05 am–11:35 am

Seny Kamara, Brown University

Cryptography underpins a multitude of critical security- and privacy-enhancing technologies. Recent advances in modern cryptography promise to revolutionize finance, cloud computing and data analytics. But cryptography does not affect everyone in the same way. In this talk, I will discuss how cryptography benefits some and not others and how cryptography research supports the powerful but not the disenfranchised.

Seny Kamara, Brown University

Seny Kamara is an Associate Professor of Computer Science at Brown University. Before joining Brown, he was a researcher at Microsoft Research.

His research is in cryptography and is driven by real-world problems from privacy, security and surveillance. He has worked extensively on the design and cryptanalysis of encrypted search algorithms, which are efficient algorithms to search on end-to-end encrypted data. He maintains interests in various aspects of theory and systems, including applied and theoretical cryptography, data structures and algorithms, databases, networking, game theory and technology policy.

At Brown, he co-directs the Encrypted Systems Lab and the Computing for the People project and is affiliated with the Center for Human Rights and Humanitarian Studies, the Policy Lab and the Data Science Initiative.

Broken CAPTCHAs and Fractured Equity: Privacy and Security in hCaptcha's Accessibility Workflow

Wednesday, 11:35 am–12:05 pm

Steven Presser, Independent Researcher

hCaptcha, a commercial CAPTCHA product, currently protects 12-15% of websites against automation, including the talk submission website for this conference. It presents humans a picture-based puzzle to solve and uses the results to label datasets. Therefore, it only provides a visual CAPTCHA. In order to comply with accessibility requirements, hCaptcha provides a special "accessibility workflow," which requires additional information from users. However, this workflow has two major issues: it could be used to de-anonymize users and can be fully automated.

In this talk, I will examine how such a system was created. I begin with a brief background on CAPTCHAs, an overview of relevant assistive technologies for people with disabilities, and how the two interact. Next, I will discuss the disparate user experiences between the mainstream workflow and the accessibility workflow – as well as the privacy implications of their differences. I will discuss the design factors and requirements hCaptcha used when designing the accessibility workflow and then summarize the automation attack, including my responsible disclosure of the attack. Finally, I will conclude with a discussion of hCaptcha’s future plans for a more inclusive and privacy-friendly CAPTCHA, as well as asking some larger questions about the future of the CAPTCHA. These include: Is the era of the CAPTCHA at an end? If so, do we replace them and with what? How do we ensure inclusive access without creating security gaps?

Steven Presser, Independent Researcher

A tinkerer of many things software, Steve has been writing code since his early teens. He was first drawn to security (and subsequently privacy) by watching a peer perform an SQL injection on one of his first large projects at age 14. Later, Steve received his Bachelor's in Computer Science from Johns Hopkins University and has since worked for Microsoft, Cray, and HPE. He is currently a researcher at HLRS in Stuttgart, Germany. He has also served as an expert witness and written proof-of-concept code for a brief to the U.S. Supreme Court.

12:05 pm–1:20 pm

Lunch

1:20 pm–2:50 pm

ML Is Hard

Session Chair: Amanda Walker, Nuna

Contextual Security: A Critical Shift in Performing Threat Intelligence

Wednesday, 1:20 pm–1:50 pm

Nidhi Rastogi, Rochester Institute of Technology

An automatic, contextual, and trustworthy explanation of cyberattacks is the immediate goalpost for security experts. Achieving it requires deep knowledge of the system under attack, the attack itself, and real-time data describing environmental conditions. It also requires the ability to communicate in a way that evokes experts' trust in the explanation. Automating the process of communicating contextual and trustworthy explanations of cyberattacks must also handle various attack models, which adds to the existing challenge. However, a scientific approach to explanations can yield a system that offers the desired explanations in most use cases. In this presentation, we discuss the limitations of existing machine learning-based security solutions and how contextual security solutions can address them. We share specific use cases to support our argument. We present our research on contextual security (threat intelligence using knowledge graphs) and ongoing work on explanation-based security.

Nidhi Rastogi, Rochester Institute of Technology

Dr. Nidhi Rastogi is an Assistant Professor at the Rochester Institute of Technology. Her research is at the intersection of cybersecurity, artificial intelligence, autonomous vehicles, graph analytics, and data privacy. Prior to this, she was a Research Scientist at RPI. For her contributions to cybersecurity and encouraging women in STEM, Dr. Rastogi was recognized in 2020 as an International Women in Cybersecurity honoree by the Cyber Risk Research Institute. She has been an invited speaker at the Aspen Cyber Summit, the SANS Cybersecurity Summit, and the Grace Hopper Conference, and was a FADEx laureate of the 1st French-American Program on Cyber-Physical Systems in 2016. Dr. Rastogi has co-chaired the DYNAMICS workshop since 2020 and serves as a PC member for several security conferences and workshops. She has been a board member for N2Women (2018–20) and the Lexington Education Foundation (2019–present), and was a Feature Editor for ACM XRDS Magazine (2015–17). She has worked on the security of heterogeneous wireless networks (3G, 4G, 802.1x, 802.11) and the Smart Grid through engineering and research positions at Verizon, GE Global Research, and GE Power.

Why Has Your Toaster Been through More Security Testing than the AI System Routing Your Car?

Wednesday, 1:50 pm–2:20 pm

Ram Shankar Siva Kumar, Microsoft

If you look under your toaster, you will find a sticker with the letters "UL" on it – this is a certification from "Underwriters Laboratory" promising that the toaster is relatively safe from spontaneous combustion.

Would it not be comforting to see a sticker under your smart device that it was robustly tested for security and privacy? Or a seal of approval attesting that it is robust from adversarial manipulations?

After all, if you want to know the security checks your router has passed, you can visit the manufacturer's page, look under the security tab, and get the details. Want to know how your bank's mobile app is keeping your data safe? Just google your bank's name and the word "security" and you can see detailed information on how they adhere to industry standards to safeguard your data.

So, what gives for AI systems? AI systems are deployed in some of the most critical areas including healthcare, finance, transportation, and even cybersecurity. Why don’t we have a concrete list of assurances from these AI vendors? Moreover, if AI is just software 2.0, shouldn’t all the existing standards and certifications just directly apply? Also, securing AI systems is a universal good, right?

Ram Shankar Siva Kumar, Microsoft

Ram Shankar Siva Kumar is a Data Cowboy in Azure Security at Microsoft, empowering engineers to secure machine learning systems. His work has appeared at industry conferences like RSA, BlackHat, Defcon, BlueHat, DerbyCon, MIRCon, and Infiltrate, and at academic conferences like NeurIPS, ICLR, ICML, IEEE S&P, and ACM CCS, and has been covered by Bloomberg, VentureBeat, Wired, and Geekwire. He founded the Adversarial ML Threat Matrix, an ATT&CK-style framework enumerating threats to machine learning. His work on adversarial machine learning appeared notably in the National Security Commission on Artificial Intelligence (NSCAI) Final Report presented to the United States Congress and the President. He is an affiliate at the Berkman Klein Center for Internet and Society at Harvard University and a Technical Advisory Board Member at the University of Washington. He is currently writing his book "AI's Achilles Heel" with Hyrum Anderson, enumerating security vulnerabilities in AI systems and why addressing them is the next infosec imperative.

Neither Band-Aids nor Silver Bullets: How Bug Bounties Can Help the Discovery, Disclosure, and Redress of Algorithmic Harms

Wednesday, 2:20 pm–2:50 pm

Camille Francois and Sasha Costanza-Chock, Algorithmic Justice League and Harvard Berkman-Klein Center for Internet and Society

Bug bounty programs for security vulnerabilities have received a great deal of attention in recent years, accompanied by adoption from a wide variety of organizations and a significant expansion in the numbers of participants on major platforms hosting such programs. This talk presents the conclusions of a research effort by the Algorithmic Justice League, looking at the applicability of bug bounties and related vulnerability disclosure mechanisms to the discovery, disclosure, and redress of algorithmic harms. We present a typology of design levers that characterize these different programs in the information security space, and analyze their different tradeoffs. We scrutinize a recent trend of expanding bug bounty programs to socio-technical issues, from data abuse bounties (Facebook, Google) to algorithmic biases (Rockstar Games, Twitter). Finally, we use a design justice lens to evaluate what the algorithmic harms space could borrow from these programs, and reciprocally, what traditional bug bounty programs could learn from the burgeoning algorithmic harms community.

Camille Francois, Algorithmic Justice League and Harvard Berkman-Klein Center for Internet and Society

Camille François (she/her) serves as co-lead of the Algorithmic Justice League’s Community Reporting of Algorithmic Harms (CRASH) project, alongside Joy Buolamwini and Sasha Costanza-Chock. She was previously Chief Innovation Officer at Graphika, where she built and led a team dedicated to mitigating disinformation harms across platforms. Prior to that, she served as a Principal Researcher at Google. She has advised governments and parliamentary committees on both sides of the Atlantic and investigated Russian interference in the 2016 U.S. presidential election on behalf of the U.S. Senate Select Intelligence Committee. She was distinguished by the MIT Technology Review in the "35 Innovators Under 35" annual award for her work leveraging data science to detect and analyze deceptive campaigns at scale, is an affiliate of the Harvard Berkman-Klein Center for Internet & Society and a lecturer at the Columbia University School of International and Public Affairs.

Sasha Costanza-Chock, Algorithmic Justice League and Harvard Berkman-Klein Center for Internet and Society

Sasha Costanza-Chock (they/she/elle/ella) is a researcher and designer who works to support community-led processes that build shared power, dismantle the matrix of domination, and advance ecological survival. They are a nonbinary trans* femme. Sasha is known for their work on networked social movements, transformative media organizing, and design justice. Sasha is the Director of Research & Design at the Algorithmic Justice League (ajlunited.org), a Faculty Associate with the Berkman-Klein Center for Internet & Society at Harvard University, and a member of the Steering Committee of the Design Justice Network (designjustice.org). They are the author of two books and numerous journal articles, book chapters, and other research publications. Sasha’s latest book, Design Justice: Community-Led Practices to Build the Worlds We Need, was published by the MIT Press in 2020.

2:50 pm–3:20 pm

Break with Refreshments

3:20 pm–4:50 pm

Privacy Is Hard

Session Chair: Nwokedi Idika, Google

When Machine Learning Isn’t Private

Wednesday, 3:20 pm–3:50 pm

Nicholas Carlini, Google

Current machine learning models are not private: they reveal particular details about the individual examples contained in datasets used for training. This talk studies various aspects of this privacy problem. For example, we have found that adversaries can query GPT-2 (a pretrained language model) to extract personally-identifiable information from its training set.

Preventing this leakage is difficult, and recent ad-hoc proposals are not effective. And while there exist provably-secure schemes (e.g., through differentially private gradient descent), they come at a high utility cost. We conclude with potential next steps for researchers (with problems that should be solved) and practitioners (with practical techniques to test for memorization).

Nicholas Carlini, Google

Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he has received best paper awards at ICML, USENIX Security and IEEE S&P. He obtained his PhD from the University of California, Berkeley in 2018.

Auditing Data Privacy for Machine Learning

Wednesday, 3:50 pm–4:20 pm

Reza Shokri, National University of Singapore

Large machine learning models (e.g., deep language models) memorize a significant amount of information about the individual data records in their training set. Recent inference attacks against machine learning algorithms demonstrate how an adversary can extract sensitive information about a model's training data by having access to its parameters or predictions. Specifically, these algorithms reflect the identification risk of models by detecting the presence of data records in the training set of a model (hence called membership inference attacks). These attacks can measure the information leakage of models about the data in their training set, and thus can be used to audit the privacy risks of machine learning algorithms. Based on the results of these attacks on many real-world systems and datasets (e.g., Google and Amazon ML-as-a-service platforms, federated learning algorithms, and models trained on sensitive datasets such as text, medical, location, purchase history, and image data), we conclude that large models pose a significant risk to the data privacy of individuals and need to be considered as a type of personal data. We therefore need carefully designed methodologies and tools to audit data privacy risk in machine learning across a wide range of applications.

Guidance released by the European Commission and the White House calls for the protection of personal data during all phases of deploying AI systems and for building systems that are resistant to attacks. Recent reports published by the Information Commissioner's Office (ICO) on auditing AI and by the National Institute of Standards and Technology (NIST) on securing applications of artificial intelligence also highlight the privacy risk to data from machine learning models, and they specifically mention membership inference as a confidentiality violation and potential threat to the training data. The ICO's auditing framework recommends that organizations identify these threats and take measures to minimize the risk. As the ICO's investigation teams will use this framework to assess compliance with data protection laws, organizations must account for and estimate the privacy risks to data through models.

To this end, we have developed an open-source tool, named ML Privacy Meter, based on membership inference algorithms; tech companies are using similar algorithms to analyze privacy risk in machine learning. For example, ML Privacy Meter and similar tools can help in data protection impact assessments (DPIA) by providing a quantitative assessment of the privacy risk of a machine learning model. The tool can generate extensive privacy reports about the aggregate-level and individual-level risk with respect to training data records. It can estimate the amount of information that is revealed through the predictions of a model (when deployed) or its parameters (when shared). Hence, when providing query access to the model or revealing the entire model, the tool can be used to assess the potential threats to the training data.

In this talk, I will discuss what exactly privacy risk is and what it is not, the difference between privacy and confidentiality (which can easily be confused), the reasons models are vulnerable to inference attacks, the methodology for quantifying privacy risk in machine learning, and examples of how ML Privacy Meter and similar tools can enable detailed auditing of ML systems. I will show the fundamental and intuitive relation between auditing mechanisms and defense mechanisms for privacy (e.g., differential privacy).

It is very important for ML engineers, policymakers, and researchers to be aware of the risks, their implications, and the methodology for auditing the privacy risk of different types of machine learning algorithms. This can pave the way for privacy by design in machine learning.

Reza Shokri, National University of Singapore

Reza Shokri is a NUS Presidential Young Professor of Computer Science. His research focuses on data privacy and trustworthy machine learning. He is a recipient of the IEEE Security and Privacy (S&P) Test-of-Time Award 2021, for his paper on quantifying location privacy. He received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2018, for his work on analyzing the privacy risks of machine learning models. He received the NUS Early Career Research Award 2019, VMWare Early Career Faculty Award 2021, and Intel Faculty Research Award (Private AI Collaborative Research Institute) 2021, 2022. He obtained his PhD from EPFL.

I See You Blockchain User, or Not! Privacy in the Age of Blockchains

Wednesday, 4:20 pm–4:50 pm

Ghada Almashaqbeh, University of Connecticut

Cryptocurrencies and blockchains introduced an innovative computation model that paved the way for a large variety of applications. However, lack of privacy is a huge concern, especially for permissionless public blockchains. Clients do not want their financial activity to be tracked, their pseudonym addresses to be linked to their real identities, or even worse, disclose their sensitive data when processed by smart contracts. This talk will shed light on this issue, explore current solutions and technology trends, define the gaps, and then explore the road ahead towards viable privacy solutions for private computations over blockchains.

Ghada Almashaqbeh, University of Connecticut

Ghada Almashaqbeh is an assistant professor of Computer Science and Engineering at the University of Connecticut. Her research interests span cryptography, privacy, and systems security with a large focus on blockchains and their applications. Ghada received her PhD from Columbia in 2019. Before joining UConn, she spent a while exploring the world of entrepreneurship; she was a Cofounder and Research Scientist at CacheCash, and then a Cryptographer at NuCypher. Ghada is an affiliated member at the Connecticut Advanced Computing Center (CACC) and the Engineering for Human Rights Initiative at UConn.

5:00 pm–6:30 pm

Conference Reception

Sponsored by Netflix

Thursday, February 3, 2022

7:30 am–9:00 am

Continental Breakfast

9:00 am–9:05 am

Opening Remarks, Day 3

Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter

9:05 am–10:35 am

Panel

Understanding Section 230

Thursday, 9:05 am–10:35 am

Moderator: Mike Masnick, Techdirt/Copia Institute

Panelists: Cathy Gellis; Kate Klonick, St. John's Law School; Adelin Cai, Sidequest

Section 230 of the Communications Decency Act went from what seemed like an obscure internet regulation to headline news throughout the media, including commentary from presidents and presidential candidates. Despite this attention, and despite the actual simplicity of the law, it remains widely misunderstood. On this panel, experts from the legal, policy, and trust & safety worlds will explain exactly what Section 230 is, what it does, common misunderstandings, and why understanding Section 230 is important to the open internet and those who build it.

Cathy Gellis

Frustrated that people were making the law without asking for her opinion, Cathy Gellis gave up a career in web development to become a lawyer to help them not make it badly, especially regarding technology. A former aspiring journalist and longtime fan of civil liberties, her legal work includes defending the rights of Internet users and advocating for policy that protects speech and innovation. When not advising clients on platform liability, copyright, trademark, privacy, or cybersecurity she frequently writes about these subjects and more for outlets such as the Daily Beast, Law.com, and Techdirt.com, where she is a regular contributor.

Kate Klonick, St. John's Law School

Kate Klonick is an Assistant Professor at St. John's University Law School, a fellow at the Brookings Institution and Yale Law School’s Information Society Project. Her research on online speech, freedom of expression, and private governance has appeared in the Harvard Law Review, Yale Law Journal, The New Yorker, New York Times, The Atlantic, The Guardian and numerous other publications.

Adelin Cai, Founder and Principal Consultant, Sidequest

Adelin Cai has spent the last decade working with and leading teams responsible for product policies and their enforcement. As Pinterest’s former Head of Policy, she led the team that developed the company’s principles and core values around content moderation, covering a range of issues from hateful speech to medical (mis)information to dank memes. Prior to Pinterest, she ran Twitter’s Legal Ads Policy team, guiding policy and operations for Twitter’s self-serve and international advertising products. She is also a co-founder of the Trust & Safety Professional Association (TSPA) and the affiliated Trust & Safety Foundation (TSF). She currently serves as TSPA’s board chair.

10:35 am–11:05 am

Break with Refreshments

11:05 am–12:05 pm

Money Talks

Session Chair: Melanie Ensign, Discernible, Inc.

Covenants without the Sword: Market Incentives for Security Investment

Thursday, 11:05 am–11:35 am

Vaibhav Garg, Comcast Cable

Two decades of economics research has repeatedly made the assertion that organizations as well as individuals do not have adequate incentive to invest in cybersecurity. Absent security, associated costs are imposed on third parties rather than producers of insecurity. Cybersecurity is thus a private good with externalities, one that will require regulation to prevent market failure. Underlying this body of research is the assumption that all organizations have the same business drivers, a similar attack surface, and a uniformly informed consumer base. This talk questions these assumptions and outlines seven naturally occurring incentives for organizations to invest in cybersecurity. Furthermore, I provide examples of how these incentives have driven investment in cybersecurity across different sectors. While the applicability of these incentives differs both across and within sectors, any cybersecurity public policy interventions must consider the resulting nuances. Cybersecurity covenants established absent the sword of regulation may be both more effective and sustainable, as they evolve with the experience and exposure of the stakeholders.

Vaibhav Garg, Comcast Cable

Vaibhav Garg is the Sr. Director of Cybersecurity Research & Public Policy at Comcast Cable. He has a PhD in Security Informatics from Indiana University and an M.S. in Information Security from Purdue University. His research investigates the intersection of cybersecurity, economics, and public policy. He has co-authored over thirty peer-reviewed publications and received the best paper award at the 2011 eCrime Researchers Summit for his work on the economics of cybercrime. He previously served as the Editor in Chief of ACM Computers & Society, where he received the ACM SIGCAS Outstanding Service Award.

The Security Team at the Top: The Board of Directors

Thursday, 11:35 am–12:05 pm

Anthony Vance, Virginia Tech

There are many teams in security—blue teams, red teams, purple teams, etc. This talk is about the security team that few people think about but has the potential to be the most powerful and influential security team in the organization: the board of directors. Through in-depth interviews of board directors, CISOs, and senior-level consultants who advise boards on security, I illustrate challenges that CISOs face in meaningfully engaging with the board of directors. I also show how CISOs can gain strategic importance in supporting and advising the board. Finally, I describe ways that CISOs can help boards realize their potential as the most powerful security team in the organization.

Anthony Vance, Virginia Tech

Anthony Vance is a Professor and Commonwealth Cyber Initiative Fellow in the Department of Business Information Technology of the Pamplin College of Business at Virginia Tech. He earned Ph.D. degrees in Information Systems from Georgia State University, USA; the University of Paris—Dauphine, France; and the University of Oulu, Finland. Prior to his PhD studies, he worked as a cybersecurity consultant at Deloitte. His research focuses on how to help individuals and organizations improve their cybersecurity posture, particularly from behavioral, organizational, and neuroscience perspectives. His work is published in outlets such as MIS Quarterly, Information Systems Research, the Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), the Workshop on the Economics of Information Security (WEIS), the Symposium on Usable Privacy and Security (SOUPS), and other outlets. He is currently a senior editor at MIS Quarterly.

12:05 pm–12:50 pm

Fireside Chat

Fireside Chat with Sheera Frenkel

Thursday, 12:05 pm–12:50 pm

Moderator: Bob Lord
Speaker: Sheera Frenkel, New York Times

In this fireside chat, Bob Lord will interview Sheera Frenkel, cybersecurity reporter for the New York Times and author. Her latest book is "An Ugly Truth: Inside Facebook's Battle for Domination." Sheera and Bob will cover topics including cybersecurity, disinformation, hate speech, and journalism.

Bob Lord

Bob Lord most recently served as the first Chief Security Officer at the Democratic National Committee. In that role he worked to secure the Committee, as well as to help state parties and campaigns with their security programs. Previous roles include CISO at Yahoo and CISO in Residence at Rapid7; before that, he headed up Twitter's information security program as its first security hire. You can see some of his hobbies at https://www.ilord.com.

Sheera Frenkel, New York Times

Sheera Frenkel is an award-winning technology reporter based in San Francisco. In 2018, she was part of a team of New York Times reporters who were finalists for the Pulitzer Prize, and in 2019 she won a Loeb Award for her reporting on Facebook. In 2021, she and Cecilia Kang published "An Ugly Truth: Inside Facebook's Battle for Domination." The book became a New York Times and international bestseller.

Previously, she spent over a decade in the Middle East as a foreign correspondent, reporting for BuzzFeed, NPR, The Times of London and McClatchy Newspapers.

12:50 pm–2:05 pm

Lunch

2:05 pm–3:35 pm

Following the Rules

Session Chair: Wendy Seltzer, W3C

Healthcare Ecosystem: Security's Role in Helping HealthTech Find Its Way

Thursday, 2:05 pm–2:35 pm

Joy Forsythe, Alto Pharmacy

As the news fills up with ransomware attacks on health systems, the HealthTech startup space is booming. Why can’t our healthcare be as modern and friendly as ordering a pair of shoes or getting dinner reservations?

I made the leap from building enterprise security products to HealthTech startups 5 years ago with idealism about how technology could fix things, and I'm here to share my "lessons learned." Like many other "disruptors," HealthTech companies are discovering that a lot of difficult security choices were made for a reason, sometimes because the alternative is life-threatening, or that those choices will take a long time to change.

Most importantly, healthcare is its own ecosystem that we have to understand before we can reason about it. Once I started to understand who the different entities were and how new startups fit into that system, I started to identify places where security can innovate and do better.

Joy Forsythe, Alto Pharmacy

Joy Marie Forsythe spent the past 5 years building security programs at Alto Pharmacy and Mango Health, focusing on how to be respectful of patients' security and privacy. Prior to that, she worked on software security, security monitoring, and building enterprise security tools at Fortify Software, HP, and ArcSight.

The Global Privacy Control: Exercising Legal Rights at Scale

Thursday, 2:35 pm–3:05 pm

Justin Brookman, Consumer Reports

New privacy laws around the world give consumers the right to stop unwanted processing of their personal information. But how can we be expected to tell thousands of different companies that we don't want our data sold? "Do Not Track" was an early effort to give consumers scalable privacy rights, but the effort foundered without the weight of the law behind it. Now, new privacy laws are both creating new rights and letting consumers delegate to others the ability to exercise those rights. Universal signals and global settings may now legally bind companies and expose them to liability for ignoring user preferences. The Global Privacy Control is one effort to allow consumers to transmit to every website they visit a request not to have their data shared with others. This and similar efforts may illuminate how to make privacy rights practically workable in the future.

Justin Brookman, Consumer Reports

Justin Brookman is the Director of Consumer Privacy and Technology Policy for Consumer Reports. Justin is responsible for helping the organization continue its groundbreaking work to shape the digital marketplace in a way that empowers consumers and puts their data privacy and security needs first. This work includes using CR research to identify critical gaps in consumer privacy, data security, and technology law and policy. Justin also builds strategies to expand the use and influence of the Digital Standard, developed by CR and partner organizations to evaluate the privacy and security of products and services.

Prior to joining CR, Brookman was Policy Director of the Federal Trade Commission’s Office of Technology Research and Investigation. At the FTC, Brookman conducted and published original research on consumer protection concerns raised by emerging technologies such as cross-device tracking, smartphone security, and the internet of things. He also helped to initiate and investigate enforcement actions against deceptive or unfair practices, including actions against online data brokers and digital tracking companies.

He previously served as Director of Consumer Privacy at the Center for Democracy & Technology, a digital rights nonprofit, where he coordinated the organization’s advocacy for stronger protections for personal information in the U.S. and Europe.

An Open-Source Taxonomy for Ex-ante Privacy

Thursday, 3:05 pm–3:35 pm

Cillian Kieran, Ethyca

Most current approaches to enterprise data privacy suffer from the ex-post nature of their application. Applications purporting to orchestrate crucial privacy tasks like access control, rights fulfillment, or risk assessment get bolted on to pre-existing systems and must dynamically respond to an underlying web of data flows that is poorly described, ever-evolving, and complex. It's a Sisyphean challenge that afflicts some of the most sophisticated technology enterprises operating today, to say nothing of non-digitally native legacy enterprises.

In this presentation, Cillian Kieran, Founder and CEO of Ethyca, will argue that the only way to meaningfully solve this important problem is to apply privacy protections at the start of the software delivery lifecycle rather than at the finish, and will propose one approach for doing so.

He'll demonstrate the benefits of ex-ante privacy by walking through a set of annotation and risk evaluation tools built on top of an open-source privacy taxonomy derived from the ISO/IEC 27701 standards. Cillian's presentation will show how an engineer can annotate projects, evaluate privacy risks in CI pipelines, and enable privacy rights to be enacted on data stored in annotated systems.

This will be the first public walkthrough of an open-source project that has been years in development and has received interest from data engineering teams at some of the world's largest companies.

Cillian Kieran, CEO, Ethyca

Cillian Kieran is the CEO and founder of privacy tech company Ethyca. A background in software engineering and two decades spent leading large-scale data programs for Heineken, Sony, Dell, and Pepsi convinced him there was a better way to build trust deeper into large, distributed systems. Now, Ethyca powers privacy for tech companies, including Away, Slice, Codecademy, Invision, Hopin, Casper, and more.

3:35 pm–3:50 pm

Closing Remarks

Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter