All the times listed below are in Pacific Standard Time (PST).
9:00 am–9:15 am
Opening Remarks, Day 1
Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter
9:15 am–10:45 am
Disinformation
Session Chair: Kate McKinley, Woven Planet
#Protect2020: An After Action Report
Chris Krebs, Founding Partner, KS Group
Chris Krebs served as the first Director of the U.S. Cybersecurity and Infrastructure Security Agency. With a long career as a cyber-policy expert in the private and public sector, Chris builds coalitions to address today's and tomorrow's challenges.
Around the World in 500 Days of Pandemic Misinformation
Patrick Gage Kelley, Google
The COVID-19 pandemic has given us a unique opportunity to investigate how misinformation narratives spread and evolve around the world. Throughout 2020 and 2021, we conducted regular surveys of over 50,000 people from a dozen countries about their self-reported exposure to pandemic-related misinformation and their belief in those narratives. This large-scale, longitudinal measurement provides a unique lens to understand how misinformation narratives resonate throughout the world, how belief in these narratives evolves over time, and how misinformation ultimately affects personal health decisions such as vaccination. In this talk, we will share the key insights gleaned throughout this study that in turn help inform efforts to fight multiple types of misinformation.
Patrick Gage Kelley, Google
Can the Fight against Disinformation Really Scale?
Gillian "Gus" Andrews, Theorem Media and Front Line Defenders
The past few years have seen a surge of interest and funding in fighting disinformation. Rumors and conspiracy theories have disrupted democratic processes from Brazil to India to the halls of Congress in the United States; they have hobbled the fight against COVID. Many proposed solutions hinge either on "fact-checking" or on using AI to identify and defuse disinformation on a large scale.
We can try to scale the fight against disinformation with machine learning. But what is it that we are trying to scale? Are we certain that hearts and minds can meaningfully be changed at scale? What would that effort look like?
This talk will challenge a key assumption currently made in fighting disinformation: that "trustworthiness" is a property of information, not of the people who spread it, and that trust is a human quality that can be generated at scale. Dr. Andrews will lay out findings from science and technology studies, neurocognitive development, and "new literacies" research to point to best practices and new approaches to the disinformation problem.
Gillian "Gus" Andrews, Theorem Media and Front Line Defenders
10:45 am–11:15 am
Break with Refreshments
11:15 am–12:45 pm
Humans Are Hard
Session Chair: Antonela Debiasi, The Tor Project
Thinking Slow: Exposing Influence as a Hallmark of Cyber Social Engineering and Human-Targeted Deception
Mirela Silva, University of Florida
The use of influence tactics (persuasion, emotional appeals, gain/loss framing) is key in many human interactions, including advertisements, written requests, and news articles. However, these tactics have also been abused for cyber social engineering and human-targeted attacks, such as phishing, disinformation, and deceptive ads. In this emerging deceptive and abusive online ecosystem, important research questions emerge: Does deceptive material online leverage influence disproportionately compared to innocuous, neutral texts? Can machine learning methods accurately expose the influence in text as part of user interventions that prevent deception by triggering users' more analytical thinking mode? In this talk, I present my research on Lumen (a learning-based framework that exposes influence cues in texts) and Potentiam (a newly developed dataset of 3K texts comprising disinformation, phishing, hyperpartisan news, and mainstream news). Potentiam was labeled by multiple annotators following a carefully designed qualitative methodology. Evaluation of Lumen in comparison to other learning models showed that Lumen and LSTM presented the best F1-micro score, but Lumen yielded better interpretability. Our results highlight the promise of ML to expose influence cues in text, towards the goal of application in automatic labeling tools to improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.
Mirela Silva, University of Florida
Burnout and PCSD: Placing Team At Risk
Chloé Messdaghi, Cybersecurity Disruption Consultant and Researcher
The pandemic has changed and transformed us in ways we are still trying to discover. Its effects have caused incredible burnout among colleagues and strained personal relationships, and have in turn impacted managers, teams, and company structure and policies. It is not just burnout: another, deeper issue is becoming prevalent, Post-COVID Stress Disorder (PCSD). As an industry, we need to be aware of the seriousness of burnout and recognize the role we play in mental health. This talk discusses burnout, what it means for security and the well-being of companies, and solutions to support one another as we proceed into a new, post-pandemic era.
Chloé Messdaghi, Cybersecurity Disruption Consultant and Researcher
Leveraging Human Factors to Stop Dangerous IoT
Dr. Sanchari Das, University of Denver
Even the largest enterprise can be subverted by a small device quietly tunneling through the network boundary. One way to mitigate the damage is to purchase higher-quality IoT devices, increasing security before installation. In this work, we evaluated the purchase of a few devices that appear relatively harmless but create significant risk. Any workplace may have a small crockpot show up in the break room, or an employee with a fitness tracker. These may offer access to all Bluetooth Low Energy (BLE) devices, or real-time audio surveillance. Alternative models of the same devices, without the corresponding risk, show the value of careful IoT selection. Yet an employee cannot be expected to understand the security risk of IoT devices. To address this understanding and motivation gap, we present a security-enhancing interaction that provides an effective, acceptable, usable framing for non-technical people making IoT purchase decisions. The interface design nudges users toward risk-averse choices by integrating psychological factors in the presentation of the options. Participants using this purchasing interaction consistently avoided low-security, high-risk IoT products, even paying more than twice as much ($17.95 rather than $6.99) to select a secure smart device over alternatives. We detail how the nudges were designed and why they are effective. Specifically, our Amazon store wrapper integrated positive framing, risk communication, and the endowment effect in one interaction design. The result is a system that significantly changes human decision-making, making security the default choice. This was a collaboration between Prof. Sanchari Das at the University of Denver and Shakthidhar Gopavaram and Prof. L. Jean Camp at Indiana University Bloomington.
Sanchari Das, University of Denver
12:45 pm–2:00 pm
Lunch
2:00 pm–3:30 pm
Hate and Encryption
Session Chair: Jon Callas, The Electronic Frontier Foundation
You Can’t Always Get What You Want / But You Get What You Need: Moderating E2EE Content
Mallory Knodel, Center for Democracy & Technology
End-to-end encryption (E2EE) is an application of cryptography in online communications systems between endpoints. E2EE systems are unique in providing confidentiality, integrity, and authenticity for users, yet these strong privacy and free expression guarantees create tension with legitimate needs for information controls. This talk proposes formal, feature- and requirement-based, and user-centric definitions of end-to-end encryption that in aggregate are able to confront these tensions. Any improvements to E2EE should therefore strive to maximise the system's unique properties (confidentiality, integrity, authenticity) and its security and privacy goals, while balancing user experience through enhanced usability and availability. Concrete proposals for E2EE improvements were analysed on this basis, and the results will be presented. Improving mechanisms for user reporting and using existing metadata for platform abuse analysis are the most likely to preserve privacy and security guarantees for end users while also improving user experience. Both provide effective tools that can detect significant amounts of different types of problematic content on E2EE services, including abusive and harassing messages, spam, mis- and disinformation, and CSAM. Future research to improve these tools should measure efficacy for users while preserving E2EE systems' unique guarantees.
Mallory Knodel, Center for Democracy & Technology
Mallory Knodel is the CTO at the Center for Democracy & Technology in Washington, DC. She is the co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force, co-chair of the Stay Home Meet Only Online working group of the IETF and an advisor to the Freedom Online Coalition. Mallory takes a human rights, people-centred approach to technology implementation and cybersecurity policy advocacy. Originally from the US, she has worked with grassroots organisations around the world. She has used free software throughout her professional career and considers herself a public interest technologist. She holds a B.S. in Physics and Mathematics and an M.A. in Science Education.
Content-Oblivious Trust and Safety Techniques: Results from a Survey of Online Service Providers
Riana Pfefferkorn, Stanford Internet Observatory
In pressuring online service providers to better police harmful content on their services, regulators tend to focus on trust and safety techniques, such as automated systems for scanning or filtering content on a service, that depend on the provider's capability to access the contents of users' files and communications at will. I call these techniques content-dependent. The focus on content analysis overlooks the prevalence and utility of what I call content-oblivious techniques: ones that do not rely on guaranteed at-will access to content, such as metadata-based tools and users' reports flagging abuse which the provider did not (or could not) detect on its own.
This talk presents the results of a survey about the trust and safety techniques employed by a group of online service providers that collectively serve billions of users. The survey finds that abuse-reporting features are used by more providers than other techniques such as metadata-based abuse detection or automated systems for scanning content, but that the providers' abuse-reporting tools do not consistently cover the various types of abuse that users may encounter on their services, a gap I recommend they rectify. Finally, despite strong consensus among participating providers that automated content scanning is the most useful means of detecting child sex abuse imagery, they do not consider it to be nearly so useful for other kinds of abuse.
These results indicate that content-dependent techniques are not a silver bullet against abuse. They also indicate that the marginal impact on providers' anti-abuse efforts of end-to-end encryption (which, controversially, stymies providers' ability to access user content at will) can be expected to vary by abuse type. These findings have implications for policy debates over the regulation of online service providers' anti-abuse obligations and their use of end-to-end encryption.
Riana Pfefferkorn, Stanford Internet Observatory
Rethinking "Security" in an Era of Online Hate and Harassment
Kurt Thomas, Google
While most security and anti-abuse protections narrowly focus on for-profit cybercrime today, we show how hate and harassment have grown and transformed the day-to-day threats experienced by Internet users. We provide a breakdown of the different classes of threats (such as coordinated mobs posting toxic content, anonymous peers breaking into a target's account to leak personal photos, or intimate partner violence involving tracking and surveillance) and map these to traditional security or anti-abuse principles where existing solutions might help. We also provide prevalence estimates for each class of attack based on survey results from 22 countries and 50,000 participants. We find over 48% of people have experienced hate and harassment online, with a higher incidence rate among young people (18-24), LGBTQ+ individuals, and active social media users. We also highlight current gaps in protections, such as toxic comment classification, where differing personal interpretations of what constitutes hate and harassment result in uneven protections across users, especially at-risk populations. Our goal with this talk is to raise awareness of the changing abuse landscape online and to highlight the vital role that security practitioners and engineers can play in addressing these threats.
Kurt Thomas, Google
3:30 pm–4:00 pm
Break with Refreshments
4:00 pm–5:00 pm
Make Attacks Hard
Session Chair: Swathi Joshi, Oracle
Detection Is Not Enough: Attack Recovery for Safe and Robust Autonomous Robotic Vehicles
Pritam Dash, University of British Columbia
Autonomous Robotic Vehicles (RVs) such as drones and rovers rely extensively on sensor measurements to perceive their physical states and the environment. For example, a GPS provides geographic position information, a gyroscope measures angular velocities, and an accelerometer measures linear accelerations. Attacks such as sensor tampering and spoofing can feed erroneous sensor measurements through external means, causing RVs to deviate from their course and resulting in mission failures. Attacks such as GPS spoofing have been performed against military drones and marine navigation systems. Prior work on the security of autonomous RVs mainly focuses on attack detection. However, detection alone is not enough, because it does not prevent adverse consequences such as drastic deviation and/or a crash. The key question, "how to respond once an attack is detected in an RV?", remains unanswered.
In this talk, we present two novel frameworks that provide a safe response to attacks and allow RVs to continue their mission despite the malicious intervention. The first technique uses a Feed-Forward Controller (FFC) that runs in tandem with the RV's primary controller and monitors it. When an attack is detected, the FFC takes over to recover the RV. The second technique identifies and isolates the sensor(s) under attack, preventing the corrupted measurements from affecting the actuator signals. Then, it uses historic states to estimate the RV's current state and ensures stable operation even under attack.
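To make the recovery idea concrete, below is a minimal, hypothetical sketch (not the authors' frameworks): a one-dimensional estimator that trusts the sensor while it is clean, then isolates it and dead-reckons the vehicle's state from recent history once an attack is flagged. The class and variable names are illustrative only.

# Minimal sketch (not the authors' code): when a sensor is flagged as under
# attack, stop trusting its readings and dead-reckon the state from recent
# history instead, so corrupted measurements never reach the actuators.

from collections import deque

class RecoveryEstimator:
    def __init__(self, history_len=10, dt=0.1):
        self.dt = dt
        self.history = deque(maxlen=history_len)  # recent trusted positions

    def update(self, measured_pos, attack_detected):
        if not attack_detected:
            self.history.append(measured_pos)
            return measured_pos  # trust the sensor while it is clean
        # Sensor isolated: estimate velocity from the last trusted samples
        # and extrapolate the current position (simple dead reckoning).
        if len(self.history) >= 2:
            vel = (self.history[-1] - self.history[0]) / (
                (len(self.history) - 1) * self.dt)
        else:
            vel = 0.0
        est = self.history[-1] + vel * self.dt if self.history else measured_pos
        self.history.append(est)  # keep extrapolating on the next step
        return est

est = RecoveryEstimator()
print(est.update(1.0, False))   # clean reading passes through
print(est.update(1.1, False))
print(est.update(50.0, True))   # spoofed reading is ignored; ~1.2 returned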
Pritam Dash, University of British Columbia
Teaching an Old Dog New Tricks: Reusing Security Solutions in Novel Domains
Graham Bleaney, Meta
The security industry has spent decades building up tooling and knowledge on how to detect flaws in software that lead to vulnerabilities. To detect a breadth of vulnerabilities, these tools are built to identify general patterns, such as data flowing from a source to a sink. These generalized patterns also map to problems in domains as diverse as performance, compliance, privacy, and data abuse. In this talk, I'll present a series of case studies to show how Meta engineers have applied our security tools to detect and prevent implementation flaws in domains such as these.
I'll go deep on a case study showing how static taint flow analysis, a tool Meta first deployed for security purposes, helped us make sure we weren't storing or misusing user locations when we launched Instagram Threads. Then, to show that this case study was not an isolated example, I'll more quickly walk through a half dozen additional examples where tools from our Product Security team have been used to check for implementation flaws in other domains. Finally, we'll discuss the limitations of this approach, stemming from the tools themselves, differing organizational structures, and the ever-present need for defense in depth.
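As a rough illustration of the source-to-sink pattern such taint analysis looks for, here is a toy Python example; the function names are hypothetical, not Meta's code or analyzer configuration.

# Toy illustration of the source-to-sink pattern a taint analyzer flags.
# A tool like Meta's open-source Pysa would mark get_user_location() as a
# taint source and write_to_analytics_log() as a sink, then report any
# data-flow path between them.

def get_user_location(request):           # SOURCE: sensitive user data
    return request.get("lat_lng")

def write_to_analytics_log(payload):      # SINK: persistent, broadly readable
    print(f"analytics: {payload}")

def handle_request(request):
    location = get_user_location(request)
    # Flagged flow: the sensitive source value reaches the logging sink.
    write_to_analytics_log({"coords": location})

def handle_request_safe(request):
    location = get_user_location(request)
    coarse = "geo-present" if location else "geo-absent"
    # Only a derived, non-identifying flag reaches the sink.
    write_to_analytics_log({"coords": coarse})

handle_request({"lat_lng": (37.77, -122.41)})
handle_request_safe({"lat_lng": (37.77, -122.41)})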
By the end of this talk, you should walk away brimming with ideas on new applications for your organization’s existing security tooling.
Graham Bleaney, Meta
5:00 pm–6:30 pm
Conference Reception
Sponsored by Google
9:00 am–9:05 am
Opening Remarks, Day 2
Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter
9:05 am–10:35 am
Panel
Sex Work, Tech, and Surveillance
Moderator: Elissa M. Redmiles, Max Planck Institute for Software Systems
Panelists: Kendra Albert, Harvard Law School; Kate D'Adamo, Reframe Health and Justice; Angela Jones, State University of New York
In this panel, four experts will discuss the influence of technology & policy on the livelihoods and wellbeing of sex workers. We will discuss the ever-changing landscape of regulation, efforts to remove sex & sex workers from the internet, and the role of digital security & privacy and of the experts who develop technologies to preserve it.
Elissa M. Redmiles, Max Planck Institute for Software Systems
Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She has additionally served as a consultant and researcher at multiple institutions, including Microsoft Research, Facebook, the World Bank, the Center for Democracy and Technology, and the University of Zurich. Dr. Redmiles uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as the New York Times, Scientific American, Rolling Stone, Wired, Business Insider, and CNET and has been recognized with multiple Distinguished Paper Awards at USENIX Security and research awards from Facebook as well as the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland.
Kendra Albert, Harvard Law School
Kendra Albert is a clinical instructor at the Cyberlaw Clinic at Harvard Law School, where they teach students to practice technology law by working with pro bono clients. Their practice areas include freedom of expression, computer security, and intellectual property law. Kendra also publishes on gender, adversarial machine learning, and power. They hold a law degree from Harvard Law School and serve on the board of the ACLU of Massachusetts and the Tor Project. They also are a legal advisor for Hacking // Hustling, a collective of sex workers, survivors, and accomplices working at the intersection of tech and social justice to interrupt state surveillance and violence facilitated by technology.
Kate D'Adamo, Reframe Health and Justice
Kate D'Adamo is a sex worker rights advocate with a focus on economic justice, anti-policing and incarceration, and public health. Previously, she was the National Policy Advocate at the Sex Workers Project and a community organizer and advocate with the Sex Workers Outreach Project and Sex Workers Action New York. Kate has held roles developing programming, trainings, and technical assistance, providing peer-led harm reduction interventions, offering service provision, and advancing political advocacy to support the rights and well-being of people engaged in the sex trade, including victims of trafficking.
Angela Jones, State University of New York
Angela Jones is Professor of Sociology at Farmingdale State College, State University of New York. Jones's research interests include African American political thought and protest, race, gender, sexuality, sex work, feminist theory, and queer methodologies and theory. Jones is the author of Camming: Money, Power, and Pleasure in the Sex Industry (NYU Press, 2020) and African American Civil Rights: Early Activism and the Niagara Movement (Praeger, 2011). She is a co-editor of the three-volume After Marriage Equality book series (Routledge, 2018). Jones has also edited two other anthologies: The Modern African American Political Thought Reader: From David Walker to Barack Obama (Routledge, 2012), and A Critical Inquiry into Queer Utopias (Palgrave, 2013). Jones is the author of two forthcoming reference books: African American Activism and Political Engagement: An Encyclopedia of Empowerment and Black Lives Matter: A Reference Handbook (ABC-CLIO). She is also the author of numerous scholarly articles, which have been published in peer-reviewed journals such as Gender & Society, Signs: Journal of Women in Culture and Society, Sexualities, and Porn Studies. She also writes for public audiences and has published articles in venues such as Contexts (digital), The Conversation, the Nevada Independent, Peepshow Magazine, PopMatters, and Salon.
10:35 am–11:05 am
Break with Refreshments
11:05 am–12:05 pm
Fairness and Inclusion
Session Chair: Kendra Albert, Harvard University
Crypto for the People (part 2)
Seny Kamara, Brown University
Cryptography underpins a multitude of critical security- and privacy-enhancing technologies. Recent advances in modern cryptography promise to revolutionize finance, cloud computing and data analytics. But cryptography does not affect everyone in the same way. In this talk, I will discuss how cryptography benefits some and not others and how cryptography research supports the powerful but not the disenfranchised.
Seny Kamara, Brown University
Seny Kamara is an Associate Professor of Computer Science at Brown University. Before joining Brown, he was a researcher at Microsoft Research.
His research is in cryptography and is driven by real-world problems from privacy, security and surveillance. He has worked extensively on the design and cryptanalysis of encrypted search algorithms, which are efficient algorithms to search on end-to-end encrypted data. He maintains interests in various aspects of theory and systems, including applied and theoretical cryptography, data structures and algorithms, databases, networking, game theory and technology policy.
At Brown, he co-directs the Encrypted Systems Lab and the Computing for the People project and is affiliated with the Center for Human Rights and Humanitarian Studies, the Policy Lab and the Data Science Initiative.
Broken CAPTCHAs and Fractured Equity: Privacy and Security in hCaptcha's Accessibility Workflow
Steven Presser, Independent Researcher
hCaptcha, a commercial CAPTCHA product, currently protects 12-15% of websites against automation, including the talk submission website for this conference. It presents humans with a picture-based puzzle to solve and uses the results to label datasets; as a result, it provides only a visual CAPTCHA. To comply with accessibility requirements, hCaptcha provides a special "accessibility workflow," which requires additional information from users. However, this workflow has two major issues: it could be used to de-anonymize users, and it can be fully automated.
In this talk, I will examine how such a system was created. I begin with a brief background on CAPTCHAs, an overview of relevant assistive technologies for people with disabilities, and how the two interact. Next, I will discuss the disparate user experiences between the mainstream workflow and the accessibility workflow – as well as the privacy implications of their differences. I will discuss the design factors and requirements hCaptcha used when designing the accessibility workflow and then summarize the automation attack, including my responsible disclosure of the attack. Finally, I will conclude with a discussion of hCaptcha’s future plans for a more inclusive and privacy-friendly CAPTCHA, as well as asking some larger questions about the future of the CAPTCHA. These include: Is the era of the CAPTCHA at an end? If so, do we replace them and with what? How do we ensure inclusive access without creating security gaps?
Steven Presser, Independent Researcher
12:05 pm–1:20 pm
Lunch
1:20 pm–2:50 pm
ML Is Hard
Session Chair: Amanda Walker, Nuna
Contextual Security: A Critical Shift in Performing Threat Intelligence
Nidhi Rastogi, Rochester Institute of Technology
An automatic, contextual, and trustworthy explanation of cyberattacks is the immediate goalpost for security experts. Achieving it requires deep knowledge of the system under attack, the attack itself, and real-time data describing environmental conditions. It also requires the ability to communicate the explanation in a way that earns experts' trust. Automating the process of communicating contextual and trustworthy explanations of cyberattacks must also handle various attack models, which adds to the existing challenge. However, a scientific approach to explanations can produce a system that offers the desired explanations under most use cases. In this presentation, we discuss the limitations of existing machine-learning-based security solutions and how contextual security solutions can address them. We share specific use cases to support our argument. We present our research on contextual security (threat intelligence using knowledge graphs) and ongoing work on explanation-based security.
Nidhi Rastogi, Rochester Institute of Technology
Why Has Your Toaster Been through More Security Testing than the AI System Routing Your Car?
Ram Shankar Siva Kumar, Microsoft
If you look under your toaster, you will find a sticker with the letters "UL" on it – this is a certification from Underwriters Laboratories promising that the toaster is relatively safe from spontaneous combustion.
Would it not be comforting to see a sticker under your smart device indicating that it was robustly tested for security and privacy? Or a seal of approval attesting that it is robust against adversarial manipulation?
After all, if you want to know the security checks your router has passed, you can visit the manufacturer's page, look under the security tab, and get the details. Want to know how your bank's mobile app is keeping your data safe? Just google your bank's name and the word "security" and you can see detailed information on how they adhere to industry standards to safeguard your data.
So, what gives for AI systems? AI systems are deployed in some of the most critical areas including healthcare, finance, transportation, and even cybersecurity. Why don’t we have a concrete list of assurances from these AI vendors? Moreover, if AI is just software 2.0, shouldn’t all the existing standards and certifications just directly apply? Also, securing AI systems is a universal good, right?
Ram Shankar Siva Kumar, Microsoft
Ram Shankar Siva Kumar is a Data Cowboy in Azure Security at Microsoft, empowering engineers to secure machine learning systems. His work has appeared at industry conferences like RSA, BlackHat, Defcon, BlueHat, DerbyCon, MIRCon, and Infiltrate, and at academic conferences like NeurIPS, ICLR, ICML, IEEE S&P, and ACM CCS, and has been covered by Bloomberg, VentureBeat, Wired, and Geekwire. He founded the Adversarial ML Threat Matrix, an ATT&CK-style framework enumerating threats to machine learning. His work on adversarial machine learning appeared notably in the National Security Commission on Artificial Intelligence (NSCAI) Final Report presented to the United States Congress and the President. He is an affiliate at the Berkman Klein Center for Internet and Society at Harvard University and a Technical Advisory Board Member at the University of Washington. He is currently writing his book "AI's Achilles Heel" with Hyrum Anderson, enumerating security vulnerabilities in AI systems and why addressing them is the next infosec imperative.
Neither Band-Aids nor Silver Bullets: How Bug Bounties Can Help the Discovery, Disclosure, and Redress of Algorithmic Harms
Camille Francois and Sasha Costanza-Chock, Algorithmic Justice League and Harvard Berkman Klein Center for Internet and Society
Bug bounty programs for security vulnerabilities have received a great deal of attention in recent years, accompanied by adoption from a wide variety of organizations and a significant expansion in the numbers of participants on major platforms hosting such programs. This talk presents the conclusions of a research effort by the Algorithmic Justice League, looking at the applicability of bug bounties and related vulnerability disclosure mechanisms to the discovery, disclosure, and redress of algorithmic harms. We present a typology of design levers that characterize these different programs in the information security space, and analyze their different tradeoffs. We scrutinize a recent trend of expanding bug bounty programs to socio-technical issues, from data abuse bounties (Facebook, Google) to algorithmic biases (Rockstar Games, Twitter). Finally, we use a design justice lens to evaluate what the algorithmic harms space could borrow from these programs, and reciprocally, what traditional bug bounty programs could learn from the burgeoning algorithmic harms community.
Camille Francois, Algorithmic Justice League and Harvard Berkman Klein Center for Internet and Society
Sasha Costanza-Chock, Algorithmic Justice League and Harvard Berkman Klein Center for Internet and Society
2:50 pm–3:20 pm
Break with Refreshments
3:20 pm–4:50 pm
Privacy Is Hard
Session Chair: Nwokedi Idika, Google
When Machine Learning Isn’t Private
Nicholas Carlini, Google
Current machine learning models are not private: they reveal particular details about the individual examples contained in datasets used for training. This talk studies various aspects of this privacy problem. For example, we have found that adversaries can query GPT-2 (a pretrained language model) to extract personally-identifiable information from its training set.
Preventing this leakage is difficult, and recent ad-hoc proposals are not effective. While there exist provably secure schemes (e.g., differentially private gradient descent), they come at a high utility cost. We conclude with potential next steps for researchers (problems that should be solved) and practitioners (practical techniques to test for memorization).
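As a rough sketch of the kind of memorization test a practitioner can run (assuming the Hugging Face transformers and torch packages; this is an illustration of the idea, not the speaker's attack code): score candidate strings by the model's loss and treat unusually low perplexity as a sign the string may have been memorized from training data.

# Minimal memorization check: rank candidate strings by GPT-2 perplexity.
# Strings the model assigns unusually low perplexity are candidates for
# having been memorized from its training set.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

candidates = [
    "My phone number is 555-0199 and my address is ...",  # suspected training string
    "The quick brown fox jumps over the lazy dog.",       # generic baseline
]
scores = {text: perplexity(text) for text in candidates}
for text, ppl in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{ppl:10.2f}  {text}")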
Nicholas Carlini, Google
Auditing Data Privacy for Machine Learning
Reza Shokri, National University of Singapore
In this talk, I will discuss what exactly privacy risk is and what it is not, the difference between privacy and confidentiality (which can be easily confused), the reasons models are vulnerable to inference attacks, the methodology for quantifying privacy risk in machine learning, and examples of how ML Privacy Meter and similar tools can enable detailed auditing of ML systems. I will show the fundamental and intuitive relation between auditing mechanisms and defense mechanisms for privacy (e.g., differential privacy).
It is very important for ML engineers, policymakers, and researchers to be aware of the risks, their implications, and the methodology for auditing the privacy risk of different types of machine learning algorithms. This can pave the way for privacy by design in machine learning.
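A minimal sketch of the underlying idea, using a simple loss-threshold membership inference test on a toy model (illustrative only, not the ML Privacy Meter API): if a model's loss on a record is systematically lower than its loss on unseen data, an adversary can infer that the record was in the training set.

# Loss-based membership inference audit on a toy classifier.
# AUC near 0.5 means little leakage; well above 0.5 means records are exposed.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
X_train, y_train = X[:200], y[:200]   # members
X_test, y_test = X[200:], y[200:]     # non-members

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_record_loss(model, X, y):
    p = model.predict_proba(X)
    return np.array([log_loss([yi], [pi], labels=[0, 1]) for yi, pi in zip(y, p)])

# Higher score (lower loss) is treated as evidence of membership.
scores = np.concatenate([-per_record_loss(model, X_train, y_train),
                         -per_record_loss(model, X_test, y_test)])
membership = np.concatenate([np.ones(200), np.zeros(200)])
print("membership-inference AUC:", roc_auc_score(membership, scores))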
Reza Shokri, National University of Singapore
I See You Blockchain User, or Not! Privacy in the Age of Blockchains
Ghada Almashaqbeh, University of Connecticut
Cryptocurrencies and blockchains introduced an innovative computation model that paved the way for a large variety of applications. However, lack of privacy is a huge concern, especially for permissionless public blockchains. Clients do not want their financial activity to be tracked, their pseudonymous addresses to be linked to their real identities, or, even worse, their sensitive data to be disclosed when processed by smart contracts. This talk will shed light on this issue, explore current solutions and technology trends, define the gaps, and then explore the road ahead towards viable privacy solutions for private computations over blockchains.
Ghada Almashaqbeh, University of Connecticut
5:00 pm–6:30 pm
Conference Reception
Sponsored by Netflix
9:00 am–9:05 am
Opening Remarks, Day 3
Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter
9:05 am–10:35 am
Panel
Understanding Section 230
Moderator: Mike Masnick, Techdirt/Copia Institute
Panelists: Cathy Gellis; Kate Klonick, St. John's Law School; Adelin Cai, Sidequest
Section 230 of the Communications Decency Act went from what seemed like an obscure internet regulation to headline news throughout the media, including commentary from Presidents and Presidential candidates. Despite this attention, and despite the actual simplicity of the law, it remains widely misunderstood. On this panel, experts from the legal, policy, and trust & safety worlds will help explain exactly what Section 230 is, what it does, common misunderstandings, and why understanding Section 230 is important to the open internet and those who build it.
Mike Masnick, Techdirt/Copia Institute
Mike Masnick is the founder & editor of the popular Techdirt blog as well as the founder of the Silicon Valley think tank, the Copia Institute.
Cathy Gellis
Frustrated that people were making the law without asking for her opinion, Cathy Gellis gave up a career in web development to become a lawyer to help them not make it badly, especially regarding technology. A former aspiring journalist and longtime fan of civil liberties, her legal work includes defending the rights of Internet users and advocating for policy that protects speech and innovation. When not advising clients on platform liability, copyright, trademark, privacy, or cybersecurity, she frequently writes about these subjects and more for outlets such as the Daily Beast, Law.com, and Techdirt.com, where she is a regular contributor.
Kate Klonick, St. John's Law School
Kate Klonick is an Assistant Professor at St. John's University Law School, a fellow at the Brookings Institution and Yale Law School’s Information Society Project. Her research on online speech, freedom of expression, and private governance has appeared in the Harvard Law Review, Yale Law Journal, The New Yorker, New York Times, The Atlantic, The Guardian and numerous other publications.
Adelin Cai, Founder and Principal Consultant, Sidequest
Adelin Cai has spent the last decade working with and leading teams responsible for product policies and their enforcement. As Pinterest’s former Head of Policy, she led the team that developed the company’s principles and core values around content moderation, covering a range of issues from hateful speech to medical (mis)information to dank memes. Prior to Pinterest, she ran Twitter’s Legal Ads Policy team, guiding policy and operations for Twitter’s self-serve and international advertising products. She is also a co-founder of the Trust & Safety Professional Association (TSPA) and the affiliated Trust & Safety Foundation (TSF). She currently serves as TSPA’s board chair.
10:35 am–11:05 am
Break with Refreshments
11:05 am–12:05 pm
Money Talks
Session Chair: Melanie Ensign, Discernible, Inc.
Covenants without the Sword: Market Incentives for Security Investment
Vaibhav Garg, Comcast Cable
Two decades of economics research have repeatedly asserted that organizations and individuals do not have adequate incentives to invest in cybersecurity. The costs of insecurity are imposed on third parties rather than on its producers. Cybersecurity is thus a private good with externalities, one that will require regulation to prevent market failure. Underlying this body of research is the assumption that all organizations have the same business drivers, a similar attack surface, and a uniformly informed consumer base. This talk questions these assumptions and outlines seven naturally occurring incentives for organizations to invest in cybersecurity. Furthermore, I provide examples of how these incentives have driven investment in cybersecurity across different sectors. While the applicability of these incentives differs both across and within sectors, any cybersecurity public policy interventions must consider the resulting nuances. Cybersecurity covenants established without the sword of regulation may be both more effective and more sustainable, as they evolve with the experience and exposure of the stakeholders.
Vaibhav Garg, Comcast Cable
The Security Team at the Top: The Board of Directors
Anthony Vance, Virginia Tech
There are many teams in security—blue teams, red teams, purple teams, etc. This talk is about the security team that few people think about but has the potential to be the most powerful and influential security team in the organization: the board of directors. Through in-depth interviews of board directors, CISOs, and senior-level consultants who advise boards on security, I illustrate challenges that CISOs face in meaningfully engaging with the board of directors. I also show how CISOs can gain strategic importance in supporting and advising the board. Finally, I describe ways that CISOs can help boards realize their potential as the most powerful security team in the organization.
Anthony Vance, Virginia Tech
12:05 pm–12:50 pm
Fireside Chat
Fireside Chat with Sheera Frenkel
Moderator: Bob Lord
Speaker: Sheera Frenkel, New York Times
In this fireside chat, Bob Lord will interview Sheera Frenkel, cybersecurity reporter for the New York Times and author; her latest book is "An Ugly Truth: Inside Facebook's Battle for Domination." Sheera and Bob will cover topics including cybersecurity, disinformation, hate speech, and journalism.
Bob Lord
Bob Lord most recently served as the first Chief Security Officer at the Democratic National Committee. In that role he worked to secure the Committee, as well as to help state parties and campaigns with their security programs. Previous roles include CISO at Yahoo, CISO in Residence at Rapid7, and before that he headed up Twitter's information security program as its first security hire. You can see some of his hobbies at https://www.ilord.com.
Sheera Frenkel, New York Times
Sheera Frenkel is an award-winning technology reporter based in San Francisco. In 2018, she was part of a team of New York Times reporters who were finalists for the Pulitzer Prize and in 2019 she won a Loeb award for her reporting on Facebook. In 2021, she and Cecilia Kang published, "An Ugly Truth: Inside Facebook's Battle for Domination." The book became a New York Times and International Bestseller.
Previously, she spent over a decade in the Middle East as a foreign correspondent, reporting for BuzzFeed, NPR, The Times of London and McClatchy Newspapers.
12:50 pm–2:05 pm
Lunch
2:05 pm–3:35 pm
Following the Rules
Session Chair: Wendy Seltzer, W3C
Healthcare Ecosystem: Security's Role in Helping HealthTech Find Its Way
Joy Forsythe, Alto Pharmacy
As the news fills up with ransomware attacks on health systems, the HealthTech startup space is booming. Why can’t our healthcare be as modern and friendly as ordering a pair of shoes or getting dinner reservations?
I made the leap from building enterprise security products to HealthTech startups 5 years ago with idealism about how technology could fix things, and I'm here to share my "lessons learned." Like many other "disruptors," HealthTech companies are discovering that a lot of difficult security choices were made for a reason, sometimes because the alternative is life-threatening, or will take a long time to change.
Most importantly, healthcare is its own ecosystem that we have to understand before we can reason about it. Once I started to understand who the different entities were and how new startups fit in that system, I started to identify places where security can innovate and do better.
Joy Forsythe, Alto Pharmacy
The Global Privacy Control: Exercising Legal Rights at Scale
Justin Brookman, Consumer Reports
New privacy laws around the world give consumers the right to stop unwanted processing of their personal information. But how can we be expected to tell thousands of different companies that we don't want our data sold? "Do Not Track" was an early effort to give consumers scalable privacy rights, but the effort foundered without the weight of the law behind it. Now, new privacy laws are both creating new rights and letting consumers delegate to others the ability to exercise those rights. Universal signals and global settings may now legally bind companies and expose them to liability for ignoring user preferences. The Global Privacy Control is one effort to allow consumers to transmit to every website they visit a request not to have their data shared with others. This and similar efforts may illuminate how to make privacy rights practically workable in the future.
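For context, the signal itself is simple: GPC-enabled browsers and extensions send a Sec-GPC: 1 request header (and expose navigator.globalPrivacyControl to scripts), which a site can read and honor. Below is a minimal server-side sketch, assuming Flask; the handler and response text are illustrative, not taken from the GPC specification.

# Minimal sketch of honoring the Global Privacy Control signal server-side.
# Browsers with GPC enabled send the "Sec-GPC: 1" request header.

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    gpc_enabled = request.headers.get("Sec-GPC") == "1"
    if gpc_enabled:
        # Treat the signal as a do-not-sell/do-not-share request under
        # applicable law and suppress third-party data sharing.
        return "GPC received: your data will not be sold or shared."
    return "No GPC signal received."

if __name__ == "__main__":
    app.run(port=8080)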
Justin Brookman, Consumer Reports
Justin Brookman is the Director of Consumer Privacy and Technology Policy for Consumer Reports. Justin is responsible for helping the organization continue its groundbreaking work to shape the digital marketplace in a way that empowers consumers and puts their data privacy and security needs first. This work includes using CR research to identify critical gaps in consumer privacy, data security, and technology law and policy. Justin also builds strategies to expand the use and influence of the Digital Standard, developed by CR and partner organizations to evaluate the privacy and security of products and services.
Prior to joining CR, Brookman was Policy Director of the Federal Trade Commission’s Office of Technology Research and Investigation. At the FTC, Brookman conducted and published original research on consumer protection concerns raised by emerging technologies such as cross-device tracking, smartphone security, and the internet of things. He also helped to initiate and investigate enforcement actions against deceptive or unfair practices, including actions against online data brokers and digital tracking companies.
He previously served as Director of Consumer Privacy at the Center for Democracy & Technology, a digital rights nonprofit, where he coordinated the organization’s advocacy for stronger protections for personal information in the U.S. and Europe.
An Open-Source Taxonomy for Ex-ante Privacy
Cillian Kieran, Ethyca
Most current approaches to enterprise data privacy suffer from the ex-post nature of their application. Applications purporting to orchestrate crucial privacy tasks like access control, rights fulfillment, or risk assessment get bolted on to pre-existing systems and must dynamically respond to an underlying web of data flows that is poorly described, ever-evolving, and complex. It's a Sisyphean challenge that afflicts some of the most sophisticated technology enterprises operating today, to say nothing of non-digitally native legacy enterprises.
In this presentation, Cillian Kieran, Founder and CEO of Ethyca, will argue that the only way to meaningfully solve this important problem is to apply privacy protections at the start of the software delivery lifecycle rather than at the finish, and will propose one approach for doing so.
He'll demonstrate the benefits of ex-ante privacy by walking through a set of annotation and risk evaluation tools built on top of an open-source privacy taxonomy derived from the ISO/IEC 27701 standards. Cillian's presentation will show how an engineer can annotate projects, evaluate privacy risks in CI pipelines, and enable privacy rights to be enacted on data stored in annotated systems.
This will be the first public walkthrough of an open-source project that has been years in development and has received interest from data engineering teams at some of the world's largest companies.
Cillian Kieran, CEO, Ethyca
3:35 pm–3:50 pm
Closing Remarks
Program Co-Chairs: Joe Calandrino, Federal Trade Commission, and Lea Kissner, Twitter