Suppose that computer security researchers find a vulnerability in a wireless implantable medical device made by a now out-of-business manufacturer. Because the manufacturer is out of business, it is impossible to patch the vulnerability. Suppose that there is an exceedingly low — for simplicity, let's assume near zero — chance of the vulnerability ever being exploited regardless of whether or not the vulnerability is disclosed to the public. Additionally, suppose that if the public learns about the vulnerability, the media will exaggerate the risks, some patients may become unnecessarily concerned, and, as a result, some patients might remove the device from their bodies and (unnecessarily) lose the health benefits. Other patients might choose to keep the devices in their bodies but might live in fear of the possibility of device exploitation.
What should the researchers do?
- Even though the disclosure could result in patient harms, should the researchers disclose the vulnerability to the government and the public, thereby respecting patients' right to be informed and to make their own decisions about their own bodies?
- Or, knowing that there is an exceedingly low probability that an adversary will ever exploit the vulnerability, and that knowledge of the vulnerability's existence could harm patients, should the researchers not disclose the vulnerability to the government and the public, thereby keeping the vulnerability out of the public's (and the media's) eye and avoiding patient harm?
The above is a slight rephrasing of a scenario in our USENIX Security 2023 paper titled "Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations" [15]. Real-world scenarios are, naturally, significantly more complex; we intentionally prepared a simplified scenario in order to focus our research and writings on moral decision making. We created an informal poll, asking respondents what decision they would make if they were the researchers in the above scenario. Opinions were split. This split of opinions on what is "right" or "best" speaks to a fundamental challenge with moral decision making: different people have different approaches to ethics and morality, and those different approaches can lead to different conclusions.
In writing our USENIX Security 2023 paper, our goal was not to argue for a specific approach to or worldview on ethics and morality. We did not seek to define what constitutes "right" or "wrong" decisions within the computer security research field. Rather, in our cross-disciplinary collaboration spanning computer security (Kohno and Acar) and applied ethics (Loh), we sought to contribute to how the field discusses moral questions, and we did so by building upon the foundations developed in the field of ethics and moral philosophy.
We encourage the readers of this article to view it as a primer for our conference paper [15] or the full version of that conference paper [16]. Specifically, in this article, we (1) present a classic moral dilemma known as a "trolley problem", (2) summarize two classic ethical frameworks (consequentialist and deontological ethics), and (3) briefly explore the application of these frameworks to the medical device scenario above.
Ethicists / moral philosophers have, for generations, proposed dilemmas for ethical debate and consideration. A classic family of dilemmas is the "trolley problems"; see the callout below for an example. Trolley problems present a choice between two options, both of which have undesired aspects. As a result, different ethical frameworks may offer different answers to such dilemmas. Some authors (among them Philippa Foot herself, who formulated the original trolley problem [11]) use trolley problems to show that people's moral intuitions can diverge in important cases.
Philosophers and psychologists have studied people's responses to trolley problems such as the one in the callout below, and, indeed, there is no universal consensus on what constitutes the morally correct action for the trolley operator [11]. In psychology studies, for example, differences can arise from the moral intuitions and values of the participants and may vary by culture, e.g., [2, 6, 7, 12, 17, 24]. Different ethical frameworks reflect differences in people's moral intuitions. When developing and articulating ethical dilemmas, a key goal is to find scenarios in which ethical frameworks fundamentally diverge. We reflect on the development of computer security-themed trolley problems in the conference and full versions of this paper [15, 16].
The trolley problem is a classic thought experiment / ethical dilemma.
Context:
- A runaway trolley with no brakes is heading straight along a set of tracks.
- Five people are tied to those tracks.
- One person is tied to an alternate set of tracks.
- A trolley operator has the ability to change the trolley's path and make it head down the alternate set of tracks.
The choice for the trolley operator:
- Do nothing: Five people die.
- Make the trolley take the alternate set of tracks: One person dies.
Ethical frameworks define approaches for reasoning about what is a morally right or wrong action, or what is a morally "good" or "bad" outcome. Consequentialist and deontological ethics are two of today's leading categories of frameworks. Consequentialist ethics centers questions about the impacts (consequences) of different decisions. Under consequentialist ethics, one focuses on assessing the benefits and harms of different options before making a decision that maximizes net benefits. Deontological ethics centers questions about duties (deon) and rights. Under deontological ethics, one focuses on asking what one owes to others and, correspondingly, what rights different stakeholders have, e.g., a right to privacy or a right to autonomy.
We center consequentialist and deontological ethics — and in particular utilitarianism and Kantian deontological ethics, respectively — in our work because of (1) their prominence in the field of ethics / moral philosophy, and because of (2) their existing impact on the computer security research field's approach to ethics and morality (e.g., the Menlo Report [22], which provides ethical guidance to computer security researchers, derives from the Belmont Report [21], which itself embeds both consequentialist and deontological elements). We stress, however, that both consequentialist and deontological ethics have limitations and that by centering them we are not arguing that anyone adopt a strict consequentialist or deontological perspective. At a minimum, one might include considerations from both frameworks, as the Menlo Report [22] does. Additionally, much of philosophy's discussion of consequentialist and deontological ethics centers a Western perspective. While Western frameworks encompass ethical considerations that are often also part of non-Western traditions (e.g., about duties towards each other, the nature of fundamentally relating to each other, the outcomes of actions / policies, and so on), each tradition has its own unique history and elements. Although outside the scope of this work, we encourage the computer security research community to gain greater familiarity with other frameworks as well.
The callout below provides a more precise formulation of the scenario described in the introduction to this article. For simplicity of analysis and exposition — and to enable our discussion to focus on the differences between the ethical frameworks and not on numerical analyses — we assume that the likelihood of exploitation is not just near zero, but in fact zero regardless of whether or not adversaries know about the vulnerability. The conference and full versions of our paper discuss how to analyze this scenario if the probability is small but non-zero [15, 16].
Consequentialist Ethics. Different approaches to utilitarian ethics can center different definitions of utility, including health (physical or psychological) and happiness. Thus, we analyze this scenario under different definitions of utility.
If a patient chooses to remove a device or chooses not to obtain one because of a known vulnerability, then they would have a shorter life expectancy — a negative impact on physical health. If a patient knows about the vulnerability and still chooses to keep or get the implant, then they could live in fear of a security incident even though the likelihood of an incident is zero — a negative impact on psychological health. From a happiness perspective, the knowledge that one has a shorter life expectancy (if they do not have the device) or the fear of a security incident (if they have the device) could lead to decreased happiness. In addition, removing the device or opting not to receive it would mean (on average) ten fewer years of potential happiness, which may also significantly decrease the overall happiness of people in society.
Hence, when evaluating this scenario under several key definitions of utility, the morally correct decision is to not disclose the vulnerability.
Deontological Ethics. Under deontological ethics, the researchers have a duty to respect people's right to informed consent and the right to self-agency. In the medical context, this right to informed consent manifests (for example) as warnings in TV advertisements for medicines. These are fundamental human rights, and not disclosing the vulnerability would violate those rights. Hence, the morally correct decision is to disclose the vulnerability.
Informed by the Real World, Not Real. We discuss the background behind our medical device scenario more in the conference and full versions of our paper [15, 16]. There are gaps between this scenario and what one might encounter in the real world. Rather than choosing between only two options, the researchers might, for example, involve others in the decision-making process or cede the decision responsibility to another entity entirely. In the U.S., the FDA — not the researchers — could make or strongly contribute to the decision on whether to disclose the vulnerability to the public. Should the decision-makers choose to disclose the vulnerability, they might work with healthcare providers to thoughtfully and conscientiously craft the message, thereby reducing patient alarm. Given the medical and security contexts, the decision-makers might draw on the Principles of Biomedical Ethics [5] and the Menlo Report [22]. Thus, even if the decision-makers do not rely solely on consequentialist or deontological analyses, and indeed consequentialist and deontological ethics both have limitations, consequentialist and deontological thinking may be part of the final decision-making process. Further, in the real world, the researchers must consider a non-zero (even if still small) probability of exploitation.
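To make that last point concrete, the following is a minimal sketch, in Python, of how a consequentialist comparison might be structured once the probability of exploitation is small but non-zero. It is not the analysis from our papers [15, 16]; the function, parameter names, and numerical values are hypothetical placeholders, and a realistic model would also need to capture how disclosure itself changes the probability and severity of exploitation.

```python
# A minimal, illustrative sketch (not the analysis in [15, 16]) of how a
# consequentialist comparison might look once the probability of exploitation
# is small but non-zero. All names and numbers are hypothetical placeholders.

def expected_harm(p_exploit: float, harm_if_exploited: float,
                  harm_from_disclosure: float, disclosed: bool) -> float:
    """Expected harm (in arbitrary utility units) of one policy choice.

    harm_from_disclosure aggregates the harms discussed above (unnecessary
    device removals, forgone health benefits, fear) and applies only if the
    vulnerability is disclosed. This toy model omits, among other things, any
    reduction in exploitation harm that disclosure might enable.
    """
    harm = p_exploit * harm_if_exploited
    if disclosed:
        harm += harm_from_disclosure
    return harm


# Hypothetical inputs: a tiny exploitation probability, a large harm if the
# vulnerability is exploited, and a moderate aggregate disclosure harm.
P_EXPLOIT = 1e-6
HARM_IF_EXPLOITED = 10_000.0
HARM_FROM_DISCLOSURE = 5.0

for disclosed in (False, True):
    label = "disclose" if disclosed else "withhold"
    harm = expected_harm(P_EXPLOIT, HARM_IF_EXPLOITED,
                         HARM_FROM_DISCLOSURE, disclosed)
    print(f"{label}: expected harm = {harm:.6f}")
```

Under such a toy model, the comparison hinges on how the (small) expected harm of exploitation weighs against the harms that disclosure itself creates; the point of the sketch is only to show where those quantities would enter the analysis, not to settle the question.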
The following is a computer security scenario in which a vulnerability is found in an unsupported medical device.
Context:
- Company A produces a lifesaving wireless implantable medical device. It is the only device of its type ever invented. When a patient receives this device, it will (on average) extend their lifespan by ten years.
- Company A goes bankrupt and closes due to poor financial practices, including a failure to correctly estimate the market size and the costly manufacture of hundreds of thousands of devices before they were needed.
- At the time of Company A's bankruptcy, approximately 85,000 people in the United States, and many more globally, use Company A's device.
- Doctors continue to implant the surplus of (now unsupported) devices in new patients.
- Shortly after Company A closes, researchers discover a software vulnerability in the device. If exploited, the vulnerability could cause significant harm to the patients. Since Company A no longer exists, the software cannot be updated to address this vulnerability.
- The researchers know that there is zero probability that the vulnerability will ever be exploited even if the vulnerability is disclosed to the public.
- The computer security research field and the healthcare industry have already internalized the importance of computer security for wireless implantable medical devices; there are no field- or industry-wide gains to be made by disclosing the vulnerability to the public.
The choice for the researchers:
- Not disclose the vulnerability to anyone: Patients will have no awareness that their device is vulnerable; patients will keep and / or proceed with obtaining the device and receive significant health benefits.
- Disclose the vulnerability to the healthcare industry, patients, and the public: Patients will have the choice to remove or not receive the device; there is a risk of health harm to patients if patients remove and / or do not receive the device; there is a risk of psychological harm to patients and loved ones if patients know that they have a vulnerable device in their bodies (even if they also are told that the likelihood of compromise is zero); given the psychological harms, most patients would have preferred not to have learned about the vulnerability.
We hope that readers of this article will find the USENIX Security 2023 conference version of this paper interesting and informative [15]. In that paper, we consider our medical device scenario (above) as well as two additional scenarios. We present even more scenarios in the full version of our conference paper, available online [16].
Even with the full version of our paper, our goal was not to present a set of scenarios that represent the entire spectrum of moral issues that one might encounter within the computer security research field. Rather, we sought to develop scenarios that would allow us to explore and articulate the application of two foundational frameworks — consequentialist and deontological ethics — to computer security scenarios. Such exploration and articulation contribute to our high-level goal: to contribute to the computer security field's thoughtful and informed conversations about ethics and morality. Toward contributing to community conversations, it bears stressing that we are not advocating for strict adherence to either of the frameworks that we use in our analyses. In fact, it is not uncommon for people — including modern ethicists — to include elements of multiple frameworks (consequentialist, deontological, and other) as they reason through decisions. Rather, we believe that an understanding of the frameworks and how they can diverge in important cases will contribute to more informed conversations within the community.
We close this short article with a collection of pointers for those seeking to learn more. Within the computer security research field, one foundational document is the Menlo Report [22]. For writings on ethics and moral philosophy, we recommend Anscombe's article "Modern Moral Philosophy" [3], Baggini and Fosl's book The Ethics Toolkit [4], Deigh's book An Introduction to Ethics [8], Driver's book Ethics: The Fundamentals [9], and Stanford University's online resources [20] for additional, general information. For works focused on ethics and technology / engineering, we refer readers to works such as Floridi's book The Cambridge Handbook of Information and Computer Ethics [10], Iphofen's book Handbook of Research Ethics and Scientific Integrity [14], Quinn's book Ethics for the Information Age [19], and Santa Clara University's online resources [23], as well as professional codes of ethics [1, 13, 18].
This work was supported in part by the U.S. National Science Foundation under awards CNS-2205171 and CNS-2206865, the University of Washington Tech Policy Lab (which receives support from the William and Flora Hewlett Foundation, the John D. and Catherine T. MacArthur Foundation, Microsoft, and the Pierre and Pamela Omidyar Fund at the Silicon Valley Community Foundation), and gifts from Google, Meta, Qualcomm, and Woven Planet. We are grateful to everyone who contributed to this project. Thank you to all who offered comments, questions, insights, and conversations, hosted talks, and reviewed preliminary drafts, including Lujo Bauer, Hauke Behrendt, Dan Boneh, Kevin Butler, Aylin Caliskan, Inyoung Cheong, Lorrie Faith Cranor, Sauvik Das, Zakir Durumeric, Kevin Fu, Alex Gantman, Gennie Gebhart, Kurt Hugenberg, Umar Iqbal, Apu Kapadia, Erin Kenneally, David Kohlbrenner, Seth Kohno, Phil Levis, Rachel McAmis, Alexandra Michael, Bryan Parno, Elissa Redmiles, Katharina Reinecke, Franziska Roesner, Stefan Savage, Stuart Schechter, Sudheesh Singanamalla, Patrick Traynor, Emily Tseng, and Miranda Wei. We thank the anonymous USENIX Security 2023 reviewers for their insightful feedback, comments, and suggestions. We thank Rik Farrow for shepherding this article. We also sincerely thank all attendees of past presentations about this work.