USENIX Security '25 Ethics Guidelines

This document is a companion to the USENIX Security call for papers and the submission policies and instructions documents; it provides additional background and suggestions for ethical considerations. This document is meant as one possible starting point to help authors make sound ethical decisions and communicate about those decisions. While authors are required to thoroughly consider ethics and to act ethically, precisely following the guidelines in this document is not required. Additionally, the guidelines herein are a possible starting point for ethical considerations, not an exhaustive treatment; authors are encouraged to think critically about ethics and not limit themselves to the guidelines in this document. Even if authors choose to follow different procedures for identifying and addressing ethics-related concerns, they are asked to read this document in its entirety, because it also reflects USENIX Security's perspective on ethics. Authors are also encouraged to familiarize themselves with The Menlo Report and the other resources cited herein, which can further inform considerations of ethics.

When considering ethics, authors are encouraged to consider the full spectrum of stakeholders, not just the most obvious ones.

The call for papers mentions at least two broad categories of potentially negative outcomes from the research and publication process: tangible harms (e.g., financial loss or exposure to psychologically disturbing content) and violations of human rights even if there are no directly tangible harms (e.g., the violation of a participant's right to informed consent or the violation of users' right to privacy via the study of data that users expect and desire to be private). A focus on mitigating direct harms, and weighing harms against benefits, is grounded in consequentialist ethics; see the "Beneficence" principle in The Menlo Report and the discussion of consequentialist ethics in the 2023 USENIX Security paper, "Ethical Frameworks and Computer Security Trolley Problems." A focus on avoiding the violation of individuals' rights is grounded in deontological ethics; see the "Respect for Persons" principle in The Menlo Report and the discussion of deontological ethics in the above-cited 2023 USENIX Security paper.

Authors are encouraged to consider each of the principles in The Menlo Report in the context of each identified stakeholder: "Beneficence", "Respect for Persons", "Justice", and "Respect for Law and Public Interest". As authors consider the "Beneficence" principle, they are encouraged to familiarize themselves with discussions of consequentialist ethics. As authors consider the "Respect for Persons" principle, they are encouraged to familiarize themselves with discussions of deontological ethics. The above-cited 2023 USENIX Security paper provides an accessible introduction to consequentialist and deontological ethics targeted at the computer security community.

In some cases, the ethics analyses under multiple principles will lead to the same conclusion about what is "right", e.g., the "Beneficence" and "Respect for Persons" analyses would agree. In other cases, the analyses may lead to different conclusions. If multiple analyses lead to the same conclusion, then documenting all those analyses will provide greater confidence in the ethics of the research. If different analyses lead to different conclusions on the ethics of the research, then the authors are encouraged to clearly articulate how and why they chose the path they did, even if some principles would have led to a different decision. In some cases, researchers may need to make assumptions about the likelihood of different outcomes or the likely impacts of different decisions; in such cases, the authors are encouraged to articulate and justify all assumptions they make.

When considering ethics, researchers and reviewers must acknowledge that, sometimes, the most ethical path is not to do the research or not to publish the research after it is complete.

Further, authors are encouraged to consider ethics proactively and as early as possible in the research process. By proactively considering ethics early, it is sometimes possible to avoid more challenging and complicated ethical questions in the future.

Given that different approaches to ethical considerations can lead to different decisions, authors should not pick the decision that they want to make and then find the ethics argument that supports it. Rather, authors should be as objective as possible and ask themselves: How would someone not involved in the research evaluate the ethics of the research?

Authors should also not simply look at past "similar" works and assume that the ethics analyses for those past works apply directly to their new works. Different situations may have subtle differences that, upon closer investigation, lead to significantly different conclusions. And, as the community learns more about the ethical implications of actions, and as technologies, societies, and knowledge change, what might have been ethical in the past may no longer be ethical. For example, in the past, an action might have resulted in significant benefits that outweighed the harms; now, given knowledge from past results and/or differences in technologies, the benefits of the same action today might not outweigh the harms.

Authors should leverage all available resources when making ethical decisions. For example, if the submitted research has the potential to create negative outcomes and authors have access to an Institutional Review Board (IRB), then authors are encouraged to consult this IRB and document its response and recommendations in the paper. In some parts of the world, and in some situations, consulting with the IRB may be required. IRBs are not, however, expected to understand computer security research well or to know about best practices and community norms in our field, and so IRB approval does not absolve researchers from considering ethical aspects of their work. In particular, IRB approval is not sufficient to guarantee that the PC will not have additional concerns with respect to potential negative outcomes associated with the research. Hence, the discussion of IRB approval (if relevant) will likely only be a subset of the ethics discussion. (If authors do not have access to an IRB but are doing human subjects-related research for which IRB approval might be required of researchers elsewhere, then the authors are encouraged to explicitly state that they do not have access to an IRB and, instead, focus on the mechanisms they used to identify and address ethical concerns.)

Below, we consider in more depth several examples of ethical considerations that have come up in the past. These should be viewed as examples, however, and not an exhaustive list of potential concerns or considerations. And, as noted above, different situations, even if in many ways similar, may have unique considerations.

Disclosures. Vulnerabilities, if known to adversaries, can expose people to negative outcomes, such as harms or rights violations. Publicly disclosing vulnerabilities before they have been privately disclosed to the responsible parties, and hence before they have been mitigated, can therefore expose people to negative outcomes. Adversaries or others can also independently discover vulnerabilities. The potential for independent adversary discovery means that knowing about vulnerabilities but not disclosing them to the responsible parties can also result in exposing people to negative outcomes. Additionally, in some cases it can take the responsible parties time to develop mitigations. Therefore, once a vulnerability has been discovered, it is important to initiate the mitigation process as early as possible. Specifically, absent strong and convincing reasons otherwise, we expect researchers to disclose vulnerabilities as soon as they are discovered. If the researchers believe that a different timeline is the most ethical in their situation, they should present clear and convincing arguments for that different timeline. The arguments should clearly articulate why a delayed disclosure is in the best interest of users or people in general, e.g., most supportive of these people's wellbeing or least likely to violate their rights. Submissions that fail to disclose prior to submission and that do not present convincing ethical arguments for delaying disclosure may be rejected or may receive a revision decision.

Often, the most direct path for vulnerability mitigation is to disclose the vulnerability to the responsible party, e.g., the manufacturer. In some cases, for example when the vulnerability is widespread or the mitigation process involves coordination with many organizations, the most ethical course of action may be to leverage organizations that coordinate vulnerability disclosure, such as CISA in the United States, rather than or in addition to disclosing to affected parties directly.

Experiments with live systems without informed consent. Researchers testing live services (e.g., for vulnerabilities), such as web services or APIs that give access to otherwise non-public algorithms or models, must also consider ethics. Such experiments should only be performed after carefully analyzing the potential negative outcomes to the service provider, which may include cost (of CPU cycles or of human effort) or corruption of system state, and to end users who are using the same service provider for non-research purposes. That similar experiments might have been performed in the past does not automatically justify performing them again. Researchers should identify ways to minimize even small risks of negative outcomes, including by considering alternate methods (even if these are more difficult to carry out) and scaling down experiments. Papers describing such experiments should document the analysis that led to the chosen methodology and justify its necessity.

Terms of service. If experiments violate terms of service, the justification for violating them should be discussed in the paper.

Deception. In most cases, participants should be fully informed of the purposes and risks (among other things) of participating in experiments. If deception is to be used, the necessity of doing so should be carefully considered; participants should be debriefed afterward to explain the necessity of the deception, even when the deception was mild.

Wellbeing of team members. In some cases, research activities have the potential to negatively impact team members. For example, research on hate speech could expose team members to disturbing content and negatively impact their psychological wellbeing. Or, crawling morally questionable websites from a home network could cause an ISP to make inferences about the researcher that are incorrect or undesirable to the researcher. Thus, research teams must carefully consider the wellbeing of their researchers as well.

Innovations with both positive and negative potential outcomes. Technologies that can positively impact one stakeholder group may negatively impact that same group or other stakeholder groups. For example, advancements in anonymity systems could positively impact people who need anonymity under repressive regimes or excessive surveillance. At the same time, the mere use of those technologies could create negative impacts on those same people if the use of such technologies is detectable and hence subjects those individuals to additional scrutiny. And, those same anonymity technologies could also be used by illicit actors to conceal their activities. Likewise, advances in program analysis that facilitate more rapid vulnerability finding could be used by both defenders and adversaries. And, as an additional example, new insights into how and why some people become vulnerable to phishing could be used by both defenders and adversaries. Thus, researchers should think broadly about both the positive and negative potential impacts of their research throughout the research process, including during project selection and publication.

Retroactively identifying negative outcomes. While research teams should strive to proactively identify and address all ethics-related concerns before commencing their research, and to proactively address any new concerns that arise about the project's next steps during the research, in some cases research teams may discover post facto that their past research activities had unexpected and previously unknown (to the researchers) negative outcomes. Handling such situations is always difficult. While one might think that an appropriate response is to ignore those past activities and simply not talk about them, doing so does not change the fact that negative outcomes did happen. In general, we believe that research teams should take ownership of any past negative outcomes that their research created, document such outcomes, and discuss what steps, if any, the researchers have taken to remediate those past negative outcomes and/or ensure that the potential for such negative outcomes is proactively addressed in the future, both for themselves and as guides for future researchers.

While this discussion provides a possible path forward for projects that retroactively identify negative outcomes and ethics-related concerns, simply following the suggestions above, making remedies, and documenting plans for the future does not guarantee that the PC will not have ethics-related concerns sufficient for paper rejection. Additionally, the following is explicitly considered unethical: identifying prior to the research that an activity might have negative outcomes, doing the activity anyway, and then documenting how researchers might avoid such negative outcomes in the future.

The law. In addition to considering ethics, we encourage authors to fully consider the legality of their research.


