Camille François and Sasha Costanza-Chock, Algorithmic Justice League and Harvard's Berkman Klein Center for Internet & Society
Bug bounty programs for security vulnerabilities have received a great deal of attention in recent years, accompanied by adoption across a wide variety of organizations and significant growth in the number of participants on the major platforms hosting such programs. This talk presents the conclusions of a research effort by the Algorithmic Justice League examining the applicability of bug bounties and related vulnerability disclosure mechanisms to the discovery, disclosure, and redress of algorithmic harms. We present a typology of the design levers that characterize these programs in the information security space and analyze their tradeoffs. We then scrutinize a recent trend of expanding bug bounty programs to socio-technical issues, from data abuse bounties (Facebook, Google) to algorithmic bias bounties (Rockstar Games, Twitter). Finally, we use a design justice lens to evaluate what the algorithmic harms space could borrow from these programs and, reciprocally, what traditional bug bounty programs could learn from the burgeoning algorithmic harms community.