Extracting Training Data from Large Language Models

Authors: 

Nicholas Carlini, Google; Florian Tramèr, Stanford University; Eric Wallace, UC Berkeley; Matthew Jagielski, Northeastern University; Ariel Herbert-Voss, OpenAI and Harvard University; Katherine Lee and Adam Roberts, Google; Tom Brown, OpenAI; Dawn Song, UC Berkeley; Úlfar Erlingsson, Apple; Alina Oprea, Northeastern University; Colin Raffel, Google

Abstract: 

It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data.

We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. Worryingly, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
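The attack the abstract describes can be summarized as a two-step pipeline: generate many samples from the model, then rank them with a membership-style metric so likely-memorized text surfaces first. The sketch below illustrates that pipeline under stated assumptions; it is not the authors' released code, and the model names ("gpt2-xl" as the attacked model, "gpt2" as a smaller reference), the sampling parameters, and the simple perplexity-ratio ranking are illustrative choices.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"

# Attacked ("large") model and a smaller reference model from the same family.
# These checkpoints are illustrative assumptions, not necessarily the paper's exact setup.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
large = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device).eval()
small = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def perplexity(model, text):
    # Per-token perplexity of `text` under `model` (exp of the mean cross-entropy).
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    return torch.exp(model(ids, labels=ids).loss).item()

@torch.no_grad()
def sample_candidates(n=100, max_len=128):
    # Step 1: sample n sequences unconditionally from the large model with top-k sampling.
    start = torch.full((n, 1), tokenizer.bos_token_id, dtype=torch.long, device=device)
    out = large.generate(start, do_sample=True, top_k=40, max_length=max_len,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.batch_decode(out, skip_special_tokens=True)

# Step 2: rank candidates. Text that the large model finds far more likely than the
# small reference model is a strong memorization candidate.
candidates = [t for t in sample_candidates() if len(t.split()) > 3]
ranked = sorted(candidates, key=lambda t: perplexity(large, t) / perplexity(small, t))
for text in ranked[:10]:
    print(repr(text[:80]))

Ranking by the ratio of the large model's perplexity to a smaller reference model's perplexity is one of several filtering metrics the paper evaluates; sequences that only the large model finds unusually likely tend to be memorized rather than merely fluent.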

BibTeX
@inproceedings {274574,
author = {Nicholas Carlini and Florian Tram{\`e}r and Eric Wallace and Matthew Jagielski and Ariel Herbert-Voss and Katherine Lee and Adam Roberts and Tom Brown and Dawn Song and {\'U}lfar Erlingsson and Alina Oprea and Colin Raffel},
title = {Extracting Training Data from Large Language Models},
booktitle = {30th USENIX Security Symposium (USENIX Security 21)},
year = {2021},
isbn = {978-1-939133-24-3},
pages = {2633--2650},
url = {https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting},
publisher = {USENIX Association},
month = aug
}
