What Does It Mean for Machine Learning to Be Trustworthy?

Monday, January 27, 2020 - 4:30 pm–5:00 pm

Nicolas Papernot, University of Toronto and Vector Institute

Abstract: 

The attack surface of machine learning is large: training data can be poisoned, predictions manipulated using adversarial examples, models exploited to reveal sensitive information contained in training data, and so on. This is in large part due to the absence of security considerations in the design of ML algorithms. Yet adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy.
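To make one point on this attack surface concrete, the sketch below (illustrative only, not taken from the talk) crafts an adversarial example with the fast gradient sign method against a toy logistic-regression model; all names and values are hypothetical placeholders.

# Minimal sketch (not from the talk): fast gradient sign method (FGSM)
# against a toy logistic-regression model. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=10), 0.1          # toy model parameters
x, y = rng.normal(size=10), 1.0          # a benign input and its label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM: take one step of size eps in the direction that increases the loss.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))

The same one-step perturbation idea underlies many of the attacks benchmarked by libraries such as CleverHans, though real attacks operate on deep models rather than this toy classifier.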

Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires a solid understanding of what legitimate model behavior should look like.

In this talk, we lay the foundations of a framework that fosters trust in deployed ML algorithms. The approach uncovers the influence of training data on test-time predictions, which helps identify poisoned training data as well as adversarial examples or queries that could leak private information. Beyond the immediate implications for security and privacy, we demonstrate how this helps interpret and shed light on the model's internal behavior. We conclude by asking what data representations need to be extracted at training time to enable trustworthy machine learning.
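As a rough illustration of the kind of analysis the abstract describes, the sketch below estimates how much support a test-time prediction has in the training data by voting among its nearest neighbors in a representation space. This is a simplified stand-in in the spirit of nearest-neighbor conformity, not the talk's actual framework; embed, train_x, train_y, and test_x are assumed placeholders.

# Minimal sketch (not the talk's exact framework): estimate how much support a
# test-time prediction has in the training data by voting among its nearest
# neighbors in some representation space. `embed` is a hypothetical feature map.
import numpy as np

def knn_conformity(embed, train_x, train_y, test_x, k=5):
    train_z = np.stack([embed(x) for x in train_x])  # training representations
    z = embed(test_x)                                 # test representation
    dists = np.linalg.norm(train_z - z, axis=1)
    neighbors = np.argsort(dists)[:k]                 # k closest training points
    votes = train_y[neighbors]
    labels, counts = np.unique(votes, return_counts=True)
    pred = labels[np.argmax(counts)]
    conformity = counts.max() / k                     # fraction of neighbors agreeing
    return pred, conformity, neighbors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_x = rng.normal(size=(100, 4))
    train_y = (train_x[:, 0] > 0).astype(int)
    pred, conf, _ = knn_conformity(lambda x: x, train_x, train_y, rng.normal(size=4))
    print("prediction:", pred, "conformity:", conf)

A prediction with low conformity has little consistent support among nearby training points, which is one signal for flagging adversarial examples or inputs whose labels were influenced by poisoned data.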

Nicolas Papernot, University of Toronto and Vector Institute

Nicolas Papernot is an Assistant Professor of Electrical and Computer Engineering at the University of Toronto and Canada CIFAR AI Chair at the Vector Institute. His research interests span the security and privacy of machine learning. Nicolas received a best paper award at ICLR 2017. He is also the co-author of CleverHans, an open-source library widely adopted in the technical community to benchmark machine learning in adversarial settings, and TF Privacy, an open-source library for training differentially private models. He serves on the program committees of several conferences including ACM CCS, IEEE S&P, and USENIX Security. He earned his Ph.D. at the Pennsylvania State University, working with Professor Patrick McDaniel and supported by a Google Ph.D. Fellowship. Upon graduating, he spent a year as a research scientist at Google Brain.

Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

BibTeX
@conference{244704,
  author = {Nicolas Papernot},
  title = {What Does It Mean for Machine Learning to Be Trustworthy?},
  year = {2020},
  address = {San Francisco, CA},
  publisher = {USENIX Association},
  month = jan
}

Presentation Video