Carmela Troncoso, EPFL
In a machine-learning dominated world, users' digital interactions are monitored and scrutinized in order to enhance services. These enhancements, however, may not always have users' benefit and preferences as their primary goal. Machine learning, for instance, can be used to learn users' demographics and interests in order to fuel targeted advertising, regardless of people's privacy rights; or to learn bank customers' behavioral patterns to optimize the monetary benefits of loans, with disregard for discrimination. In other words, machine learning models may be adversarial in their goals and operation. Therefore, adversarial machine learning techniques that are usually considered undesirable can be turned into robust protection mechanisms for users. In this talk we discuss two protective uses of adversarial machine learning, as well as the challenges for protection that arise from the biases implicit in many machine learning models.
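To make the core idea concrete, the following is a minimal, hypothetical sketch (in Python with NumPy; it is not the method presented in the talk) of how an evasion-style adversarial example can act as a user-side protection: a small perturbation crafted against a profiling classifier's loss gradient lowers the profiler's confidence in a sensitive attribute. The logistic "profiler", its weights, and the perturbation budget eps are all illustrative assumptions.

# Hypothetical sketch: an adversarial perturbation used as a user-side
# protection against a profiling classifier (not the talk's method).
import numpy as np

rng = np.random.default_rng(0)

# Assumed profiling model: logistic regression with known weights.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    """Probability the profiler assigns to the sensitive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def protect(x, eps=0.3):
    """One FGSM-style step against the profiler.

    For a logistic model, the gradient of the log-loss for label 1
    with respect to the input is (p - 1) * w; stepping along its sign
    increases that loss, i.e., lowers the profiler's confidence in
    the sensitive class.
    """
    p = predict_proba(x)
    grad = (p - 1.0) * w          # d(loss for label 1) / dx
    return x + eps * np.sign(grad)

x = rng.normal(size=20)            # a user's feature vector
print(f"before: {predict_proba(x):.3f}")
print(f"after:  {predict_proba(protect(x)):.3f}")

Under these assumptions the profiler's confidence in the sensitive attribute drops after the perturbation; protective tools in practice face the harder setting where the profiler's model is unknown and must be attacked through queries or transferable surrogates.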
Carmela Troncoso, École Polytechnique Fédérale de Lausanne (EPFL)
Carmela Troncoso is an Assistant Professor at EPFL, where she leads the Security and Privacy Engineering (SPRING) Laboratory. Her research focuses on privacy protection, in particular on developing systematic means to build privacy-preserving systems and to evaluate these systems' information leakage.
@inproceedings{troncoso-keynote-2019,
  author = {Carmela Troncoso},
  title = {Keynote Address: {PETs}, {POTs}, and Pitfalls: Rethinking the Protection of Users against Machine Learning},
  year = {2019},
  address = {Santa Clara, CA},
  publisher = {USENIX Association},
  month = aug
}