Implications of Adversarial Learning for Security and Privacy
Rachel Greenstadt, Drexel University
While machine learning is a powerful tool for data analysis and processing, traditional machine learning methods were not designed to operate in the presence of adversaries. They are based on statistical assumptions about the distribution of the input data, and they rely on training data derived from that data to construct models for analysis. Adversaries may exploit these characteristics to disrupt analytics, cause analytics to fail, or carry out malicious activities that evade detection.
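The sketch below (not from the talk; all feature choices, thresholds, and parameters are illustrative assumptions) shows the kind of evasion the abstract describes: a simple detector is trained under fixed distributional assumptions, and an adversary who knows its weights perturbs a malicious input just enough to cross the decision boundary.

```python
# Minimal sketch of an evasion attack on a learned detector (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Training data drawn from fixed distributions: benign near 0, malicious near 1.
benign = rng.normal(0.0, 0.3, size=(200, 2))
malicious = rng.normal(1.0, 0.3, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.hstack([np.zeros(200), np.ones(200)])

# A simple linear detector fit by least squares (stand-in for any learned model).
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.linalg.lstsq(Xb, y, rcond=None)[0]

def score(x):
    """Detector score; >= 0.5 is flagged as malicious."""
    return np.append(x, 1.0) @ w

# An adversary with knowledge of w nudges a malicious point against the
# weight vector until the detector no longer flags it.
x_mal = np.array([1.0, 1.0])
direction = -w[:2] / np.linalg.norm(w[:2])
x_evaded = x_mal.copy()
while score(x_evaded) >= 0.5:
    x_evaded += 0.05 * direction

print("original score:   ", round(score(x_mal), 3))     # flagged as malicious
print("evaded score:     ", round(score(x_evaded), 3))  # slips past the detector
print("perturbation size:", round(np.linalg.norm(x_evaded - x_mal), 3))
```

The small perturbation norm at the end is the point: the detector's statistical assumptions hold for the training data, but not for inputs an adversary crafts after the model is fixed.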
While these vulnerabilities pose a challenge to using machine learning for security applications, they may also present opportunities to disrupt privacy-invasive learning systems. We will discuss techniques, challenges, and future research directions for reverse-engineering analytics, secure learning, and learning-based security applications.
@inproceedings{greenstadt2016adversarial,
author = {Rachel Greenstadt},
title = {Implications of Adversarial Learning for Security and Privacy},
year = {2016},
address = {Austin, TX},
publisher = {USENIX Association},
month = aug
}