Yizheng Chen, Columbia University
Machine learning has shown impressive results in detecting security events such as malware, spam, phishing, and many types of online fraud. Although many research works demonstrate near-perfect accuracy, machine learning models are highly vulnerable to poisoning and evasion attacks. Such weaknesses severely limit the reliable use of machine learning in security-relevant applications.
Building robust machine learning models has always been a cat-and-mouse game, with new attacks constantly devised to defeat the defenses. Recently, a new paradigm has emerged to train verifiably robust machine learning models for image classification tasks. To end the cat-and-mouse game, verifiably robust training gives the ML model robustness properties that can be formally verified against any bounded attacker.
Verifiably robust training minimizes an over-estimate of the attack success rate, computed by a sound over-approximation method. Because ML models differ fundamentally from traditional software, new sound over-approximation methods have been proposed to prove such robustness properties. Here, soundness means that if the analysis finds no successful attack, then none exists. If we can apply this training technique to security-relevant classifiers, we can train ML models with robustness guarantees on their worst-case behavior, even when adversaries adapt their attacks after learning the defense.
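As a concrete illustration of the idea, the following is a minimal sketch of interval bound propagation (IBP), one common sound over-approximation method from this line of work; the toy network, weights, and perturbation budget below are hypothetical examples, not the specific method discussed in this talk.

    import numpy as np

    def linear_bounds(W, b, lo, hi):
        # Propagate elementwise bounds [lo, hi] through x -> W @ x + b.
        # For the output lower bound, positive weights take lo and negative
        # weights take hi (and vice versa for the upper bound). This is sound:
        # every x in [lo, hi] maps inside the returned interval.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def relu_bounds(lo, hi):
        # ReLU is monotone, so applying it to both ends of the interval is sound.
        return np.maximum(lo, 0), np.maximum(hi, 0)

    # Hypothetical two-layer network with random weights.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

    x = rng.normal(size=4)    # clean input; assume its true class is 0
    eps = 0.1                 # attacker's L-infinity perturbation budget
    lo, hi = x - eps, x + eps

    lo, hi = relu_bounds(*linear_bounds(W1, b1, lo, hi))
    logit_lo, logit_hi = linear_bounds(W2, b2, lo, hi)

    # A (possibly loose) sound lower bound on the worst-case margin: the true
    # class's lowest logit minus the other class's highest logit. If positive,
    # no attack within eps can flip the prediction. Verifiably robust training
    # minimizes a loss on such worst-case logits, i.e., it minimizes an
    # over-estimate of the attack success rate.
    margin = logit_lo[0] - logit_hi[1]
    print("verified robust" if margin > 0 else "not verified", margin)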
In this talk, I will discuss the following:
- What is verifiably robust training?
- What are the main challenges in applying the verifiably robust training technique to security applications?
Yizheng Chen, Columbia University
Yizheng Chen is a Postdoctoral Researcher at Columbia University. She received her Ph.D. in Computer Science from the Georgia Institute of Technology. She is interested in designing and implementing secure machine learning systems and in applying machine learning and graphical models to solve security problems.
@misc{chen2019verifiably,
author = {Yizheng Chen},
title = {Verifiably Robust Machine Learning for Security},
year = {2019},
address = {Santa Clara, CA},
publisher = {USENIX Association},
month = aug
}