Hidde Lycklama, ETH Zurich; Alexander Viand, Intel Labs; Nicolas Küchler, ETH Zurich; Christian Knabenhans, EPFL; Anwar Hithnawi, ETH Zurich
Recent advancements in privacy-preserving machine learning are paving the way to extend the benefits of ML to highly sensitive data that, until now, has been hard to utilize due to privacy concerns and regulatory constraints. Simultaneously, there is a growing emphasis on enhancing the transparency and accountability of ML, including the ability to audit deployments for aspects such as fairness, accuracy, and compliance. Although ML auditing and privacy-preserving machine learning have been extensively researched, they have largely been studied in isolation. However, the integration of these two areas is becoming increasingly important. In this work, we introduce Arc, an MPC framework designed for auditing privacy-preserving machine learning. Arc cryptographically ties together the training, inference, and auditing phases to enable robust and private auditing. At the core of our framework is a new protocol for efficiently verifying inputs against succinct commitments. We evaluate the performance of our framework when instantiated with our consistency protocol and compare it to hashing-based and homomorphic-commitment-based approaches, demonstrating that it is up to 10^4× faster and up to 10^6× more concise.
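To make the commit-then-audit pattern described above concrete, the following is a minimal plaintext sketch, not Arc's protocol: Arc performs these checks inside MPC over secret-shared data using succinct cryptographic commitments, whereas the SHA-256 hash and all function and variable names below are illustrative stand-ins.

```python
# Minimal sketch of the generic commit-then-audit pattern (assumption:
# this is NOT Arc's protocol; Arc works inside MPC on secret-shared data
# with succinct cryptographic commitments, while the plaintext SHA-256
# hash and the names here are illustrative stand-ins only).
import hashlib
import json


def commit(artifact: dict) -> str:
    """Bind an artifact (e.g., model weights or a training-data digest) to a commitment."""
    canonical = json.dumps(artifact, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def verify(artifact: dict, commitment: str) -> bool:
    """At audit time, check that the presented artifact matches the earlier commitment."""
    return commit(artifact) == commitment


# Training phase: the trainer publishes a commitment to the resulting model.
model = {"weights": [0.12, -0.57, 0.33], "hyperparams": {"lr": 0.01}}
c_model = commit(model)

# Audit phase: the auditor verifies consistency before running fairness or
# accuracy checks, so a model swapped after training would be rejected.
assert verify(model, c_model)
```

The point of the pattern is that the artifact presented to the auditor is provably the same one used during training or inference; the paper's contribution is a protocol that makes this check efficient and succinct in the privacy-preserving (MPC) setting.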