Lakshmi Likhitha Mankali, New York University; Ozgur Sinanoglu, New York University Abu Dhabi; Satwik Patnaik, University of Delaware
Logic locking is a hardware-based solution that protects against hardware intellectual property (IP) piracy. With the advent of powerful machine learning (ML)-based attacks, researchers have, over the last 5 years, developed several learning resilient locking techniques claiming superior security guarantees. However, these security guarantees stem from evaluations against existing ML-based attacks that have critical limitations: (i) they operate as black boxes, i.e., they do not provide any explanations, (ii) they are not practical, i.e., they do not consider approaches followed by the semiconductor industry, and (iii) they are not broadly applicable, i.e., they evaluate the security of only a specific logic locking technique.
In this work, we question the security provided by learning resilient locking techniques by developing an attack (INSIGHT) that uses an explainable graph neural network (GNN). INSIGHT recovers the secret key without requiring scan access, i.e., in an oracle-less setting, for 7 unbroken learning resilient locking techniques, including 2 industry-adopted logic locking techniques. INSIGHT achieves an average key-prediction accuracy (KPA) 2.87×, 1.75×, and 1.67× higher than existing ML-based attacks. We demonstrate the efficacy of INSIGHT by evaluating locked designs ranging from widely used academic suites (ISCAS-85, ITC-99) to larger designs, such as the MIPS, Google IBEX, and mor1kx processors. We perform 2 practical case studies: (i) recovering secret keys of locking techniques used in a widely used commercial EDA tool (Synopsys TestMAX) and (ii) showcasing the ramifications of leaking the secret key for an image-processing application. We will open-source our artifacts to foster research on developing learning resilient locking techniques.
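As a point of reference for the reported numbers, the sketch below (ours, not the authors' code) shows how a key-prediction accuracy (KPA) metric of this kind is typically computed: the fraction of secret key bits an oracle-less attack predicts correctly. The function name and example key values are illustrative assumptions, not artifacts from INSIGHT.

# Illustrative sketch (not the authors' code): KPA as the fraction of secret
# key bits an oracle-less attack predicts correctly.
def key_prediction_accuracy(predicted_key: str, correct_key: str) -> float:
    """Return the fraction of key bits predicted correctly."""
    assert len(predicted_key) == len(correct_key), "key lengths must match"
    matches = sum(p == c for p, c in zip(predicted_key, correct_key))
    return matches / len(correct_key)

# Hypothetical 8-bit example: 6 of 8 bits match, so KPA = 0.75.
print(key_prediction_accuracy("10110011", "10110101"))

Under this reading, a relative improvement such as 2.87× is the ratio between INSIGHT's average KPA and that of a baseline attack.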
@inproceedings{mankali2024insight,
  author    = {Lakshmi Likhitha Mankali and Ozgur Sinanoglu and Satwik Patnaik},
  title     = {{INSIGHT}: Attacking {Industry-Adopted} Learning Resilient Logic Locking Techniques Using Explainable Graph Neural Network},
  booktitle = {33rd USENIX Security Symposium (USENIX Security 24)},
  year      = {2024},
  isbn      = {978-1-939133-44-1},
  address   = {Philadelphia, PA},
  pages     = {91--108},
  url       = {https://www.usenix.org/conference/usenixsecurity24/presentation/mankali},
  publisher = {USENIX Association},
  month     = aug
}