Nidhi Rastogi, Rochester Institute of Technology
An automatic, contextual, and trustworthy explanation of cyberattacks is an immediate goal for security experts. Achieving it requires deep knowledge of the system under attack, the attack itself, and real-time data describing environmental conditions. It also requires communicating the explanation in a way that earns experts' trust. Automating the generation of contextual, trustworthy explanations must also handle diverse attack models, which adds to the existing challenge. However, a scientific approach to explanation can yield a system that offers the desired explanations in most use cases. In this presentation, we discuss the limitations of existing machine learning-based security solutions and how contextual security solutions can address them. We share specific use cases to support our argument. We present our research on contextual security (threat intelligence using knowledge graphs) and our ongoing work on explanation-based security.