A Novel IoT-Based Explainable Deep Learning Framework for Intrusion Detection Systems
Abstract
The growth of the Internet of Things (IoT) is accompanied by serious cybersecurity risks, especially with the emergence of IoT botnets. In this context, intrusion detection systems (IDSs) have proven effective at detecting the various attacks that may target IoT networks, especially when leveraging machine/deep learning (ML/DL) techniques. ML/DL-based solutions make "machine-centric" decisions about intrusion detection in the IoT network, which are then acted upon by humans (i.e., cybersecurity operations staff). However, these solutions provide no explanation of why a given decision was made, so their results cannot be properly understood or exploited by humans. To address this issue, explainable artificial intelligence (XAI) is a promising paradigm that helps explain the decisions of ML/DL-based IDSs and make them understandable to cybersecurity experts. In this article, we design a novel XAI-powered framework that not only detects intrusions/attacks in IoT networks but also interprets the critical decisions made by ML/DL-based IDSs. To this end, we first build an ML/DL-based IDS using a deep neural network (DNN) to detect and predict IoT attacks in real time. We then develop multiple XAI models (i.e., RuleFit and SHapley Additive exPlanations (SHAP)) on top of our DNN architecture to provide greater trust, transparency, and explainability of the decisions made by our ML/DL-based IDS to cybersecurity experts. In-depth experimental results on well-known IoT attacks show the efficiency and explainability of the proposed framework.
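To make the two-stage design described above concrete, the sketch below trains a small DNN classifier for binary attack detection and then applies SHAP's model-agnostic KernelExplainer on top of it. This is a minimal illustration under stated assumptions, not the authors' implementation: the architecture, feature count, and synthetic stand-in data are placeholders, and the paper's actual datasets and hyperparameters are not reproduced here.

```python
# Minimal sketch: DNN-based IDS + SHAP explanations (placeholder data/architecture).
import numpy as np
import shap
import tensorflow as tf

# --- Stage 1: DNN-based IDS (binary: benign vs. attack) ---
n_features = 20  # placeholder; depends on the IoT traffic features used
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in for preprocessed network-flow features and labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, n_features)).astype("float32")
y_train = rng.integers(0, 2, size=1000)
model.fit(X_train, y_train, epochs=3, batch_size=64, verbose=0)

# --- Stage 2: SHAP on top of the trained DNN ---
# KernelExplainer is model-agnostic: it needs only a prediction function and a
# small background sample to estimate per-feature Shapley values.
background = X_train[:50]
explainer = shap.KernelExplainer(lambda x: model.predict(x, verbose=0), background)

X_test = rng.normal(size=(5, n_features)).astype("float32")
shap_values = explainer.shap_values(X_test)
# shap_values holds per-feature attribution scores for each analyzed flow,
# which an analyst can inspect to see which features drove an "attack" verdict.
```

In practice, the background sample and the flows to be explained would come from the same preprocessing pipeline as the IDS training data, so that the attributions refer to meaningful traffic features (e.g., packet counts or flow durations) rather than raw inputs.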