Explainable AI-Driven Zero-Trust Anomaly Detection for Encrypted Traffic
Team
- E/20/288, Chalaka Perera, e20288@eng.pdn.ac.lk
- E/20/420, Janith Wanasinghe, e20420@eng.pdn.ac.lk
- E/20/449, Sandaru Wijewardhana, e20449@eng.pdn.ac.lk
Supervisors
- Dr. Suneth Namal Karunarathna, namal@eng.pdn.ac.lk
- Dr. Upul Jayasinghe, upuljm@eng.pdn.ac.lk
Table of Contents
- Abstract
- Related Works
- Methodology
- Experiment Setup and Implementation
- Results and Analysis
- Conclusion
- Publications
- Links
Abstract
Modern cybersecurity is shifting toward encryption to protect data privacy, but this often blinds traditional Intrusion Detection Systems (IDS) that rely on payload inspection. Concurrently, the rise of cloud computing and remote work has made perimeter-based security obsolete, leading to the adoption of Zero-Trust Architecture (ZTA), which requires continuous verification of every entity. While Deep Learning (DL) models can detect anomalies in encrypted traffic without decryption by analyzing metadata, their “black-box” nature creates a trust deficit that hinders automated policy enforcement. This project proposes a framework integrating Encrypted Traffic Analysis (ETA) with Explainable AI (XAI) using SHAP to provide real-time, human-readable rationales for security decisions.
Related Works
- Encrypted Traffic Analysis (ETA): Research shows that flow-based features such as packet timing and size can identify malware families with high accuracy. Methods such as Convolutional Neural Networks (CNNs) treat traffic as images to capture spatial correlations.
- Zero-Trust Architecture (ZTA): Studies emphasize that ZTA must extend beyond identity checks to evaluate connection quality in real time. However, implementing mutual TLS and continuous authorization introduces significant CPU overhead.
- Explainable AI (XAI): Techniques such as SHAP and LIME are being adapted to cybersecurity to map AI decisions to frameworks such as MITRE ATT&CK.
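The CNN-based ETA work above can be illustrated with a minimal sketch of the "traffic as image" representation: the first n×n bytes of a flow are truncated or zero-padded and reshaped into an n×n grayscale grid that a CNN could consume. The byte stream below is a random placeholder, not real capture data, and the 28×28 size is just a common convention, not a requirement of the cited methods.

```python
import random

random.seed(0)
# Placeholder for the raw bytes of a captured flow (headers + encrypted payload).
flow_bytes = bytes(random.randrange(256) for _ in range(1200))

def flow_to_image(raw: bytes, side: int = 28):
    """Truncate or zero-pad the byte stream, then reshape it into a
    side x side grid of intensities in [0, 255]."""
    needed = side * side
    padded = raw[:needed].ljust(needed, b"\x00")
    return [[padded[r * side + c] for c in range(side)] for r in range(side)]

img = flow_to_image(flow_bytes)
print(len(img), len(img[0]))  # → 28 28
```

Because packets that are adjacent in time land in adjacent rows, spatial filters in a CNN can pick up the temporal/structural correlations the survey refers to.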
Methodology
The proposed framework utilizes a multi-stage pipeline:
- Feature Extraction: Focuses on non-encrypted metadata including packet size, inter-arrival times, and TLS handshake parameters.
- Detection Model: Employs Deep Dictionary Learning enhanced with Decision Trees or Isolation Forests.
- XAI Integration: A SHAP-based engine provides real-time explanations for why a specific flow was flagged.
- Policy Enforcement: Decisions feed back into the ZTA Policy Engine to dynamically adjust access (e.g., throttle, block, or step-up authentication).
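The pipeline above can be sketched end to end in miniature: a toy anomaly scorer over flow metadata, exact Shapley attributions computed by brute-force coalition enumeration (the quantity that the SHAP library approximates at scale), and a threshold-based policy decision. All feature names, weights, baselines, and thresholds here are illustrative assumptions, not the project's actual model or schema.

```python
from itertools import combinations
from math import factorial

# Hypothetical flow record built from non-encrypted metadata only.
flow = {"mean_pkt_size": 1480.0, "mean_iat_ms": 0.4, "tls_cipher_rank": 9.0}
# Baseline ("typical benign") values the explainer compares against.
baseline = {"mean_pkt_size": 600.0, "mean_iat_ms": 20.0, "tls_cipher_rank": 2.0}

def anomaly_score(features):
    """Toy stand-in for the detection model: weighted distance from baseline."""
    weights = {"mean_pkt_size": 0.001, "mean_iat_ms": 0.03, "tls_cipher_rank": 0.08}
    return sum(weights[k] * abs(features[k] - baseline[k]) for k in features)

def shapley_values(features, baseline, score_fn):
    """Exact Shapley values by enumerating every feature coalition
    (feasible here only because there are three features)."""
    names = list(features)
    n = len(names)
    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                # Coalition members take observed values; the rest, baseline values.
                with_f = {k: features[k] if (k in subset or k == f) else baseline[k]
                          for k in names}
                without_f = {k: features[k] if k in subset else baseline[k]
                             for k in names}
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (score_fn(with_f) - score_fn(without_f))
        phi[f] = total
    return phi

score = anomaly_score(flow)
phi = shapley_values(flow, baseline, anomaly_score)

# Policy Engine hook: thresholds are illustrative placeholders.
if score > 2.0:
    action = "block"
elif score > 1.0:
    action = "step-up-auth"
else:
    action = "allow"

top_feature = max(phi, key=phi.get)
print(f"score={score:.2f} action={action}, driven mainly by {top_feature}")
```

The Shapley efficiency property (attributions sum to the score minus the baseline score) is what lets the Policy Engine pair each enforcement action with a human-readable rationale such as "blocked: dominated by abnormal mean packet size."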
Experiment Setup and Implementation
⚠️ Status: Currently in Progress
- Dataset: Using the CIC-IDS 2017/2018 datasets for training and validation.
- Environment: Implementation uses Python-based deep learning frameworks and XAI libraries (SHAP).
- Integration: Targeting deployment in simulated environments to assess compatibility with 10 Gbps+ networks.
Results and Analysis
⏳ Status: Pending (Expected Feb 2026)
- Preliminary literature review indicates that SHAP can produce faithful, human-readable explanations, but its computational cost remains a challenge for real-time operation on high-bandwidth networks.
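The real-time concern can be made concrete with a back-of-envelope budget. Assuming (purely illustratively) an average packet size of 1000 bytes and 50 packets per flow, a saturated 10 Gbps link implies:

```python
# Back-of-envelope per-flow time budget at 10 Gbps; all inputs are assumptions.
LINK_BPS = 10e9        # link rate in bits per second
AVG_PKT_BYTES = 1000   # assumed average packet size
PKTS_PER_FLOW = 50     # assumed average flow length

pkts_per_sec = LINK_BPS / (AVG_PKT_BYTES * 8)   # ~1.25 M packets/s
flows_per_sec = pkts_per_sec / PKTS_PER_FLOW    # ~25 k flows/s
budget_us = 1e6 / flows_per_sec                 # microseconds per flow
print(f"{flows_per_sec:.0f} flows/s -> {budget_us:.0f} us per flow")  # → 25000 flows/s -> 40 us per flow
```

Any per-flow detect-plus-explain step must fit within that budget or run asynchronously off the enforcement path, which is why the cost of SHAP approximation matters for this project.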
Conclusion
This project identifies XAI as the “missing piece” needed to make AI-based detection usable in automated Zero-Trust systems. By bridging the gap between detection, explanation, and automated policy creation, the framework aims to provide a practical solution for securing modern encrypted data streams.
Publications
📝 Note: Documents will be linked as they become available.
- Perera, C., Wanasinghe, J., Wijewardhana, S. et al. “Explainable AI-Driven Zero Trust Anomaly Detection for Encrypted Traffic” (2025). (Not Published)