Integrating Neuro-Symbolic AI for Pneumonia Diagnosis from Chest X-rays: A Synergistic Approach

Team

- E/19/074, Dharmarathne B.A.M.I.E, email
- E/19/424, Weerasinghe H.A.S.N, email
- E/19/405, Thennakoon T.M.R.S, email

Supervisors

- Dr. Sampath Deegalla, email
- Dr. Damayanthi Herath, email

Table of Contents

  1. Abstract
  2. Related Works
  3. Methodology
  4. Experiment Setup and Implementation
  5. Results and Analysis
  6. Conclusion
  7. Future Work
  8. Publications
  9. Contributors
  10. Acknowledgements
  11. Links

Abstract

Pneumonia remains a major global health concern, especially among vulnerable populations. While early detection and accurate diagnosis are crucial for effective treatment, reliance on expert radiologists and the limitations of existing diagnostic models have prompted the need for more interpretable and reliable AI-based solutions.

In this project, we introduce a novel hybrid architecture that integrates Convolutional Neural Networks (CNNs) with Symbolic AI to detect pneumonia from chest X-ray images. While CNNs excel at pattern recognition and feature extraction, they are often criticized for their “black-box” nature, which limits their application in clinical settings where interpretability is essential. By incorporating symbolic reasoning (using either rule-based systems or knowledge graphs), our proposed model not only predicts pneumonia but also provides a logical explanation of the results—making AI predictions more transparent and trustworthy to healthcare professionals.

This synergistic approach addresses the gaps in existing models by enhancing interpretability, improving generalizability with limited data through transfer learning, and ensuring real-world relevance through domain-specific symbolic logic. Our work paves the way for safe, transparent, and efficient AI applications in medical diagnostics.

Keywords: Convolutional Neural Networks (CNNs), Neuro-Symbolic AI, Pneumonia Detection, Transfer Learning, Explainable AI (XAI), Model Interpretability, Symbolic Reasoning, Chest X-rays, Healthcare AI


Related Works

Significant work has been done using CNNs for pneumonia classification; popular models include CheXNet, ResNet, DenseNet, and VGG16. These models have achieved reported accuracies ranging from 96% to 99% when trained on public datasets such as ChestX-ray14 and Kaggle’s pneumonia dataset.

Despite their performance, CNNs are inherently non-transparent. In medical applications, this “black-box” nature undermines trust. Studies have introduced Grad-CAM and Grad-CAM++ for visual interpretation, but these heatmaps still fall short of the step-by-step logical reasoning that clinical diagnostic procedures require.
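
For context, the following is a minimal Grad-CAM sketch in TensorFlow showing how such heatmaps are derived. The model and convolutional layer name are assumptions for illustration (e.g. the last convolutional block of a Keras CNN), not a specific published implementation.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=0):
    """Return a [0, 1] heatmap over the named conv layer for one image (H, W, 3)."""
    # Map the input to both the conv feature maps and the prediction.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    # Gradients of the class score w.r.t. the feature maps,
    # global-average-pooled into one weight per channel.
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, then ReLU and normalisation.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```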

Symbolic AI uses formal logic-based approaches to encode expert knowledge. Although rarely applied in medical imaging, symbolic AI has shown promise in domains like NLP and program synthesis. Models like Logic Tensor Networks and DeepProbLog demonstrate how combining neural and symbolic methods can yield both accuracy and explainability.

Recent literature highlights a growing trend toward hybrid neuro-symbolic systems. While research has explored these models for NLP and simple classification tasks, few studies have applied them to high-stakes applications like pneumonia diagnosis. Our project aims to fill this gap by developing a trustworthy AI system for radiological diagnosis.


Methodology

Our project proposes a hybrid pipeline that combines the predictive power of CNNs with the interpretability of symbolic AI. The workflow is structured into the following components (a minimal end-to-end sketch follows the list):

  1. Neural Network-Based Feature Extraction
  2. Symbolic Reasoning Layer
  3. Evaluation Metrics
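
The sketch below, assuming a DenseNet121 backbone pretrained on ImageNet and a small hand-written rule layer, illustrates how the first two components could fit together. All layer names, rule predicates, and thresholds are illustrative placeholders, not the final system.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

def build_backbone():
    """Transfer-learning feature extractor: frozen DenseNet121 plus a small head."""
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet",
        input_shape=IMG_SIZE + (3,), pooling="avg",
    )
    base.trainable = False  # freeze for initial training; fine-tune later if needed
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.densenet.preprocess_input(inputs)
    x = base(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(pneumonia)
    return tf.keras.Model(inputs, outputs)

def symbolic_layer(p_pneumonia, findings):
    """Combine the CNN score with rule-based findings into a label and explanation.

    `findings` is a dict of booleans produced upstream (e.g. from heatmap
    regions or structured metadata); the keys here are hypothetical.
    """
    explanation = []
    if p_pneumonia >= 0.5:
        explanation.append(f"CNN score {p_pneumonia:.2f} exceeds the 0.5 threshold")
    if findings.get("lower_lobe_opacity"):
        explanation.append("rule: lower-lobe opacity supports consolidation")
    if findings.get("clear_costophrenic_angles") and p_pneumonia < 0.3:
        explanation.append("rule: clear angles and a low score favour NORMAL")
    label = "PNEUMONIA" if p_pneumonia >= 0.5 else "NORMAL"
    return label, explanation

# Example: label, why = symbolic_layer(0.87, {"lower_lobe_opacity": True})
```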


Experiment Setup and Implementation

Dataset

Our project utilizes the “Chest X-Ray Images (Pneumonia)” dataset available on Kaggle. This dataset is widely used in pneumonia detection research and consists of high-quality pediatric chest X-ray images collected from the Guangzhou Women and Children’s Medical Center.

Local datasets will be integrated as ethical approvals are completed, ensuring broader generalization across demographics and imaging hardware.
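
For reference, here is a minimal loading sketch in Keras, assuming the dataset’s standard directory layout on Kaggle (`train/` and `test/` folders, each with `NORMAL/` and `PNEUMONIA/` subfolders); the local path is a placeholder.

```python
import tensorflow as tf

DATA_DIR = "chest_xray"  # placeholder: path to the extracted Kaggle dataset

train_ds = tf.keras.utils.image_dataset_from_directory(
    f"{DATA_DIR}/train",
    label_mode="binary",      # NORMAL -> 0, PNEUMONIA -> 1 (alphabetical order)
    image_size=(224, 224),    # resize to match the CNN input
    batch_size=32,
    shuffle=True,
    seed=42,
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    f"{DATA_DIR}/test",
    label_mode="binary",
    image_size=(224, 224),
    batch_size=32,
    shuffle=False,            # keep order stable for evaluation
)
```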

Tools and Technologies

| Category           | Tools/Frameworks                 |
|--------------------|----------------------------------|
| Language           | Python                           |
| DL Frameworks      | TensorFlow, Keras                |
| Image Processing   | OpenCV, PIL                      |
| Symbolic Reasoning | Prolog, Custom Rule Engine       |
| Explainability     | SHAP, LIME, Grad-CAM             |
| Evaluation         | Scikit-learn, ROC-AUC, K-Fold CV |
| Deployment         | FastAPI, TensorFlow Serving      |
| Visualization      | Matplotlib, Seaborn              |
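
As an illustration of how the evaluation row above could be wired up, the following sketch computes ROC-AUC, a confusion matrix, and a per-class report with scikit-learn; the label and probability arrays are placeholder stand-ins for real model outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, classification_report

# Placeholder outputs: 0 = NORMAL, 1 = PNEUMONIA.
y_true = np.array([0, 0, 1, 1, 1])
y_prob = np.array([0.10, 0.40, 0.80, 0.65, 0.92])  # P(pneumonia) from the model

auc = roc_auc_score(y_true, y_prob)          # threshold-free ranking quality
y_pred = (y_prob >= 0.5).astype(int)         # fixed cut-off; could be tuned

print(f"ROC-AUC: {auc:.3f}")
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=["NORMAL", "PNEUMONIA"]))
```

For the K-fold cross-validation listed above, `sklearn.model_selection.StratifiedKFold` would supply class-balanced splits over the training set.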

Results and Analysis

CNN-Only Model

Neuro-Symbolic Hybrid Model

These results highlight the trade-off between pure accuracy and real-world interpretability. Clinicians preferred models that explained their decisions over those with slightly higher accuracy.


Conclusion

This project demonstrates that hybrid Neuro-Symbolic AI models can bridge the gap between accuracy and explainability in medical diagnostics. The combination of CNN-based feature extraction and symbolic reasoning enables transparent and trustworthy decision-making, essential in clinical applications. Our work sets a foundation for broader AI adoption in high-risk domains by enhancing both prediction and understanding.


Future Work


Contributors

Supervisors:
Dr. Sampath Deegalla
Dr. Damayanthi Herath
Department of Computer Engineering, University of Peradeniya


Acknowledgements

We sincerely thank the Department of Computer Engineering, University of Peradeniya, for the infrastructure and academic support. Special thanks to our supervisors for their expert guidance and to medical professionals who contributed to rule design and dataset review.