Integrating Fuzzy Logic and Deep Neural Networks for Interpretable Decision-Making in Medical Expert Systems

Authors

  • Orwell Madureira Rowling, AI Research Scientist – Medical Decision Support Systems, United States
  • Shelley T.S. Eliot, Healthcare AI Engineer – Neuro-Fuzzy & Hybrid Models, United States

Keywords

Medical Expert Systems, Fuzzy Logic, Deep Neural Networks, Explainable AI (XAI), Interpretable Machine Learning, Clinical Decision Support, Hybrid AI Models, Medical Diagnosis, Neuro-Fuzzy Systems, Transparency in AI

Abstract

Medical expert systems require both high diagnostic accuracy and interpretability to support trustworthy clinical decision-making. Deep Neural Networks (DNNs) achieve state-of-the-art performance on many medical prediction tasks, yet they lack transparency, a property that is essential in clinical settings. Conversely, fuzzy logic systems provide explainability through linguistic, rule-based structures, but they often scale poorly and underperform on complex data. This paper proposes an integrated framework that combines the interpretability of fuzzy logic with the predictive power of deep neural networks. The hybrid model embeds fuzzy inference layers within deep architectures to retain both transparency and accuracy. We evaluate the approach on diagnostic datasets and show that it maintains competitive accuracy while improving interpretability, offering a promising direction for future medical AI systems.
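To make the hybrid design concrete, the sketch below shows one plausible form of an embeddable fuzzy inference layer: Gaussian membership functions, product t-norm rule firing, and first-order Takagi-Sugeno consequents, in the spirit of ANFIS-style neuro-fuzzy models. It is an illustrative assumption rather than the paper's implementation; the class and method names (FuzzyInferenceLayer, forward), the feature count, and the number of membership functions per feature are all hypothetical.

    # Minimal NumPy sketch of an embeddable Takagi-Sugeno fuzzy inference layer.
    # Hypothetical illustration only; not the authors' implementation.
    import numpy as np

    class FuzzyInferenceLayer:
        """Maps crisp inputs to an output plus interpretable rule activations.

        Each input feature gets n_mf Gaussian membership functions; a rule pairs
        one membership function per feature (product t-norm). In a full
        neuro-fuzzy model the centers, widths, and consequents would be trained
        by backpropagation alongside the surrounding deep layers.
        """

        def __init__(self, n_features, n_mf, rng=None):
            rng = rng or np.random.default_rng(0)
            self.centers = rng.uniform(0.0, 1.0, size=(n_features, n_mf))
            self.widths = np.full((n_features, n_mf), 0.2)
            self.n_rules = n_mf ** n_features
            # First-order Takagi-Sugeno consequent (linear + bias) per rule.
            self.consequents = rng.normal(size=(self.n_rules, n_features + 1))
            # Enumerate every combination of membership functions as one rule.
            self._rule_index = np.array(
                np.meshgrid(*[range(n_mf)] * n_features, indexing="ij")
            ).reshape(n_features, -1).T  # shape: (n_rules, n_features)

        def forward(self, x):
            # Gaussian membership degrees, shape (n_features, n_mf).
            mu = np.exp(-((x[:, None] - self.centers) ** 2) / (2.0 * self.widths ** 2))
            # Rule firing strength: product of the memberships it references.
            w = np.prod(mu[np.arange(len(x))[:, None], self._rule_index.T], axis=0)
            w_norm = w / (w.sum() + 1e-12)  # normalized firing strengths
            # Each rule proposes a linear function of the inputs; blend them.
            y_rule = self.consequents @ np.append(x, 1.0)
            return float(w_norm @ y_rule), w_norm

    # Usage: two normalized clinical features, three fuzzy sets each (9 rules).
    layer = FuzzyInferenceLayer(n_features=2, n_mf=3)
    score, rule_weights = layer.forward(np.array([0.7, 0.3]))
    print(score, rule_weights.argmax())  # the strongest-firing rule is inspectable

Because the normalized firing strengths are computed explicitly, an explanation step can report which fuzzy rules contributed most to a given prediction, which is the interpretability property the abstract emphasizes.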

Published

2025-01-10