1 CSE, City University, Bangladesh.
2 CSE, Daffodil International University.
3 Institute of Social Welfare And Research, University of Dhaka.
4 Department of Physics, University of Dhaka.
5 CSE, Southeast University.
International Journal of Science and Research Archive, 2025, 17(03), 1133-1145
Article DOI: 10.30574/ijsra.2025.17.3.3376
Received on 12 November 2025; revised on 29 December 2025; accepted on 31 December 2025
This paper explores the critical role of explainable artificial intelligence (XAI) in bridging the gap between the high performance of deep learning models and the need for human interpretability. It investigates methods that enhance transparency and trust by providing meaningful explanations of complex model decisions, thereby addressing challenges posed by the black-box nature of deep neural networks. The study highlights the importance of developing interpretable AI systems to foster user trust and facilitate the integration of AI into sensitive domains such as healthcare and finance. Ultimately, this research aims to advance the understanding and implementation of XAI to ensure responsible and effective AI deployment in the modern era.
Explainable Artificial Intelligence; Deep Learning Interpretability; Transparent AI Models; Human-Centered AI; Trustworthy Machine Learning
Razibul Islam Khan, Mohammad Quayes Bin Habib, Md. Abdur Rahim, Asit Debnath and Md. Mahedi Hasan. Explainable Artificial Intelligence: Bridging the gap between deep learning and human interpretability. International Journal of Science and Research Archive, 2025, 17(03), 1133-1145. Article DOI: https://doi.org/10.30574/ijsra.2025.17.3.3376
Copyright © 2025. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.