2023
DOI: 10.52783/cienceng.v11i1.286
Interpretable Deep Learning Models: Enhancing Transparency and Trustworthiness in Explainable AI

Abstract: Explainable AI (XAI) aims to address the opacity of deep learning models, which can limit their adoption in critical decision-making applications. This paper presents a novel framework that integrates interpretable components and visualization techniques to enhance the transparency and trustworthiness of deep learning models. We propose a hybrid explanation method combining saliency maps, feature attribution, and local interpretable model-agnostic explanations (LIME) to provide comprehensive insights into the …
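The abstract names three explanation components (saliency maps, feature attribution, LIME). The paper's actual implementation is not shown here; the following is only a minimal sketch, assuming a toy PyTorch classifier and synthetic input, of how gradient-based saliency and a gradient-times-input attribution could be computed, with a LIME explainer layered on separately.

```python
# Hypothetical sketch: gradient saliency and gradient-x-input attribution for a
# toy classifier. The model, input shapes, and attribution rule are illustrative
# assumptions, not the framework described in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a deep model (the paper's architecture is not specified here).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # one synthetic RGB image
logits = model(x)
target = logits.argmax(dim=1).item()

# Saliency map: gradient of the predicted class score w.r.t. the input pixels.
logits[0, target].backward()
saliency = x.grad.abs().max(dim=1).values          # (1, 32, 32) per-pixel importance

# Simple feature attribution: gradient x input, one common attribution rule.
attribution = (x.grad * x.detach()).sum(dim=1)     # (1, 32, 32) signed contributions

print("saliency shape:", tuple(saliency.shape))
print("attribution shape:", tuple(attribution.shape))
# A local surrogate explanation (e.g., lime.lime_image.LimeImageExplainer from
# the lime package) could be added on top to supply the LIME component the
# abstract mentions.
```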

Cited by 0 publications
References 14 publications