Interpretability of artificial intelligence (AI) methods, particularly deep neural networks, is of great interest. This heightened focus stems from the widespread use of AI-backed systems. These systems, often relying on intricate neural architectures, can exhibit behavior that is challenging to explain and comprehend. The interpretability of such models is a crucial component of building trusted systems. Many methods exist to approach this problem, but they do not apply straightforwardly to the quantum setting. Here, we explore the interpretability of quantum neural networks using local model-agnostic interpretability measures commonly utilized for classical neural networks. Following this analysis, we generalize a classical technique called LIME (Local Interpretable Model-agnostic Explanations), introducing Q-LIME, which produces explanations of quantum neural networks. A feature of our explanations is the delineation of the region in which data samples have been given a random label, likely a consequence of inherently random quantum measurements. We view this as a step toward understanding how to build responsible and accountable quantum AI models.
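The classical LIME procedure that Q-LIME builds on can be illustrated with a short, self-contained sketch: perturb an input, query the black-box model, and fit a locally weighted linear surrogate whose coefficients act as feature attributions. The quantum_model function below is a purely classical stand-in for a quantum classifier's measured output probabilities, and the sampling and kernel choices are illustrative assumptions, not the authors' Q-LIME implementation.

import numpy as np
from sklearn.linear_model import Ridge

def quantum_model(X):
    # Classical stand-in for a quantum classifier: returns P(label = 1) for each row.
    # In Q-LIME these probabilities would come from repeated measurements of a QNN.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))

def lime_explain(x, predict_fn, n_samples=500, kernel_width=0.75, seed=0):
    # Fit a locally weighted linear surrogate around the single instance x.
    rng = np.random.default_rng(seed)
    # Probe the local decision surface with Gaussian perturbations of x.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Down-weight perturbations that are far from x with an exponential kernel.
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    # The surrogate's coefficients act as local feature attributions.
    return surrogate.coef_

x0 = np.array([0.3, -0.1])
print("local feature attributions:", lime_explain(x0, quantum_model))

Under this sketch, the random-label region mentioned in the abstract would correspond to inputs where the queried probabilities hover near one half, so the local surrogate's attributions carry little information.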
We introduce the concept-driven quantum neural network (CD-QNN), an architecture designed to enhance the interpretability of quantum neural networks (QNNs). CD-QNN merges the representational capabilities of QNNs with the transparency of self-explanatory models by mapping input data into a human-understandable concept space and making decisions based on these concepts. We analyze the algorithmic design of CD-QNN in detail, describing how the concept generator, feature extractor, and feature integrator balance model expressivity against interpretability. Experimental results demonstrate that CD-QNN maintains high predictive accuracy while offering clear and meaningful explanations of its decision-making process. This shift in QNN design underscores the growing importance of interpretability in quantum artificial intelligence and positions CD-QNN and its derivatives as building blocks for reliable, interpretable quantum intelligent systems in future research and applications.
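The three-stage pipeline named above (concept generator, feature extractor, feature integrator) can be read as a simple composition in which the intermediate concepts are exposed alongside the prediction. The following classical skeleton is only a structural illustration under that reading, with invented component names and a toy decision rule, not the CD-QNN architecture itself.

import numpy as np

class ConceptDrivenModel:
    # Toy composition mirroring the concept-generator / feature-extractor /
    # feature-integrator split described in the abstract (illustrative only).
    def __init__(self, concept_generator, feature_extractor, feature_integrator):
        self.concept_generator = concept_generator    # raw input -> human-readable concepts
        self.feature_extractor = feature_extractor    # concepts -> model features
        self.feature_integrator = feature_integrator  # features -> prediction

    def predict(self, x):
        concepts = self.concept_generator(x)
        features = self.feature_extractor(concepts)
        # Return the concepts with the prediction so the decision can be explained.
        return self.feature_integrator(features), concepts

# Hypothetical stand-ins for each stage; a quantum model would replace these
# with parameterized circuits acting on encoded data.
model = ConceptDrivenModel(
    concept_generator=lambda x: {"brightness": float(np.mean(x)), "contrast": float(np.std(x))},
    feature_extractor=lambda c: np.array([c["brightness"], c["contrast"]]),
    feature_integrator=lambda f: int(f @ np.array([1.0, -0.5]) > 0.2),
)

prediction, concepts = model.predict(np.array([0.1, 0.8, 0.4]))
print(prediction, concepts)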
Quantum generative models have shown promise in fields such as quantum chemistry, materials science, and optimization. However, their practical utility is hindered by a significant challenge: the lack of interpretability. In this work, we introduce model inversion to enhance both the interpretability and controllability of quantum generative models. Model inversion traces generated quantum states back to their latent variables, revealing the relationship between input parameters and generated outputs. We apply this method to models that generate ground states of Hamiltonians such as the transverse-field Ising model (TFIM) and generalized cluster Hamiltonians, achieving interpretable control without retraining the model. Experimental results demonstrate that our approach can accurately guide the generated quantum states across different quantum phases. This framework bridges the gap between theoretical models and practical applications by providing transparency and fine-tuning capabilities, particularly in high-stakes settings such as drug discovery and materials design.
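Model inversion of a generative model is commonly posed as an optimization over latent variables: find the latent input whose generated output best matches a target. The sketch below illustrates that idea with a toy one-parameter generator and finite-difference gradient descent on infidelity; the generator, loss, and hyperparameters are assumptions chosen for illustration, not the authors' method for TFIM or cluster Hamiltonians.

import numpy as np

def generator(z):
    # Stand-in for a quantum generative model: maps a latent variable to a
    # normalized two-amplitude "state". A real model would run a parameterized circuit.
    state = np.array([np.cos(z), np.sin(z)])
    return state / np.linalg.norm(state)

def invert(target_state, z0=0.0, lr=0.1, steps=200, eps=1e-4):
    # Recover a latent variable whose generated output matches target_state
    # by minimizing infidelity with finite-difference gradient descent.
    def loss(z):
        overlap = np.dot(generator(z), target_state)
        return 1.0 - overlap ** 2
    z = z0
    for _ in range(steps):
        grad = (loss(z + eps) - loss(z - eps)) / (2 * eps)
        z -= lr * grad
    return z

target = generator(0.9)
z_hat = invert(target, z0=0.2)
print("recovered latent variable:", z_hat)

Once the latent variable is recovered, it can be nudged to steer the generated state, which is the kind of fine-tuning control the abstract describes.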