2020
DOI: 10.48550/arxiv.2009.08770
Preprint

Probably Approximately Correct Explanations of Machine Learning Models via Syntax-Guided Synthesis

Abstract: We propose a novel approach to understanding the decision making of complex machine learning models (e.g., deep neural networks) using a combination of probably approximately correct (PAC) learning and a logic inference methodology called syntax-guided synthesis (SyGuS). We prove that our framework produces explanations that, with high probability, make only a few errors, and we show empirically that it is effective in generating small, human-interpretable explanations.
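The abstract's core idea can be illustrated with a short sketch: draw enough labeled samples from the black-box model to satisfy the standard PAC bound for a finite hypothesis class, then return the smallest candidate from an explanation grammar that is consistent with all samples. The sketch below is illustrative only, not the paper's implementation: the grammar of feature conjunctions, the brute-force enumeration (standing in for an actual SyGuS solver), the toy `model`, and all function names are assumptions introduced here.

```python
import itertools
import math
import random

def pac_sample_size(eps, delta, num_hypotheses):
    # Standard PAC bound for a finite hypothesis class H: any hypothesis
    # consistent with m >= (1/eps) * (ln|H| + ln(1/delta)) i.i.d. samples
    # has error at most eps with probability at least 1 - delta.
    return math.ceil((math.log(num_hypotheses) + math.log(1 / delta)) / eps)

def candidate_explanations(num_features):
    # Hypothetical explanation grammar: conjunctions of feature literals
    # (x_i or not x_i), enumerated from smallest to largest, so the first
    # consistent candidate found is also a smallest one.
    literals = [(i, v) for i in range(num_features) for v in (True, False)]
    for size in range(1, num_features + 1):
        for conj in itertools.combinations(literals, size):
            yield conj

def holds(conj, x):
    # A conjunction holds on input x iff every literal is satisfied.
    return all(x[i] == v for i, v in conj)

def explain(model, num_features, eps=0.05, delta=0.01):
    # Enumerate the grammar once so |H| is known for the sample-size bound.
    hypotheses = list(candidate_explanations(num_features))
    m = pac_sample_size(eps, delta, len(hypotheses))
    # Query the black-box model on m uniformly drawn boolean inputs.
    samples = []
    for _ in range(m):
        x = [random.random() < 0.5 for _ in range(num_features)]
        samples.append((x, model(x)))
    # Return the smallest candidate that agrees with the model on all
    # samples; this consistency search is the role SyGuS plays in the
    # paper's framework.
    for conj in hypotheses:
        if all(holds(conj, x) == y for x, y in samples):
            return conj
    return None

# Toy black box standing in for a trained network.
model = lambda x: x[0] and not x[2]
print(explain(model, num_features=4))  # e.g., ((0, True), (2, False))
```

By the PAC bound, any explanation returned this way disagrees with the model on at most an eps fraction of inputs with probability at least 1 - delta; tightening eps grows the sample budget linearly in 1/eps, while tightening delta grows it only logarithmically.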

Cited by 2 publications (1 citation statement) | References 10 publications
“…Our approach for computing formal explanations from examples follows this idea. Neider et al. [32] followed a similar direction and proposed to use a combination of probably approximately correct (PAC) learning and syntax-guided synthesis (SyGuS) to produce explanations that, with high probability, make only a few errors. In contrast to our work, [32] does not compute explanations for image data.…”
Section: Related Work
confidence: 99%