2021
DOI: 10.48550/arxiv.2108.05149
Preprint

Logic Explained Networks

Abstract: The large and still increasing popularity of deep learning clashes with a major limitation of neural network architectures: their inability to provide human-understandable motivations for their decisions. In situations where the machine is expected to support the decisions of human experts, providing a comprehensible explanation is a feature of crucial importance. The language used to communicate the explanations must be formal enough to be implementable in a machine and friendly enough…


Cited by 7 publications (16 citation statements)
References 47 publications
“…The proposed approach could be refined by asking for supervision only for the predicates involved in the violated rules, reducing further the number of required labelled data and leading to a more balanced training set improving the prediction on smaller classes. At last, in case no knowledge is available on a certain problem, a first idea could be to pair the KAL strategy with the method proposed in Ciravegna et al (2021), where FOL explanations of network predictions are extracted on training data, to continuously check whether the knowledge learnt on the training distribution is also valid on unseen data.…”
Section: Discussion (mentioning)
confidence: 99%
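As an illustration of the consistency check described in the statement above, the following Python sketch assumes a FOL rule has already been extracted from training data (for instance by the method the statement attributes to Ciravegna et al.) and evaluates it on unseen concept predictions. The concept names, class encoding, rule and data are hypothetical placeholders, not taken from the cited works.

import numpy as np

# Hypothetical FOL rule extracted on training data, e.g. by a LEN:
#   has_wings AND has_feathers -> bird
# Concepts are assumed to be binarized network predictions (0/1).

def rule_bird(c):
    """Antecedent of the extracted rule, evaluated on a concept vector c."""
    has_wings, has_feathers = c[0], c[1]
    return bool(has_wings and has_feathers)

def check_rule_on_unseen(concepts, predictions, rule, target_class):
    """Return indices of unseen samples that violate the extracted rule,
    i.e. the antecedent holds but the network predicts a different class."""
    violations = []
    for i, (c, y) in enumerate(zip(concepts, predictions)):
        if rule(c) and y != target_class:
            violations.append(i)
    return violations

# Toy unseen data: rows are (has_wings, has_feathers); class 0 = "bird" (assumed encoding).
unseen_concepts = np.array([[1, 1], [1, 0], [1, 1]])
unseen_preds = np.array([0, 1, 1])

print(check_rule_on_unseen(unseen_concepts, unseen_preds, rule_bird, target_class=0))
# -> [2]: the rule fires on sample 2, yet the model predicted a different class,
#    signalling that the knowledge learnt on the training distribution may not hold here.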
“…Predicting tasks as a function of learnt concepts makes the decision process of deep learning models more interpretable [20,1]. In fact, learning intermediate concepts allows models to provide concept-based explanations for their predictions [17] which can take the form of simple logic statements, as shown by Ciravegna et al [26]. In addition, Koh et al [20] showed how learning intermediate concepts allows human experts to rectify mispredicted concepts through effective test-time interventions, thus improving model's performance and engendering human trust [1].…”
Section: Trust Through Concepts and Interventions (mentioning)
confidence: 99%
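The concept-bottleneck idea and the test-time interventions mentioned in the statement above can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed layer sizes and a NaN-masking convention for expert overrides, not the architecture used in the cited papers.

import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal concept-bottleneck sketch: input -> concepts -> task."""
    def __init__(self, n_features, n_concepts, n_classes):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_concepts)
        )
        self.task_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x, interventions=None):
        c = torch.sigmoid(self.concept_net(x))  # predicted concept activations
        if interventions is not None:
            # Test-time intervention: a human expert overrides selected
            # concepts with ground-truth values (NaN = keep the prediction).
            mask = ~torch.isnan(interventions)
            c = torch.where(mask, interventions, c)
        return self.task_net(c), c

model = ConceptBottleneckModel(n_features=10, n_concepts=4, n_classes=3)
x = torch.randn(2, 10)

# The expert rectifies concept 0 of the first sample to 1.0; all other concepts stay predicted.
interv = torch.full((2, 4), float("nan"))
interv[0, 0] = 1.0
logits, concepts = model(x, interventions=interv)

Because the task head only sees the concept vector, correcting a mispredicted concept directly changes the downstream prediction, which is what makes such interventions effective.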
“…The second CEM step consists of using the extracted concepts to make interpretable predictions for downstream tasks. In particular, the presence of concepts enables pairing GNNs with existing concept-based methods which are explainable by design, such as Logic Explained Networks (LENs, [26]). LENs are neural models providing simple concept-based logic explanations for their predictions.…”
Section: Interpretable Predictions (mentioning)
confidence: 99%
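To make the idea of concept-based logic explanations concrete, the sketch below replaces the actual LEN model with a simplified stand-in: a shallow decision tree over binarized concept activations, whose root-to-leaf paths read as conjunctive rules. Concept names, labels and data are illustrative only and do not come from the cited works.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy binarized concept activations (e.g. produced by a concept encoder)
# and task labels; concept names are hypothetical placeholders.
concept_names = ["has_wings", "has_feathers", "has_fur"]
C = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1], [1, 1, 0]])
y = np.array([1, 0, 0, 0, 1])  # 1 = "bird" (assumed encoding)

# A shallow tree over concepts acts as an interpretable task predictor:
# each root-to-leaf path is a conjunction of concept literals.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(C, y)
print(export_text(clf, feature_names=concept_names))
# A path such as "has_wings > 0.5 AND has_feathers > 0.5 -> class 1"
# corresponds to a FOL-style explanation has_wings AND has_feathers -> bird,
# the kind of statement a LEN attaches to its predictions.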
“…Several studies highlight the benefits of concept-based machine learning for explainability [1,82,23,52] and human interactions [75]. To communicate the concepts to a human user, some approaches include first-order logic formulas [16], causal relationships [81], user defined concepts [40], prediction of intermediate dataset-labels [3,52], and one-hot encoded bottlenecks [42]. All of these approaches, however, focus on supervised concept learning.…”
Section: Related Work (mentioning)
confidence: 99%