2021
DOI: 10.48550/arxiv.2104.08793
Preprint

SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning

Abstract: Augmenting pre-trained language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks. Although some works have attempted to explain the behavior of such KG-augmented models by indicating which KG inputs are salient (i.e., important for the model's prediction), it is not always clear how these explanations should be used to make the model better. In this paper, we explore whether KG explanations can be used as supervision for teaching these KG-augmented models how to fi…
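The abstract describes supervising a KG-augmented model with saliency explanations over its KG inputs. As a rough illustration only, and not the paper's actual SalKG method, the sketch below assumes the model exposes attention logits over retrieved KG nodes and that a target saliency distribution has already been computed; `saliency_supervision_loss`, `kg_attention_logits`, and `saliency_target` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def saliency_supervision_loss(kg_attention_logits, saliency_target):
    """Push the model's attention over KG nodes toward a precomputed
    saliency explanation (e.g., obtained from gradients or occlusion).

    kg_attention_logits: (batch, num_kg_nodes) unnormalized attention scores
    saliency_target:     (batch, num_kg_nodes) target saliency distribution
                         (non-negative, rows summing to 1)
    """
    log_attn = F.log_softmax(kg_attention_logits, dim=-1)
    # KL(target || attention): penalizes attention mass on non-salient KG nodes
    return F.kl_div(log_attn, saliency_target, reduction="batchmean")

# Toy usage: 2 questions, 5 retrieved KG nodes each
logits = torch.randn(2, 5, requires_grad=True)
target = torch.tensor([[0.7, 0.1, 0.1, 0.05, 0.05],
                       [0.2, 0.2, 0.4, 0.1, 0.1]])
loss = saliency_supervision_loss(logits, target)
loss.backward()  # gradients flow back into the attention scores
print(loss.item())
```

In practice such an auxiliary loss would be added to the task loss with a weighting coefficient; the details here are assumptions, not the authors' implementation.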

Citations: Cited by 1 publication (1 citation statement)
References: 39 publications (55 reference statements)
“…In some scenarios, gradient-based approaches were shown to provide more faithful explanations than attention-based methods [15]. This family of gradient-based explainability methods has been applied [16,17,30], albeit in a task-specific manner, to different downstream tasks.…”
Section: Related Work
confidence: 99%
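The cited passage contrasts gradient-based and attention-based explanation methods. For orientation, here is a minimal, generic sketch of a gradient-based saliency score, the gradient magnitude of the predicted-class score with respect to the input; it is not the specific method of [15], [16], [17], or [30], and the toy model is purely illustrative.

```python
import torch

def gradient_saliency(model, inputs):
    """Simple gradient-based saliency: magnitude of the gradient of the
    predicted-class score with respect to each input feature, used as a
    rough proxy for how much each feature influenced the prediction."""
    inputs = inputs.clone().detach().requires_grad_(True)
    scores = model(inputs)                  # (batch, num_classes)
    top = scores.max(dim=-1).values.sum()   # sum of predicted-class scores
    top.backward()
    return inputs.grad.abs()                # (batch, num_features)

# Toy model and input, purely for illustration
model = torch.nn.Linear(4, 3)
x = torch.randn(2, 4)
print(gradient_saliency(model, x))
```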