Findings of the Association for Computational Linguistics: NAACL 2022
DOI: 10.18653/v1/2022.findings-naacl.131
Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models

Abstract: Commonsense reasoning in natural language is a desired ability of artificial intelligence systems. For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder that models a commonsense knowledge graph (CSKG). Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs. Considering this…

Cited by 6 publications (4 citation statements). References 20 publications.
“…GNNs are commonly used to model external KGs for conducting joint reasoning with LMs. Some works (Feng et al. 2020; Yasunaga et al. 2021; Jiang et al. 2022) use the information of one modality to augment another. The most representative work, QA-GNN (Yasunaga et al. 2021), adds the language representation as a new node to the retrieved KG and employs an elaborate GNN to jointly update the LM and GNN representations via message passing.…”
Section: Related Work
confidence: 99%
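To make the QA-GNN design described above concrete, here is a minimal PyTorch sketch of the two steps the excerpt names: adding the LM's pooled representation as an extra context node linked to every KG node, followed by one round of message passing. The mean aggregation below is a simplified stand-in for QA-GNN's attention-based GNN layer, and all function names, shapes, and the toy data are illustrative assumptions, not the paper's code.

```python
import torch

def add_context_node(node_feats, lm_rep, edge_index):
    """Prepend the pooled LM representation as node 0 and connect it
    bidirectionally to every retrieved KG node (QA-GNN-style joint graph).
    node_feats: (n, d), lm_rep: (d,), edge_index: (2, E) long tensor."""
    n = node_feats.size(0)
    feats = torch.cat([lm_rep.unsqueeze(0), node_feats], dim=0)  # (n+1, d)
    shifted = edge_index + 1  # existing KG nodes shift to indices 1..n
    ctx = torch.stack([torch.zeros(n, dtype=torch.long),
                       torch.arange(1, n + 1)])  # context -> each KG node
    edges = torch.cat([shifted, ctx, ctx.flip(0)], dim=1)
    return feats, edges

def mean_message_passing(feats, edges, weight):
    """One simplified message-passing round: mean-aggregate neighbor
    features, combine with self features, project, apply ReLU."""
    src, dst = edges
    agg = torch.zeros_like(feats).index_add_(0, dst, feats[src])
    deg = torch.zeros(feats.size(0)).index_add_(0, dst,
                                                torch.ones(dst.size(0)))
    return torch.relu((feats + agg / deg.clamp(min=1).unsqueeze(1)) @ weight)

# Toy usage: 5 retrieved KG nodes, 16-dim features, 3 KG edges.
kg_nodes = torch.randn(5, 16)
lm_rep = torch.randn(16)                       # pooled question encoding
kg_edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
feats, edges = add_context_node(kg_nodes, lm_rep, kg_edges)
updated = mean_message_passing(feats, edges, torch.randn(16, 16))
```

In the real model the update is run for several layers with relation-aware attention, and the context node's final state is read out alongside the LM representation for answer scoring; this sketch only shows the graph-construction idea.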
“…However, knowledge-aware QA is more concerned with identifying which knowledge triplets can support the answer, while the node information can be replaced by serial numbers that merely distinguish different entities (Galárraga et al. 2013). As a result, the initial node embeddings are demonstrated to be dispensable, and some GNN layers are shown to be over-parameterized (Jiang et al. 2022; Wang et al. 2022). Different from existing node-based models, we take knowledge triplets as the atomic knowledge units and propose a triplet-level encoder for graph encoding.…”
Section: Relational Knowledge Feature via Triplet-Level Graph Encoder
confidence: 99%
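The claim that node embeddings are dispensable suggests an encoder that consumes only relation-level (triplet) information. Below is a minimal PyTorch sketch under that assumption: each retrieved triplet contributes only its relation id, and the graph feature is a mean-pooled MLP over relation embeddings. The class name, dimensions, and pooling choice are illustrative, not the cited models' actual implementations.

```python
import torch
import torch.nn as nn

class TripletGraphEncoder(nn.Module):
    """Sketch of a triplet-level graph encoder: triplets are the atomic
    units, and node identities never enter as learned features, reflecting
    the observation that initial node embeddings are dispensable."""
    def __init__(self, n_relations: int, dim: int = 64):
        super().__init__()
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, rel_ids: torch.Tensor) -> torch.Tensor:
        # rel_ids: (T,) relation id of each retrieved triplet
        triplet_feats = self.mlp(self.rel_emb(rel_ids))  # (T, dim)
        return triplet_feats.mean(dim=0)  # graph-level knowledge feature

# Toy usage: a subgraph of 3 retrieved triplets over 17 relation types.
encoder = TripletGraphEncoder(n_relations=17)
graph_feature = encoder(torch.tensor([3, 5, 3]))
```

Because the encoder is a small MLP over relation embeddings rather than a multi-layer GNN over node features, it avoids the over-parameterization the excerpt points to; the pooled feature would then be fused with the PTM's output for answer scoring.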