2020
DOI: 10.48550/arxiv.2010.12873
Preprint

Learning Contextualized Knowledge Structures for Commonsense Reasoning

Abstract: Recently, neural-symbolic architectures have achieved success on commonsense reasoning by effectively encoding relational structures retrieved from external knowledge graphs (KGs), obtaining state-of-the-art results in tasks such as (commonsense) question answering and natural language inference. However, these methods rely on quality and contextualized knowledge structures (i.e., fact triples) that are retrieved at the pre-processing stage but overlook challenges caused by incompleteness of a KG, limit…
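
Below is a minimal, illustrative sketch (not from the paper) of the kind of pre-processing retrieval step the abstract refers to: collecting fact triples from a ConceptNet-style KG that connect concepts mentioned in a question to concepts in a candidate answer. The toy KG, the concept sets, and the function name are assumptions for illustration only, not the authors' implementation.

```python
from itertools import product

# Toy ConceptNet-style KG: a set of (head, relation, tail) fact triples.
# These triples are illustrative and not taken from the paper or from ConceptNet itself.
KG_TRIPLES = {
    ("revolving_door", "AtLocation", "bank"),
    ("revolving_door", "UsedFor", "entering"),
    ("bank", "RelatedTo", "money"),
}

def retrieve_triples(question_concepts, answer_concepts, kg=KG_TRIPLES):
    """Return KG triples whose head/tail pair links a question concept to an answer concept."""
    pairs = set(product(question_concepts, answer_concepts))
    return [
        (head, rel, tail)
        for (head, rel, tail) in kg
        if (head, tail) in pairs or (tail, head) in pairs
    ]

if __name__ == "__main__":
    # e.g. question concepts {"revolving_door", "security"} and answer concept {"bank"}
    print(retrieve_triples({"revolving_door", "security"}, {"bank"}))
    # -> [('revolving_door', 'AtLocation', 'bank')]
```

Triples retrieved statically in this way are what the abstract flags as potentially incomplete or poorly contextualized, which is the limitation the paper's contextualized knowledge structures aim to address.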

Cited by 5 publications (2 citation statements)
References 39 publications
“…To study the effect of using KGs as external knowledge sources, we compare our method with vanilla fine-tuned LMs, which are knowledge-agnostic. We fine-tune RoBERTa-
RGCN (Schlichtkrull et al., 2018)        72.7 (±0.2)   68.4 (±0.7)
GconAttn                                 72.6 (±0.4)   68.6 (±1.0)
KagNet (Lin et al., 2019)                73.5 (±0.2)   69.0 (±0.8)
RN (Santoro et al., 2017)                74.6 (±0.9)   69.1 (±0.2)
MHGRN (Feng et al., 2020)                74.5 (±0.1)   71.1 (±0.8)
QA-GNN (Yasunaga et al., 2021)           76.5 (±0.2)   73.4 (±0.9)
GREASELM (Ours)                          78.5 (±0.5)   74.2 (±0.4)
(Yan et al., 2020)                       81.4   ≥355M
AMR-SG (Xu et al., 2021)                 81.6   ∼361M
ALBERT + KPG (Wang et al., 2020)         81.8   ≥235M
QA-GNN (Yasunaga et al., 2021)           82.8   ∼360M
T5 * (Raffel et al., 2020)               83.2   ∼3B
T5 + KB (Pirtoaca)                       85.4   ≥11B
UnifiedQA * (Khashabi et al., 2020)      87.2   ∼11B
GREASELM (Ours)                          84.8   ∼359M…”
Section: Baseline Methods (mentioning)
confidence: 99%
“…From Refs. [10,11], we know that such an approach can help improve commonsense reasoning performance. Considering ConceptNet knowledge about the problem, the model can read a dialogue with commonsense.…”
Section: Introduction (mentioning)
confidence: 99%