Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.694

DualTKB: A Dual Learning Bridge between Text and Knowledge Base

Abstract: In this work, we present a dual learning approach for unsupervised text-to-path and path-to-text transfers in Commonsense Knowledge Bases (KBs). We investigate the impact of weak supervision by creating a weakly supervised dataset, and show that even a slight amount of supervision can significantly improve the model performance and enable better-quality transfers. We examine different model architectures and evaluation metrics, proposing a novel Commonsense KB completion metric tailored for generative models. …
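The dual transfer described in the abstract amounts to two sequence-to-sequence directions (text→path and path→text) trained jointly, with back-translation cycles supplying supervision when no paired data exist. Below is a minimal, hypothetical sketch of one unsupervised cycle step; the toy model class, greedy decoder, and all sizes and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DualTKB-style unsupervised cycle step (illustrative only).
# Two toy seq2seq models split the roles: t2p maps text -> path tokens,
# p2t maps path -> text tokens.
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Toy GRU encoder-decoder over a shared token vocabulary."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, tgt):
        _, h = self.enc(self.emb(src))
        y, _ = self.dec(self.emb(tgt[:, :-1]), h)   # teacher forcing
        return self.out(y)                          # logits for tgt[:, 1:]

    @torch.no_grad()
    def greedy(self, src, bos=1, steps=20):
        _, h = self.enc(self.emb(src))
        tok = torch.full((src.size(0), 1), bos, dtype=torch.long)
        seq = [tok]
        for _ in range(steps):
            y, h = self.dec(self.emb(seq[-1]), h)
            seq.append(self.out(y).argmax(-1))
        return torch.cat(seq, dim=1)

t2p, p2t = TinySeq2Seq(), TinySeq2Seq()
opt = torch.optim.Adam([*t2p.parameters(), *p2t.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

def text_cycle_step(text):
    """text -> pseudo path (no grad) -> reconstructed text; trains p2t."""
    pseudo_path = t2p.greedy(text)            # back-translation step
    logits = p2t(pseudo_path, text)
    loss = ce(logits.flatten(0, 1), text[:, 1:].flatten())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage: a batch of token ids with BOS at position 0.
print(text_cycle_step(torch.randint(2, 1000, (4, 12))))
```

A mirrored path→text→path step would train t2p in the same way, and the weak supervision studied in the paper corresponds to mixing in occasional batches of paired (text, path) examples trained with ordinary cross-entropy in both directions.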

Cited by 11 publications (9 citation statements: 0 supporting, 9 mentioning, 0 contrasting) | References 13 publications | Citing years: 2021–2022

Citation statements, ordered by relevance:
“…For example, Lin et al (2020); Liu et al (2021) study CommonGen, which aims to generate coherent sentences containing the given common concepts. Dognin et al (2020); Agarwal et al (2021) study the data-to-text generation (Kukich, 1983), which aims to convert facts in KGs into natural language. None of these works meet the requirements on openness and interpretability.…”
Section: Related Work (mentioning)
confidence: 99%
“…For the samples in D_tr, which have no entity labels, we do not calculate their loss during entity generation, but instead calculate their loss in response generation (§3.4) to realize end-to-end optimization like DualTKB (Dognin et al, 2020).…”
Section: Autoregressive Entity Generation (mentioning)
confidence: 99%
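The masking trick in this statement, skipping the entity-generation loss for unlabeled samples so that they still contribute a response-generation loss, can be sketched as below. The function names, the IGNORE convention, and the equal weighting of the two terms are assumptions for illustration, not the cited implementation.

```python
# Hypothetical sketch of selective loss computation for partially labeled data.
# Samples lacking entity labels have those positions set to IGNORE, so only
# the response-generation term updates the model for them.
import torch
import torch.nn.functional as F

IGNORE = -100  # conventional ignore_index for cross-entropy

def masked_ce(logits, labels):
    """Cross-entropy over (N, V) logits that returns 0 if every label is IGNORE."""
    valid = labels != IGNORE
    if not valid.any():
        return logits.new_zeros(())
    return F.cross_entropy(logits[valid], labels[valid])

def joint_loss(entity_logits, entity_labels, resp_logits, resp_labels):
    """Entity + response losses; unlabeled samples contribute only the latter."""
    return (masked_ce(entity_logits.flatten(0, 1), entity_labels.flatten())
            + masked_ce(resp_logits.flatten(0, 1), resp_labels.flatten()))
```

Summing the two masked terms is what makes the optimization end-to-end: a single backward pass covers both labeled and unlabeled samples in the batch.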
“…Recently, Dognin et al (2020) and Guo et al (2020b, 2021) proposed models trained to generate in both T2G and G2T directions, with consistency cycles created to enable the use of unsupervised datasets.…”
Section: Related Work (mentioning)
confidence: 99%
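The consistency cycles referenced here couple the two directions so that each model scores the other's output on unpaired data. Schematically, with all notation my own rather than the cited papers':

```latex
% Cycle-consistency objective (schematic; notation is illustrative).
% p_theta: T2G model, p_phi: G2T model; x = text, g = graph/path.
\mathcal{L}_{\text{cycle}}
  = \mathbb{E}_{x}\!\left[-\log p_{\phi}\!\left(x \mid \hat g(x)\right)\right]
  + \mathbb{E}_{g}\!\left[-\log p_{\theta}\!\left(g \mid \hat x(g)\right)\right],
\quad
\hat g(x) = \arg\max_{g'} p_{\theta}(g' \mid x),
\quad
\hat x(g) = \arg\max_{x'} p_{\phi}(x' \mid g).
```

Minimizing this term lets unpaired text and graph corpora supervise both directions at once, which is what makes the unsupervised setting workable.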