Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.338

Focused Attention Improves Document-Grounded Generation

Abstract: Document-grounded generation is the task of using the information provided in a document to improve text generation. This work focuses on two different document-grounded generation tasks: the Wikipedia Update Generation task and dialogue response generation. Our work introduces two novel adaptations of large-scale pre-trained encoder-decoder models, focusing on building a context-driven representation of the document and enabling specific attention to the information in the document. Additionally, we provide a strong…
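The abstract describes adapting pre-trained encoder-decoder models so that generation attends to a grounding document. As a rough illustration of the task setup only (not the paper's specific architecture or focused-attention mechanism), the sketch below grounds generation by concatenating a document with the context before encoding it with an off-the-shelf BART model; the `facebook/bart-base` checkpoint and the `document:` / `context:` markers are illustrative assumptions.

```python
# Minimal sketch of document-grounded generation (illustrative only):
# the grounding document is concatenated with the dialogue context / prompt
# and fed to a pre-trained encoder-decoder, which then generates a response.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

document = "Broken heart is a metaphor for the intense emotional stress one feels."
context = "What does it mean to have a broken heart?"

# Simple grounding scheme: mark and concatenate document and context.
# The separator strings here are assumptions, not the paper's input format.
inputs = tokenizer(
    "document: " + document + " context: " + context,
    return_tensors="pt",
    truncation=True,
    max_length=1024,
)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```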

Cited by 26 publications (26 citation statements). References 36 publications (52 reference statements).
“…What is more, Dziri et al. (2022) observe that GPT2 not only replicates, but even amplifies hallucination by around 20% when trained on WOW. This finding also extends to models that are designed explicitly to be knowledge-grounded (Prabhumoye et al., 2021). Filtering noisy or high-error data (Zhang and Hashimoto, 2021) is also prone to failure, as it may either break the cohesion of discourse or require excluding entire dialogues.…”
Section: Information Seeker
confidence: 75%
“…Despite the recent success of knowledge-grounded neural conversational models (Thoppilan et al., 2022; Prabhumoye et al., 2021; Zhao et al., 2020, inter alia) in generating fluent responses, they also generate unverifiable or factually incorrect statements, a phenomenon known as hallucinations. Broken heart is a metaphor for the intense emotional and sometimes physical stress or pain one feels at experiencing great longing.…”
Section: Introduction
confidence: 99%
“…For evaluating both knowledge generation and response generation, we follow previous works (Dinan et al., 2018; Prabhumoye et al., 2021) and evaluate the generated sentences against the reference sentences on averaged BLEU (an average of BLEU-1,2,3,4) (Papineni et al., 2002), ROUGE-L (Lin, 2004), METEOR (Denkowski and Lavie, 2011), and unigram F1. Additionally, we follow Komeili et al. (2021) in using knowledge F1 (KF1) to evaluate the knowledgeability of the response generation.…”
Section: Automatic Evaluation
confidence: 99%
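The metrics named in the excerpt above are surface-overlap measures. As a hedged sketch (not the exact evaluation scripts of the cited works), the snippet below computes averaged BLEU as the mean of cumulative BLEU-1 through BLEU-4 via NLTK, unigram F1 against the reference, and knowledge F1 (KF1) as the same unigram F1 taken against the grounding knowledge; whitespace tokenization and the smoothing method are assumptions.

```python
# Sketch of averaged BLEU, unigram F1, and knowledge F1 for a generated response.
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def avg_bleu(reference: str, hypothesis: str) -> float:
    # Mean of cumulative BLEU-1..4 (one common reading of "averaged BLEU").
    ref, hyp = reference.split(), hypothesis.split()
    smooth = SmoothingFunction().method1
    weight_sets = [(1, 0, 0, 0), (0.5, 0.5, 0, 0),
                   (1/3, 1/3, 1/3, 0), (0.25, 0.25, 0.25, 0.25)]
    scores = [sentence_bleu([ref], hyp, weights=w, smoothing_function=smooth)
              for w in weight_sets]
    return sum(scores) / len(scores)

def unigram_f1(target: str, hypothesis: str) -> float:
    # Harmonic mean of unigram precision and recall between two token bags.
    t, h = Counter(target.split()), Counter(hypothesis.split())
    overlap = sum((t & h).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(t.values())
    return 2 * precision * recall / (precision + recall)

response = "a broken heart describes intense emotional pain after loss"
reference = "a broken heart is a metaphor for intense emotional pain"
knowledge = "broken heart is a metaphor for the intense emotional stress or pain one feels"

print("avg BLEU:", avg_bleu(reference, response))
print("unigram F1:", unigram_f1(reference, response))
print("KF1:", unigram_f1(knowledge, response))  # overlap with the grounding text
```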
“…Grounding dialogue responses in a knowledge base ensures knowledgeable and engaging responses and is emerging as an important step in research on human-machine conversation (Zhu et al., 2017; Ghazvininejad et al., 2018; Dinan et al., 2018; Zhou et al., 2018; Kim et al., 2019; Moon et al., 2019; Zhao et al., 2019; Li et al., 2020; Hedayatnia et al., 2020; Zhan et al., 2021; Prabhumoye et al., 2021; Rashkin et al., 2021; Komeili et al., 2021). Kim et al. (2019) proposed a sequential knowledge transformer to boost knowledge selection quality from the candidates and improved the performance of response generation.…”
Section: Knowledge-grounded Dialogues
confidence: 99%