Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-short.74
Unsupervised Enrichment of Persona-grounded Dialog with Background Stories

Abstract: Humans often refer to personal narratives, life experiences, and events to make a conversation more engaging and rich. While persona-grounded dialog models are able to generate responses that follow a given persona, they often miss out on stating detailed experiences or events related to that persona, leaving conversations shallow and dull. In this work, we equip dialog models with 'background stories' related to a persona by leveraging fictional narratives from existing story datasets (e.g., ROC-Stories). Si…
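
The abstract is truncated above; as a rough illustration of the retrieval step it describes (selecting a fictional narrative relevant to a given persona from a story corpus such as ROC-Stories), the sketch below uses TF-IDF cosine similarity. The function name, the toy data, and the choice of TF-IDF are assumptions made for illustration and are not the paper's actual method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_background_story(persona_sentences, story_corpus):
    """Return the story most similar to the persona under TF-IDF cosine similarity.

    Illustrative retrieval baseline only; not the paper's implementation.
    """
    persona = " ".join(persona_sentences)
    vectorizer = TfidfVectorizer(stop_words="english")
    story_vecs = vectorizer.fit_transform(story_corpus)   # one row per story
    persona_vec = vectorizer.transform([persona])          # single persona query
    scores = cosine_similarity(persona_vec, story_vecs)[0]
    return story_corpus[scores.argmax()]

# Hypothetical toy data, for illustration only.
persona = ["i love hiking.", "i have two dogs."]
stories = [
    "Tom took his dogs up the mountain trail last summer.",
    "Sara baked a cake for her sister's birthday.",
]
print(retrieve_background_story(persona, stories))
```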

Cited by 6 publications (7 citation statements)
References 20 publications
“…We measure the informativeness of generated explanations using token-, sentence-, and corpus-level evaluations. Concretely, we evaluate the models using Distinct-1 and Distinct-2 (D-1, D-2) scores (Li et al. 2016), Unique Sentence Ratio (USR) as proposed by Li, Zhang, and Chen (2020), and ENTR (Jhamtani et al. 2018), following previous work on diversifying generated content (Majumder et al. 2021).…”
Section: Automatic Evaluation
Confidence: 99%
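
The citation statement above lists standard diversity metrics. As a rough illustration only (not the cited authors' implementation), Distinct-n is commonly computed as the number of unique n-grams divided by the total number of n-grams over all generated outputs, and USR as the fraction of generated responses that are unique strings; the example responses below are hypothetical.

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n: unique n-grams divided by total n-grams across all responses."""
    ngrams = Counter()
    total = 0
    for resp in responses:
        tokens = resp.split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

def unique_sentence_ratio(responses):
    """USR (one common reading): fraction of generated responses that are unique."""
    return len(set(responses)) / len(responses) if responses else 0.0

# Hypothetical generated responses, for illustration only.
generated = [
    "i love hiking in the mountains",
    "i love hiking in the mountains",
    "my dog comes along on every trail",
]
print(f"D-1: {distinct_n(generated, 1):.3f}")
print(f"D-2: {distinct_n(generated, 2):.3f}")
print(f"USR: {unique_sentence_ratio(generated):.3f}")
```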
“…1, where predefined personas of an interlocutor are given by several sentences. However, existing models tend to reply with generic responses that still lack consistency and informative details (Song et al. 2019; Zhao et al. 2020; Majumder et al. 2021; Xu et al. 2020). One of the main underlying causes is inappropriate persona selection, since conventional approaches often neglect the inherent persona transitions that exist in the conversation flow.…”
Section: Chatbot Persona Collection
Confidence: 99%
“…Second, predefined personas are mostly short and merely superficial descriptions of personal attributes (Majumder et al. 2021; Xu et al. 2020), which makes it difficult for machines to understand the real personality traits (e.g., "introverted") of interlocutors from artificially generated persona sentences (e.g., "I do not like to talk"). For lack of an in-depth understanding of persona traits, existing models tend to copy the original description from the given persona when generating responses (Kim et al. 2022), making conversations less engaging, as part of the generated response trivially overlaps with the given persona sentence.…”
Section: Ground-truth Response
Confidence: 99%
“…However, would it not be wonderful if we could map human-like personality onto a chatbot and make it more relatable and engaging to users (Fernau et al. 2022)? Zheng et al. (2020) incorporate character profiles on Persona-Chat (Zhang et al. 2018), while Majumder et al. (2021) further leverage background stories at inference time to equip the language model with personalised information. However, injecting simple character profiles or personalised information is insufficient to make a language model-based chatbot feel more human (Chaves and Gerosa 2021).…”
Section: Introduction
Confidence: 99%