Despite advances in open-domain dialogue systems, automatic evaluation of such systems is still a challenging problem. Traditional reference-based metrics such as BLEU are ineffective because there can be many valid responses for a given context that share no words with the reference responses. Recent work proposed the Referenced metric and Unreferenced metric Blended Evaluation Routine (RUBER), which combines a learning-based metric that predicts the relatedness between a generated response and the given query with a reference-based metric, and showed high correlation with human judgments. In this paper, we explore using contextualized word embeddings to compute more accurate relatedness scores and thus better evaluation metrics. Experiments show that our evaluation metrics outperform RUBER, which is trained on static embeddings.
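As a rough illustration of the unreferenced side of such a metric, the sketch below scores query-response relatedness by mean-pooling contextualized token embeddings from a pretrained BERT and taking a cosine similarity. This is a simplified stand-in, not the trained model from the paper; the model name, pooling strategy, and similarity function are assumptions for illustration only.

```python
# Minimal sketch (not the paper's exact metric): score query-response relatedness
# with contextualized embeddings. The actual unreferenced metric trains a small
# network on top of such embeddings; here we only illustrate the embedding step.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last-layer token embeddings into one sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
    mask = inputs["attention_mask"].unsqueeze(-1)    # ignore padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def relatedness(query: str, response: str) -> float:
    """Cosine similarity between contextualized query and response embeddings."""
    return torch.nn.functional.cosine_similarity(embed(query), embed(response)).item()

print(relatedness("do you like fishing?", "yes, i go to the lake every weekend."))
```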
User engagement is a critical metric for evaluating the quality of open-domain dialogue systems. Prior work has focused on conversation-level engagement, using heuristically constructed features such as the number of turns and the total time of the conversation. In this paper, we investigate the possibility and efficacy of estimating utterance-level engagement and define a novel metric, predictive engagement, for the automatic evaluation of open-domain dialogue systems. Our experiments demonstrate that (1) human annotators have high agreement on assessing utterance-level engagement scores; (2) conversation-level engagement scores can be predicted from properly aggregated utterance-level engagement scores. Furthermore, we show that utterance-level engagement scores can be learned from data and incorporated into automatic evaluation metrics for open-domain dialogue systems to improve their correlation with human judgments. This suggests that predictive engagement can be used as real-time feedback for training better dialogue models.
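The sketch below shows, under simple assumptions, how utterance-level engagement scores might be rolled up into a conversation-level score and blended with a relevance metric. The mean aggregation and the fixed-weight linear combination are placeholders for illustration, not the specific aggregation and combination studied in the paper.

```python
# Minimal sketch, assuming mean aggregation and a fixed-weight linear blend;
# treat both choices as illustrative placeholders, not the paper's final recipe.
from statistics import mean
from typing import Sequence

def conversation_engagement(utterance_scores: Sequence[float]) -> float:
    """Aggregate utterance-level engagement scores into a conversation-level score."""
    return mean(utterance_scores)

def blended_quality(relevance: float, engagement: float, weight: float = 0.5) -> float:
    """Combine a relevance-based metric with predictive engagement (both in [0, 1])."""
    return weight * relevance + (1.0 - weight) * engagement

scores = [0.8, 0.6, 0.9, 0.4]             # per-utterance engagement predictions
print(conversation_engagement(scores))     # conversation-level engagement
print(blended_quality(0.7, mean(scores)))  # blended automatic evaluation score
```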
Human Written Story: jenny liked fresh fish. she decided to go fishing to catch her own. she brought her worms and pole and a chair. she sat there all day but didn't catch anything. she packed it up and went home disappointed.
Sentence Manipulation: jenny liked fresh fish. she decided to go fishing to catch her own. she wrote songs every single day. she sat there all day but didn't catch anything. she packed it up and went home disappointed.
Keyword Manipulation: jenny liked fresh fish. she decided to go fishing to catch her own. she brought her worms and pole and a chair. she sat there all day but didn't catch anything. she unpacked it up and went home disappointed.
UNION: jenny liked fresh fish. jim has a very structured workout program to help him achieve goals. she brought her worms and pole and a relaxer. she sat there all day but didn't catch anything. she unpack it up and went home disappointed.
Plot: jenny fresh fish -> decided fishing catch -> brought worms chair -> sat -> packed home disappointed
Manipulated Plot: jenny fresh fish -> tasha offered woman store -> brought worms chair -> sat -> got wet packed home disappointed
Manipulated Plot Guided Generation (Ours): jenny was out of fresh fish. tasha offered to buy her some from the woman at the store. she brought her worms and a chair and decided to play with them. jenny sat down and laid down on the chair. when she got wet, she packed up and went home disappointed.
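For illustration, the sketch below mimics the two simplest perturbations shown above, sentence manipulation and keyword manipulation, on the example story. The sentence pool, the substitution table, and the random selection policy are assumptions made for the example, not the exact construction procedure used to build the training data.

```python
# Minimal sketch of the two simple perturbations illustrated above; the sentence
# pool, keyword substitutions, and selection policy are illustrative assumptions.
import random

def sentence_manipulation(story, sentence_pool):
    """Replace one randomly chosen sentence with an unrelated sentence."""
    corrupted = story.copy()
    corrupted[random.randrange(len(corrupted))] = random.choice(sentence_pool)
    return corrupted

def keyword_manipulation(story, substitutions):
    """Apply a keyword substitution map (e.g. packed -> unpacked) across the story."""
    corrupted = []
    for sentence in story:
        words = [substitutions.get(word, word) for word in sentence.split()]
        corrupted.append(" ".join(words))
    return corrupted

story = [
    "jenny liked fresh fish.",
    "she decided to go fishing to catch her own.",
    "she brought her worms and pole and a chair.",
    "she sat there all day but didn't catch anything.",
    "she packed it up and went home disappointed.",
]
print(sentence_manipulation(story, ["she wrote songs every single day."]))
print(keyword_manipulation(story, {"packed": "unpacked"}))
```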