The goal of this paper is to show how to achieve more enjoyable and enthusiastic dialogue through the analysis of human-to-human conversational dialogues. We first created a conversational dialogue corpus annotated with two types of tags: one indicating properties of the utterance itself, the other indicating the degree of enthusiasm. We then investigated the relationship between these tags. Our results indicate that affective and cooperative utterances are important for enthusiastic dialogue.
We have developed an active listening system for a conversation robot, specifically for reminiscing. The aim of the system is to contribute to the prevention of dementia in elderly persons and to reduce loneliness in seniors living alone. Based on the speech recognition results for a user's utterance, the proposed system produces backchannel feedback, repeats the user's utterance, and asks for information about predicates that were not included in the original utterance. Moreover, the system produces an appropriate empathic response by estimating the user's emotion from their utterances. One of the features of our system is that it can determine an appropriate response even if the speech recognition results contain some errors. Our results show that for 45.5% of the subjects (n = 110), conversations with the robot on the topic "memorable trip" continued for more than two minutes. The system response was deemed correct for about 77% of user utterances. According to a questionnaire, the elderly subjects evaluated the system positively.
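To illustrate the response-selection flow described above (backchannel feedback, repetition, asking about a missing predicate, and empathic responses under imperfect recognition), here is a minimal Python sketch. The function names, emotion labels, confidence threshold, and keyword rules are assumptions introduced for illustration only; they are not the authors' implementation.

    import random

    BACKCHANNELS = ["I see.", "Uh-huh.", "That sounds nice."]
    EMPATHIC = {
        "joy": "That must have been wonderful!",
        "sadness": "That must have been hard.",
    }

    def estimate_emotion(utterance):
        """Toy emotion estimator; a keyword- or classifier-based module is assumed."""
        if any(w in utterance for w in ("happy", "fun", "beautiful")):
            return "joy"
        if any(w in utterance for w in ("sad", "lonely")):
            return "sadness"
        return "neutral"

    def has_predicate(text):
        """Hypothetical stand-in for a dependency-parser check for a verb."""
        return any(w in ("went", "saw", "ate", "visited", "stayed")
                   for w in text.lower().split())

    def select_response(asr_text, asr_confidence):
        """Choose a response even when speech recognition is imperfect."""
        # Low-confidence or empty recognition: fall back to a safe backchannel.
        if asr_confidence < 0.5 or not asr_text.strip():
            return random.choice(BACKCHANNELS)
        # Empathic response when an emotion is detected in the utterance.
        emotion = estimate_emotion(asr_text)
        if emotion in EMPATHIC:
            return EMPATHIC[emotion]
        # If no predicate is recognized, ask about it; otherwise repeat.
        if not has_predicate(asr_text):
            return "What did you do there?"
        return "So, " + asr_text  # repetition of the user's utterance

The key design point this sketch tries to capture is graceful degradation: when the recognizer is unreliable, the system falls back to a backchannel rather than risking an incorrect content-bearing response.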
In this paper, we investigate distinctive utterances in non-task-oriented conversational dialogue by comparing it with task-oriented dialogue. We find that Indirect Responses (IRs) and Clarification Requests (CRs) are significant in non-task-oriented conversational dialogue. IRs are cooperative responses to another speaker's question, while CRs are questions that seek clarification. We analyzed the rhetorical relations associated with IRs and CRs and found that IRs are generated by evidence and causal relations, while CRs are generated by elaboration and causal relations.
One major challenge in machine learning applications is coping with mismatches between the datasets used during development and those obtained in real-world applications. These mismatches may lead to inaccurate predictions and errors, resulting in poor product quality and unreliable systems. In this study, we propose StyleDiff to inform developers of the differences between the two datasets for the steady development of machine learning systems. Using disentangled image spaces obtained from recently proposed generative models, StyleDiff compares the two datasets by focusing on attributes in the images and provides an easy-to-understand analysis of the differences between the datasets. StyleDiff runs in O(dN log N) time, where N is the size of the datasets and d is the number of attributes, enabling its application to large datasets. We demonstrate that StyleDiff accurately detects differences between datasets and presents them in an understandable format using, for example, driving-scene datasets.
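To make the complexity claim concrete, the following Python sketch compares two datasets attribute by attribute in O(dN log N): each attribute is sorted once (N log N) and the two empirical distributions are compared with a one-dimensional distance. The array names, the equal-sample-size assumption, and the use of a Wasserstein-style distance are assumptions for illustration; the actual StyleDiff procedure for extracting attributes from a disentangled image space and comparing them is not reproduced here.

    import numpy as np

    def attribute_differences(attrs_a, attrs_b):
        """attrs_a, attrs_b: (N, d) arrays of per-image attribute scores
        (e.g., coordinates along latent directions of a disentangled image space).
        Assumes equal N for both datasets. Returns a length-d array of
        per-attribute distances; larger values flag attributes that differ most."""
        a_sorted = np.sort(attrs_a, axis=0)  # O(dN log N)
        b_sorted = np.sort(attrs_b, axis=0)  # O(dN log N)
        # With equal sample sizes, the 1-D Wasserstein-1 distance between the
        # empirical distributions reduces to the mean absolute difference of
        # the sorted samples, computed independently for each attribute.
        return np.mean(np.abs(a_sorted - b_sorted), axis=0)

    # Usage sketch: rank attributes by how strongly the two datasets differ.
    # diffs = attribute_differences(dev_attrs, field_attrs)
    # most_different = np.argsort(diffs)[::-1][:5]

Because each attribute is handled independently after a single sort, the overall cost stays within the stated O(dN log N) bound even for large N.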