2019
DOI: 10.1145/3237189

Miscommunication Detection and Recovery in Situated Human–Robot Dialogue

Abstract: Even without speech recognition errors, robots may face difficulties interpreting natural-language instructions. We present a method for robustly handling miscommunication between people and robots in task-oriented spoken dialogue. This capability is implemented in TeamTalk, a conversational interface to robots that supports detection and recovery from the situated grounding problems of referential ambiguity and impossible actions. We introduce a representation that detects these problems and a nearest-neighbo…
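The abstract is truncated above, so the following is only a minimal illustrative sketch, not the paper's actual representation: it assumes a two-step approach of (1) classifying an instruction's grounding problem as referential ambiguity or an impossible action and (2) choosing a recovery strategy by nearest-neighbour lookup over remembered situations. The feature vector, strategy labels, and distance metric are placeholder assumptions.

```python
# Illustrative sketch only: a toy grounding check plus a nearest-neighbour
# recovery-strategy lookup. Feature sets, strategy names, and the distance
# metric are assumptions for illustration, not the TeamTalk representation.
from dataclasses import dataclass
from math import dist

@dataclass
class Situation:
    features: tuple   # e.g. (number of matching referents, distance to goal)
    recovery: str     # recovery strategy that worked in this past situation

def grounding_problem(candidates, action_feasible):
    """Classify the situated grounding problem for one instruction."""
    if not action_feasible or len(candidates) == 0:
        return "impossible-action"       # nothing the robot can act on
    if len(candidates) > 1:
        return "referential-ambiguity"   # more than one plausible referent
    return None                          # instruction grounds cleanly

def nearest_neighbour_recovery(current_features, memory):
    """Pick the recovery strategy of the most similar past situation."""
    best = min(memory, key=lambda s: dist(s.features, current_features))
    return best.recovery

# Toy usage: two candidate referents -> ambiguity -> ask a clarifying question.
memory = [
    Situation((2, 5.0), "ask-which-one"),
    Situation((0, 1.0), "report-impossible"),
]
candidates = ["door on the left", "door on the right"]
problem = grounding_problem(candidates, action_feasible=True)
if problem is not None:
    print(problem, "->", nearest_neighbour_recovery((len(candidates), 4.0), memory))
```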

Cited by 23 publications (14 citation statements)
References 94 publications

“…As interactions with conversational agents are becoming increasingly common, it is more likely that people will encounter failures with these systems. It is therefore important to investigate how people's behaviours are affected when system failures cause misunderstandings [10,57]. Researchers have however reported mixed results in the effects of robot failure on people's behaviour and perception of the robot.…”
Section: Robot Failures (mentioning)
Confidence: 99%
“…Finally, improving the conversation and understanding the subtle emotions in the communication between humans and assistants (Portela & Granell‐Canut, 2017) is a primary research line regardless of the target application domain. In this context, Hamzei et al (2020) explore structural patterns of questions related to places and their human‐generated responses, and Marge and Rudnicky (2019) investigate conversational interfaces between a human operator giving navigational instructions to a team of robots.…”
Section: Results (mentioning)
Confidence: 99%
“…For instance, users appear to pay less attention when selecting example objects, clicking Shake Head, implicitly labeling all shown objects as negative examples, even when positive ones exist in the set. It may help to infer user truthfulness during dialog (Vinanzi, Patacchiola, Chella, & Cangelosi, 2019) as an auxiliary goal, or to use user-specific dialog policies (Doering et al, 2019) to develop rapport (Marge & Rudnicky, 2019).…”
Section: Future Work (mentioning)
Confidence: 99%