2022
DOI: 10.3389/fcomp.2021.741148
Multimodal User Feedback During Adaptive Robot-Human Presentations

Abstract: Feedback is an essential part of all communication, and agents communicating with humans must be able to both give and receive feedback in order to ensure mutual understanding. In this paper, we analyse multimodal feedback given by humans towards a robot that is presenting a piece of art in a shared environment, similar to a museum setting. The data analysed contains both video and audio recordings of 28 participants, and the data has been richly annotated both in terms of multimodal cues (speech, gaze, head g…

Cited by 6 publications (3 citation statements)
References 74 publications
“…The robot presenter uses a knowledge graph to model the knowledge it is about to present, and then uses that same graph to keep track of the "grounding status" of the different pieces of information (Axelsson and Skantze, 2020). Multimodal feedback from the user (e.g., gaze, facial expressions, nods and backchannels) is interpreted as negative or positive, and the graph is updated accordingly, so that the presentation can be adapted to the user's level of knowledge and understanding (Axelsson and Skantze, 2022).…”
Section: Building Common Ground in Human-Robot Interaction
confidence: 99%
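The grounding mechanism described in the statement above can be sketched as a minimal data structure: nodes of a knowledge graph carry a grounding status that is updated from classified user feedback. All names below are hypothetical simplifications; the actual system of Axelsson and Skantze is considerably more elaborate.

```python
# Minimal sketch of a knowledge graph whose facts carry a "grounding status",
# updated from user feedback classified as positive or negative
# (hypothetical simplification of Axelsson and Skantze, 2020; 2022).

class KnowledgeGraph:
    def __init__(self):
        # fact -> grounding status: "ungrounded", "grounded", or "misunderstood"
        self.status = {}

    def add_fact(self, fact):
        # every fact starts out as not yet presented/grounded
        self.status[fact] = "ungrounded"

    def register_feedback(self, fact, polarity):
        # polarity comes from classified multimodal cues
        # (e.g., nods, gaze, backchannels): "positive" / "negative" / "neutral"
        if polarity == "positive":
            self.status[fact] = "grounded"
        elif polarity == "negative":
            self.status[fact] = "misunderstood"
        # "neutral" feedback leaves the status unchanged

    def needs_repair(self):
        # facts the presenter should rephrase or re-present
        return [f for f, s in self.status.items() if s == "misunderstood"]


kg = KnowledgeGraph()
kg.add_fact("painting-date-1893")
kg.register_feedback("painting-date-1893", "negative")
print(kg.needs_repair())  # -> ['painting-date-1893']
```

The point of routing feedback through the same graph that holds the content is that presentation planning and grounding share one state, so the adaptive presenter can decide what to repeat or elaborate directly from the graph.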
“…The robot presenter uses a knowledge graph to model the knowledge it is about to present, and then uses that same graph to keep track of the "grounding status" of the different pieces of information (Axelsson and Skantze, 2020). Multimodal feedback from the user (e.g., gaze, facial expressions, nods and backchannels) are interpreted as negative or positive, and the graph is updated accordingly, so that the presentation can be adapted to the user's level of knowledge and understanding (Axelsson and Skantze, 2022).…”
Section: Building Common Ground In Human-robot Interactionmentioning
confidence: 99%
“…Recently, Axelsson and Skantze (2022) showed that individuals interacting with a system presenting a painting to them generally used feedback in the head and speech modalities, and that feedback in the facial, body pose and gaze modalities was not important for classifying their response as positive, negative, or neutral. These results suggest that facial expressions are not important to sense for a presenting system of this kind, but may not apply in scenarios where participants are more actively involved in the interaction than a passive audience.…”
Section: Body Pose and Facial Expressions
confidence: 99%
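The finding above — that head and speech cues carry most of the classification signal — can be illustrated with a toy rule-based classifier restricted to those two modalities. The feature names and rules below are purely hypothetical and are not the authors' model.

```python
# Toy feedback classifier using only head and speech cues, illustrating
# the finding that these two modalities sufficed for polarity classification
# (hypothetical rules; not the model of Axelsson and Skantze, 2022).

POSITIVE_BACKCHANNELS = {"mm-hm", "yeah", "okay", "right"}
NEGATIVE_BACKCHANNELS = {"huh", "what", "hmm"}

def classify_feedback(head_gesture, backchannel):
    """Return 'positive', 'negative', or 'neutral' from two cues only."""
    if head_gesture == "nod" or backchannel in POSITIVE_BACKCHANNELS:
        return "positive"
    if head_gesture == "shake" or backchannel in NEGATIVE_BACKCHANNELS:
        return "negative"
    return "neutral"

print(classify_feedback("nod", ""))    # -> positive
print(classify_feedback("shake", ""))  # -> negative
print(classify_feedback("still", ""))  # -> neutral
```

A real system would replace these hand-written rules with a learned classifier over annotated cues, but the interface — two modality streams in, one polarity label out — is the same.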
“…When listeners have difficulty hearing or understanding their interlocutor, they typically use a set of conversational strategies [1] to request a repetition [2], correction [3], or clarification of what was said [4,5]. These repair initiators can take different linguistic forms, ranging from short non-lexical sounds ("hmm?")…”
Section: Introduction
confidence: 99%