2009
DOI: 10.1007/978-3-642-04617-9_68

Formalizing Joint Attention in Cooperative Interaction with a Virtual Human

Abstract: Crucial for action coordination of cooperating agents, joint attention concerns the alignment of attention to a target as a consequence of attending to each other's attentional states. We describe a formal model which specifies the conditions and cognitive processes leading to the establishment of joint attention. This model provides a theoretical framework for cooperative interaction with a virtual human and is specified in an extended belief-desire-intention modal logic.

Keywords: cooperative agents, …
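For orientation, a minimal sketch of what such a formalization typically looks like. This is a generic joint-attention condition in a BDI-style modal logic, not the paper's actual axioms; the operators Att, B, and MB are assumed here purely for illustration:

\[
  \mathit{JointAtt}(i, j, x) \;\equiv\;
  \mathit{Att}_i(x) \,\wedge\, \mathit{Att}_j(x) \,\wedge\,
  \mathit{MB}_{\{i,j\}}\!\bigl(\mathit{Att}_i(x) \wedge \mathit{Att}_j(x)\bigr)
\]

Here \(\mathit{Att}_i(x)\) reads "agent \(i\) attends to target \(x\)" and \(\mathit{MB}_{\{i,j\}}\) is a mutual-belief operator built from the individual belief operators \(B_i\) and \(B_j\). The paper's model additionally specifies the cognitive processes, such as attending to the partner's attentional state, by which this condition comes to hold.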

Cited by 6 publications (4 citation statements)
References 11 publications
“…Hoffmann et al (2009) state that this sense of social presence frequently leads participants to interact with virtual guides similarly to how they engage with real humans, which is consistent with the Media Equation theory (Reeves and Nass, 1996; Hoffmann et al, 2009). To further enhance social presence, it is moreover crucial for virtual guides to align their behavior with user expectations (Ibáñez-Martínez et al, 2008), incorporating factors such as gazing (Martinez et al, 2010; Pfeiffer-Lessmann et al, 2012; Pejsa et al, 2015) and gestures (Jerald, 2015). Additionally, natural user-agent interaction can be enhanced by including linguistic style adjustments, as found in research by, e.g., de Jong et al (2008) in terms of politeness and formality, or by Ibanez et al (2003a) and Ibanez et al (2003b) in terms of the audience and the VA's background and character.…”
Section: Characteristic (mentioning)
confidence: 61%
“…In this respect, the present framework could be a valuable tool to define respective JA situations, their affordances, and determine the respective probabilistic and temporal parameters during real-life human-agent interaction. This is an essential step for the construction of realistic artificial agents for any kind of system for human interaction (e.g., Pfeiffer-Leßmann and Wachsmuth, 2009; Yu et al, 2010, 2012; Grynszpan et al, 2012; Stephenson et al, 2018; Willemse et al, 2018).…”
Section: Naturalistic Human-agent Interaction (mentioning)
confidence: 99%
“…He makes use of information about a human's gaze and pointing gestures to assess their focus of attention. In doing so, Max is able to establish joint attention with the human communicative partner and increase the fluidity of the interaction (see Pfeiffer-Leßmann & Wachsmuth, 2009; Wachsmuth, 2008). Information about the human partner's attention, in combination with emotion simulation, intention recognition, and the ability to give feedback in conversation (Becker-Asano & Wachsmuth, 2010; Wachsmuth, 2008), make the experience of interacting with Max seem virtually real.…”
Section: Gaze (mentioning)
confidence: 99%