Proceedings of the 13th International Conference on Intelligent User Interfaces 2008
DOI: 10.1145/1378773.1378805
Beyond attention

Abstract: In a multimodal conversational interface supporting speech and deictic gesture, deictic gestures on the graphical display have been traditionally used to identify user attention, for example, through reference resolution. Since the context of the identified attention can potentially constrain the associated intention, our hypothesis is that deictic gestures can go beyond attention and apply to intention recognition. Driven by this assumption, this paper systematically investigates the role of deictic gestures …

Cited by 10 publications (1 citation statement)
References 28 publications
“…In the 30 years since, not much work has been done on how to accommodate the creation of new entities 5 (see (Wilson et al, 2016) for documents and (Li and Boyer, 2016) for tutoring dialogues about programming), and none in the visualization domain. Note we do not focus on multimodal reference resolution, another vast area (Navarretta, 2011;Qu and Chai, 2008;Eisenstein and Davis, 2006;Prasov and Chai, 2008;Iida et al, 2011;Kim et al, 2017;Sluÿters et al, 2022), even if we will briefly touch on deictic gestures in Section 3.…”
Section: Co-reference Resolution
Confidence: 99%