2006
DOI: 10.1007/11941354_28
An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

Abstract: This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies that are based on the combination of time-based and domain semantics. Then, we present the results from a user study comparing using multimodal input to using gesture input alone. The results show that a combination of speech and padd…

Cited by 51 publications (28 citation statements); references 25 publications. Citing publications span 2008 to 2023.
“…The final system allowed a user to pick and place virtual furniture in an AR scene using a combination of paddle gestures and speech commands. Irawati et al conducted a pilot user study on the benefits of multimodal interaction [17]. However, their system did not support natural free hand input and users had to memorize or refer a list of commands to interact with virtual objects.…”
Section: Related Work
confidence: 99%
“…Irawati et al [13] developed a computer vision based AR system with multimodal input, allowing a user to pick and place virtual furniture in an AR scene using a combination of paddle gestures and speech commands. In the evaluation study they found that multimodal input enabled subjects to complete a task faster than with gesture alone.…”
Section: Multimodal Gesture and Speech Interfaces in AR
confidence: 99%
“…The user wore a data glove, head mounted display and viewpoint tracking equipment. Irawati et al [6] developed a computer vision based AR systems with multimodal input, allowing a user to pick and place virtual furniture in an AR scene using a combination of paddle gestures and speech commands. In the evaluation study they found that multimodal input enabled subjects to complete a task faster than with gesture alone.…”
Section: Related Work
confidence: 99%