Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), 1992
DOI: 10.1145/142750.142825

Interactive simulation in a multi-person virtual world

Abstract: A multi-user Virtual World has been implemented combining a flexible-object simulator with a multisensory user interface, including hand motion and gestures, speech input and output, sound output, and 3-D stereoscopic graphics with head-motion parallax. The implementation is based on a distributed client/server architecture with a centralized Dialogue Manager. The simulator is inserted into the Virtual World as a server. A discipline for writing interaction dialogues provides a clear conceptual hierarchy and th…
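The architecture the abstract describes, device servers and a simulator coordinated through a single central Dialogue Manager, lends itself to a small illustration. The Python sketch below is a loose interpretation under stated assumptions: the class name, topic strings, queue-based routing, and simulator callback are all hypothetical, and the paper's actual system ran these as distributed client/server processes rather than in one process.

```python
# Hypothetical sketch only: a centralized Dialogue Manager that routes
# events between registered servers (speech, gesture, simulator, ...).
# All names and the single-process queue are illustrative assumptions;
# the paper's system ran these as distributed client/server processes.
from queue import Queue
from typing import Callable, Dict, Tuple

class DialogueManager:
    """Central hub: servers register handlers for topics; events are routed."""
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], None]] = {}
        self._events: "Queue[Tuple[str, dict]]" = Queue()

    def register(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers[topic] = handler

    def post(self, topic: str, payload: dict) -> None:
        self._events.put((topic, payload))

    def dispatch_one(self) -> None:
        topic, payload = self._events.get()
        self._handlers[topic](payload)

# The flexible-object simulator inserted into the world as one more server.
def simulator_server(payload: dict) -> None:
    print(f"simulating {payload['object']} under a {payload['force']} N load")

manager = DialogueManager()
manager.register("simulate", simulator_server)
# A gesture or speech server would post events like this one; the manager
# owns the dialogue and dispatches to whichever server handles the topic.
manager.post("simulate", {"object": "flexible sheet", "force": 3.0})
manager.dispatch_one()
```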

Cited by 58 publications (18 citation statements) | References 17 publications

“…One system, CUBRICON (Neal et al. 1998), combined speech with simplistic pointing 'gestures' for referring to objects on a map display (while CUBRICON introduced a novel approach to human-map dialogue, only point indication of locations was supported, gesture interaction was not actually implemented, and only mouse input was used). Even for non-geospatial information, there have been relatively few attempts to develop integrated gesture/speech interfaces, due to the challenges involved (Codella et al. 1992, Fukumoto et al. 1994, Koons and Sparrell 1994, Vo and Waibel 1994, Chang 2000, Zue and Glass 2000, Corradini 2002, Wahlster 2002). In all implementations above, other than those by Sharma, electronic pen or data-glove-based gestures were used, resulting in tethered interaction through specialized devices rather than 'freehand' interaction directly with the information.…”
Section: Supporting Dialogue Through Natural-Multimodal Interfaces
confidence: 99%
“…Systems that utilize the early feature-level approach are generally based on multiple Hidden Markov Models or temporal neural networks [19,20], and the recognition process in one mode influences the course of recognition in the other. We use the semantic-level approach [9,12,21,22], which utilizes individual recognizers and a multimodal integration process. The individual recognizers can be trained using unimodal data, which are easier to collect and already publicly available for modalities like speech and handwriting.…”
Section: Strengths of Speech, Pen, and Touch-Tone
confidence: 99%
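The semantic-level (late-fusion) integration this excerpt contrasts with feature-level fusion reduces to a few lines: each unimodal recognizer emits independently scored hypotheses, and a separate step fuses pairs that are compatible in time and meaning. The frame layout, scoring, and overlap test below are illustrative assumptions, not the cited systems' actual formats.

```python
# Minimal sketch of semantic-level (late) fusion: unimodal recognizers run
# independently; integration combines their scored outputs afterward. The
# Hypothesis structure and joint score are assumptions for this example.
from dataclasses import dataclass
from itertools import product
from typing import List, Optional, Tuple

@dataclass
class Hypothesis:
    meaning: str      # semantic label, e.g. "move" or "point:chair"
    score: float      # recognizer confidence in [0, 1]
    t_start: float    # start time in seconds
    t_end: float      # end time in seconds

def overlaps(a: Hypothesis, b: Hypothesis) -> bool:
    """Speech and gesture hypotheses may fuse only if they roughly co-occur."""
    return a.t_start < b.t_end and b.t_start < a.t_end

def fuse(speech: List[Hypothesis],
         gesture: List[Hypothesis]) -> Optional[Tuple[Hypothesis, Hypothesis]]:
    """Pick the time-compatible speech/gesture pair with the best joint score."""
    pairs = [(s, g) for s, g in product(speech, gesture) if overlaps(s, g)]
    if not pairs:
        return None
    return max(pairs, key=lambda p: p[0].score * p[1].score)

speech = [Hypothesis("move", 0.9, 0.0, 0.6), Hypothesis("remove", 0.4, 0.0, 0.6)]
gesture = [Hypothesis("point:chair", 0.8, 0.2, 0.5)]
best = fuse(speech, gesture)
if best is not None:
    print(best[0].meaning, best[1].meaning)  # -> move point:chair
```

Because each recognizer here is trained and run on its own modality, the fusion step is the only place the modalities interact, which is exactly what lets the excerpt's authors reuse publicly available unimodal speech and handwriting data.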
“…Koved, Lewis, Ling, and their colleagues at IBM have been using multiple workstations to support the real-time requirements of VR user interfaces [1,8]. Their VUE system assigns a workstation to each of the devices in their user interface, including a server process for each graphics renderer.…”
Section: Previous Work
confidence: 99%
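The decomposition this excerpt attributes to the VUE system, one server process per device spread across workstations to meet real-time demands, amounts in outline to a device-to-host assignment plus a per-device server. The host names and the DeviceServer type below are hypothetical, for illustration only; nothing here reproduces VUE's actual process model.

```python
# Illustrative sketch of a one-workstation-per-device decomposition: each
# input/output device gets its own server process on a dedicated host.
# Host names and the DeviceServer type are assumptions for the example.
from dataclasses import dataclass

@dataclass
class DeviceServer:
    device: str
    host: str

    def start(self) -> None:
        # In a real system this would spawn a process on `host` that owns
        # the device and streams its events to the dialogue manager.
        print(f"[{self.host}] serving {self.device}")

# One server per device; renderers get their own servers too, mirroring
# the excerpt's "server process for each graphics renderer".
assignment = {
    "speech-input":   "ws1.lab.example",
    "data-glove":     "ws2.lab.example",
    "renderer-left":  "ws3.lab.example",
    "renderer-right": "ws4.lab.example",
}
servers = [DeviceServer(device, host) for device, host in assignment.items()]
for server in servers:
    server.start()
```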