Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST 2010)
DOI: 10.1145/1866029.1866073
Combining multiple depth cameras and projectors for interactions on, above and between surfaces

Cited by 276 publications (171 citation statements)
References 30 publications
“…Projector [27], mSpaces [38], Chameleon [66], Pass-them-around [128], Peephole displays [230], dynamically defined information spaces [36], PenLight [188], MouseLight [189], Augmented Surfaces [177], PlayAnywhere [225], LightSpace [226], Bonfire [105] and X-Large virtual workspaces [109].…”
Section: Categorizing Existing Designs (mentioning; confidence: 99%)
“…Surfaces can be detected dynamically to allow projection onto moving surfaces such as paper or people's hands [176,226]. Advanced prototype systems consisting of sensors and projectors have been developed to simultaneously map the environment and support projection-based interactions [147].…”
Section: Related Work (mentioning; confidence: 99%)
“…Researchers have also explored the possibilities of providing in-situ feedback and guidance using spatial augmented reality, which uses a combination of projectors and depth cameras and thus eliminates the need for users to wear additional apparel. For example, LightSpace [27] shows when users are tracked by the system by projecting colored highlights on the user's body. LightGuide [28] uses a projector and depth cameras to project visualizations on the user's body that provide movement guidance.…”
Section: In-situ Feedback and Guidance (mentioning; confidence: 99%)
“…However, the problem of multi-depth camera room supervision for other applications is tackled by [16] who developed a multi-Kinect system for interaction with the environment and a past self, by [28] who are focusing on the "The room is the computer" and the "Body as display" approach as well as [8] who are tackling the problem of interference induced by using multiple structured light Primesense (Primesense, Tel-Aviv, Israel) devices. Other approaches like [17] and [29] focus on multi-Kinect people tracking and dynamic scene reconstruction from asynchronous depth cameras.…”
Section: Introduction (mentioning; confidence: 99%)