2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06)
DOI: 10.1109/cvprw.2006.217
wikiTable: finger driven interaction for collaborative knowledge-building workspaces

Cited by 5 publications (4 citation statements)
References 7 publications
“…Letessier and Bérard [10] matched a circular template to binarized images in order to detect fingertips independently of the orientation of the hand. Baraldi et al. [11] used the same method and built a simple vision-based classifier that discriminates between three postures based on the number of stretched fingers. Wilson [12] developed a robust way to detect a pinch gesture, resulting in an effective way to interact with the computer.…”
Section: Related Work
confidence: 97%
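The excerpt gives no implementation details for the circular-template approach it attributes to Letessier and Bérard, but the idea can be sketched with plain NumPy: correlate a disk-shaped template (rotation-invariant by construction) against a binarized hand image and keep the peaks. All function names, the template radius, and the match threshold below are illustrative assumptions, not the original method's parameters.

```python
import numpy as np

def make_circle_template(radius):
    """Binary disk approximating a fingertip profile (illustrative)."""
    size = 2 * radius + 1
    yy, xx = np.mgrid[:size, :size]
    return ((yy - radius) ** 2 + (xx - radius) ** 2 <= radius ** 2).astype(float)

def match_template(binary, template):
    """Normalized cross-correlation score map ('valid' windows only).
    Brute force for clarity; fine for small illustrative images."""
    th, tw = template.shape
    h, w = binary.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    scores = np.zeros((h - th + 1, w - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = binary[y:y + th, x:x + tw].astype(float)
            p = patch - patch.mean()
            pnorm = np.sqrt((p ** 2).sum())
            if pnorm > 0:
                scores[y, x] = (p * t).sum() / (pnorm * tnorm)
    return scores

def detect_fingertips(binary, radius=3, threshold=0.95):
    """Return (row, col) centres where the disk template matches.
    Because the template is circular, detection does not depend on
    the orientation of the hand."""
    tpl = make_circle_template(radius)
    scores = match_template(binary, tpl)
    peaks = np.argwhere(scores >= threshold)
    return [(int(r) + radius, int(c) + radius) for r, c in peaks]

# Tiny synthetic "binarized" frame with one fingertip-like blob.
img = np.zeros((20, 20))
img[7:14, 7:14] = make_circle_template(3)
tips = detect_fingertips(img, radius=3, threshold=0.95)
```

Counting the detected tips (`len(tips)`) also gives the stretched-finger count on which, per the statement above, Baraldi et al.'s three-posture classifier is based.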
“…An interactive workspace featuring vision-based gesture recognition allows multiple users to collaborate [2] in face-to-face contexts, providing a common workspace where users can build knowledge (in activities such as brainstorming or problem-solving sessions) while exploiting useful scenario-specific characteristics.…”
Section: Knowledge Exploration and Building
confidence: 99%
“…The current TANGerINE system layout consists of a ceiling-mounted case that embeds all of the required elements: computer, projector, camera and illuminator. It targets the horizontal surface of an ordinary table positioned under the case, on which the interface is also visualized [9].…”
Section: Current Status
confidence: 99%
“…Applications that deal with browsing and exploration of multimedia contents are usually based on large interactive surfaces, on which users can manipulate elements through direct and spontaneous actions. This research led to systems based on gesture recognition and analysis [8] of users' bare hands [1], [9]. In the case of complex applications, featuring multiple options and actions, simple and spontaneous hand gestures turn out not to be enough.…”
Section: Introduction
confidence: 99%