2015
DOI: 10.1109/tvcg.2015.2391860
3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint

Abstract: In this paper we present a novel framework for simultaneous detection of click action and estimation of occluded fingertip positions from egocentrically viewed single-depth image sequences. For the detection and estimation, a novel probabilistic inference based on knowledge priors of clicking motion and clicked position is presented. Based on the detection and estimation results, we were able to achieve a fine resolution level of bare-hand interaction with virtual objects in egocentric viewpoint. Our contr…

Cited by 87 publications (43 citation statements) | References 31 publications
“…The basic hand-based processing steps were hand localization (in particular, hand detection, segmentation, and pose estimation) and hand gesture recognition. The recognition of specific hand gestures allowed the user to provide inputs and commands to the system in order to produce a specific action (e.g., selection of a virtual object by recognizing the clicking gesture [144]). The use of depth sensors has usually been preferred, since the localization of hands and objects can be more robust to illumination changes.…”
Section: HCI/HRI
confidence: 99%
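The pipeline described above (depth-based hand segmentation, fingertip localization, then click recognition) can be illustrated with a minimal sketch. This is not the paper's probabilistic method — it is a generic, assumption-laden toy: depth values in millimetres, a fixed near/far band for the hand, the nearest masked pixel as the fingertip, and a click declared when the fingertip recedes toward a surface by a fixed threshold. All function names and thresholds are hypothetical.

```python
import numpy as np

def segment_hand(depth, near=200.0, far=600.0):
    """Mask pixels whose depth (mm) falls inside the assumed hand range.

    `near`/`far` are illustrative thresholds, not values from the paper.
    """
    return (depth > near) & (depth < far)

def fingertip(depth, mask):
    """Crude fingertip estimate: the closest pixel inside the hand mask."""
    if not mask.any():
        return None
    masked = np.where(mask, depth, np.inf)
    return np.unravel_index(np.argmin(masked), depth.shape)

def detect_click(prev_depth, cur_depth, tip, push=30.0):
    """Flag a click when the fingertip moves >= `push` mm away from the
    camera between frames (i.e., toward the touched surface)."""
    if tip is None:
        return False
    return (cur_depth[tip] - prev_depth[tip]) >= push
```

A real system would replace each stage with something far more robust (model-based pose estimation, temporal filtering, occlusion reasoning), but the three-stage structure — segment, localize, classify motion — matches the steps the citation statement describes.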
“…Some authors implemented multiple depth sensors [143]: one specifically for short distances (i.e., up to 1 m), to capture more accurate hand information, and a long-range depth camera to reproduce the correct proportions between the physical and virtual environments [143]. To improve the robustness of hand localization, other systems combined multiple hand localization approaches, for example hand pose estimation in conjunction with fingertip detection [144]. This approach can be helpful when the objective is to localize the fingertips in situations with frequent self-occlusions.…”
Section: HCI/HRI
confidence: 99%
“…It uses a one-to-one mapping method that mirrors the real world in a very intuitive way. The limitation of this method is that the user has difficulty perceiving depth in a virtual environment; moreover, the user can only select menu items within hand's reach [2].…”
Section: Figure 1 Implied Metaphors For Graphic User Interface Design
confidence: 99%
“…Hand tracking, an actively studied problem in computer vision, is employed to manipulate virtual objects in Virtual Reality (VR) and Augmented Reality (AR) applications such as those presented in [1,2]. Among the many available means of interaction, the human hand is distinctive in that it functions as a natural and intuitive 3D interface through which various activities are performed.…”
Section: Introduction
confidence: 99%