2021 IEEE Virtual Reality and 3D User Interfaces (VR)
DOI: 10.1109/vr50410.2021.00076
TapID: Rapid Touch Interaction in Virtual Reality using Wearable Sensing

Figure 1: TapID is a wrist-worn device that detects taps on surfaces and identifies the tapping finger, which, combined with tracked hand poses, triggers input in VR. (a) The user is wearing two TapID bands for (b) touch interaction with surface widgets in VR, e.g., for text input, web browsing, or (c) document authoring using familiar front-end apps. (d) Widgets can also be registered to the body itself, using TapID to detect on-body taps and identify the tapping finger, here to rotate an image held in hand.

Cited by 48 publications (9 citation statements) · References 44 publications
“…The method first fits a 3D plane to the wall or tabletop surface and thresholds the distance between the plane and the fingertip. TapID [40] uses subtle accelerations sensed by a pair of wrist-worn IMUs to detect contact between fingers and surfaces. TouchAnywhere [43] detects touch heuristically by estimating the intersection between the finger and its shadow, but it only detects binary contact and does not include pressure ground truth.…”
Section: Related Work
confidence: 99%
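The two contact-detection strategies quoted above — thresholding the fingertip-to-plane distance, and spotting tap spikes in a wrist-worn IMU's acceleration signal — can be sketched as follows. This is an illustrative reconstruction, not code from TapID or the cited systems: the 5 mm contact threshold, the jerk threshold, the sample rate, and all function names are assumptions.

```python
import math

def point_plane_distance(p, plane_point, normal):
    """Unsigned distance from point p to the plane given by a point and a normal."""
    diff = [a - b for a, b in zip(p, plane_point)]
    dot = sum(d * n for d, n in zip(diff, normal))
    return abs(dot) / math.sqrt(sum(n * n for n in normal))

def is_touching(fingertip, plane_point, normal, eps=0.005):
    """Binary touch test: fingertip within eps metres of the fitted surface
    (eps = 5 mm is an illustrative threshold, not a value from the papers)."""
    return point_plane_distance(fingertip, plane_point, normal) <= eps

def detect_taps(samples, rate_hz=1000, jerk_threshold=3.0, refractory_s=0.05):
    """Flag tap events as sharp jumps in acceleration magnitude.

    samples: acceleration magnitudes in g (hypothetical input). A tap shows
    up as a brief high-frequency spike, so we threshold the sample-to-sample
    difference (a crude jerk estimate) and suppress re-triggers within a
    short refractory window. Returns the sample indices of detected taps.
    """
    refractory = int(refractory_s * rate_hz)
    taps, last = [], -refractory
    for i in range(1, len(samples)):
        if i - last >= refractory and abs(samples[i] - samples[i - 1]) >= jerk_threshold:
            taps.append(i)
            last = i
    return taps
```

For example, a fingertip hovering 3 mm above a fitted tabletop plane would register as touching, while one 2 cm above would not; a flat 1 g accelerometer trace with two brief 5 g spikes yields two tap events. A real pipeline would replace the jerk heuristic with a trained classifier, as the quoted excerpt suggests TapID does.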
“…The interaction between robots and controllers is a rather complex process that involves highly sensitive sensing, signal and data processing, and the execution of control [43,49] . In particular, this process places high demands on response speed and execution accuracy.…”
Section: P-HMIs for Robotic Control
confidence: 99%
“…This review covers recent progress on smart flexible piezoelectric devices in the field of HMIs. Research concerning flexible piezoelectric electronics is comprehensively summarized and highlighted along with real-life P-HMIs, including robotic control, the Internet of Things (IoT), sports coaching, acoustic therapeutics, and machine learning-enhanced P-HMIs [Figure 1] [27,42-46] . Finally, the current challenges and prospects of P-HMIs are discussed from our perspective, ranging from materials science and structural design to diverse potential applications in the near future.…”
Section: Introduction
confidence: 99%
“…Given a video sequence or an RGB image captured by a camera or mobile device, the task of markerless pose estimation is to predict the positions of the body keypoints (including joints and vertices) relative to a certain coordinate system [ 1 ]. As the hands are among the parts of the body most frequently used to interact with the environment, hand pose estimation (HPE) is of great research interest and has numerous applications in areas such as robotics, virtual reality (VR) and augmented reality (AR), AI-aided diagnosis, and smart human-computer interaction (HCI) systems [ 2 , 3 ]. Beyond those downstream applications, HPE also plays an important role in many basic upstream tasks, including gesture recognition [ 4 , 5 , 6 ] and sign language recognition (SLR) [ 7 , 8 ].…”
Section: Introduction
confidence: 99%
“…In recent years, with the rapid development of hardware (e.g., the Microsoft Kinect [ 9 ], the Oak-D camera [ 10 ], wearable sensors) and advances in deep learning algorithms, HPE research has achieved considerable progress. State-of-the-art approaches achieve promising performance in controlled environments across different data modalities, such as 2D images, 2D images with depth maps [ 11 , 12 , 13 ], and wearable gloves and sensors [ 3 , 14 ]. Among these modalities, since single-view 2D RGB images are far more readily available than sensor data and depth images, HPE from a single RGB image can be widely used and easily deployed on various end devices.…”
Section: Introduction
confidence: 99%