Fig. 1: Screenshots of the left view showing correct occlusion between the user's hands and the virtual object (a 3D textured crate). The AR marker cannot be seen because it is directly behind the virtual object.

Abstract: In this work we present a novel framework for real-time interaction with 3D models in augmented virtual reality. Our framework incorporates view-dependent stereoscopic rendering of the reconstructed environment, the user's hands, and a virtual object, together with high-precision gesture recognition for manipulating that object. The proposed setup consists of a Creative RGB-D camera, an Oculus Rift virtual reality head-mounted display (HMD), a Leap Motion hand and finger tracker, and an AR marker. The system augments the user's hands relative to their point of view (POV) using the depth sensor mounted on the HMD, and allows manipulation of the environment through the Leap Motion sensor. The AR marker is used to determine the location of the Leap Motion sensor, which helps consolidate the transformations between the Oculus and Leap Motion coordinate frames. Combined with accurate pose information from the Oculus HMD, the system tracks the user's head and fingers with six degrees of freedom (6-DOF) to provide a spatially accurate augmentation of the user's virtual hands. This approach achieves a high level of user immersion because augmented objects occlude the user's hands properly, which is not possible with conventional AR. We hypothesize that users of our system will perform object manipulation tasks better in this augmented VR setup than in virtual reality (VR), where the user's hands are either not visible or, if visible, always occlude virtual objects.
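The transformation consolidation described above can be illustrated as a chain of rigid-body transforms: the HMD camera observes the AR marker, and the Leap Motion's pose relative to that marker is known, so Leap Motion measurements can be expressed in the HMD frame. The following is a minimal sketch of this composition; the specific poses, frame names, and the `make_transform` helper are illustrative assumptions, not the paper's actual calibration values.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: the marker as seen from the HMD's depth camera, and the
# Leap Motion sensor's fixed offset relative to the marker.
T_hmd_marker = make_transform(np.eye(3), np.array([0.0, -0.2, -0.5]))   # marker in HMD frame
T_marker_leap = make_transform(np.eye(3), np.array([0.0, 0.05, 0.0]))   # Leap in marker frame

# Consolidated transform: express Leap Motion measurements in the HMD frame.
T_hmd_leap = T_hmd_marker @ T_marker_leap

# A fingertip at the Leap Motion origin, in homogeneous coordinates.
fingertip_leap = np.array([0.0, 0.0, 0.0, 1.0])
fingertip_hmd = T_hmd_leap @ fingertip_leap
print(fingertip_hmd[:3])  # fingertip position in the HMD frame
```

With the fingertip expressed in the HMD frame, it can be rendered from the user's POV and tested for occlusion against the virtual object in a common coordinate system.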
A fundamental open problem in SLAM is the effective representation of the map in unknown, ambiguous, complex, and dynamic environments. Existing approaches to SLAM use map representations that store individual features (range measurements, image patches, or higher-level semantic features) and their locations in the environment. This choice of map representation imposes limitations that are in many ways unfavourable for real-world applications. In this paper, we explore a new approach to SLAM that redefines sensing and robot motion as acts of deformation of a differentiable surface. Distance fields and level set methods are used to define parallels to the components of the SLAM estimation process, and an algorithm is developed and demonstrated. The resulting variational framework can represent complex dynamic scenes and spatially varying uncertainty in the sensor and robot models.
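To make the distance-field idea concrete, here is a minimal 2D sketch of a surface represented implicitly as the zero level set of a signed distance field, with a new observation fused by a pointwise minimum (the standard union operation on signed distance fields). The grid size, shapes, and fusion rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# A 2D grid over [-1, 1] x [-1, 1].
n = 101
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)

# A circular obstacle represented implicitly: phi is negative inside the
# surface, positive outside, and zero exactly on it.
radius = 0.5
phi = np.sqrt(X**2 + Y**2) - radius  # signed distance to the circle

# A new observation of a second, smaller shape is fused via a pointwise
# minimum, deforming the zero level set to include the new surface.
phi_meas = np.sqrt((X - 0.6)**2 + Y**2) - 0.2
phi_fused = np.minimum(phi, phi_meas)

# The zero level set of phi_fused is the updated surface estimate.
inside = phi_fused < 0
print(inside.sum())  # number of grid cells inside the fused surface
```

Because the field is defined on a grid rather than as a feature list, complex and evolving geometry is handled uniformly, which is the representational advantage the abstract points to.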