Abstract. Conventional local features such as SIFT or SURF are robust to scale and rotation changes but sensitive to large perspective changes. Because perspective change inevitably occurs when a 3D object moves, using these features to estimate the pose of a 3D object is challenging. In this paper, we extend our previous work on viewpoint generative learning to 3D objects. Given a model of a textured object, we virtually generate several patterns of the model from different viewpoints and select stable keypoints from those patterns. Our system then learns a collection of feature descriptors from the stable keypoints. Finally, we are able to estimate the pose of a 3D object by using these robust features. Our experimental results demonstrate that our system is robust against large viewpoint changes and even partial occlusion.
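The stable-keypoint selection step outlined above can be sketched as follows. This is a hypothetical simplification: the actual method renders the textured 3D model from many virtual viewpoints and runs a real feature detector, whereas here the per-view detections, the stability criterion, and all thresholds are illustrative assumptions.

```python
def select_stable_keypoints(detections_per_view, min_ratio=0.8, tol=2.0):
    """detections_per_view: list of lists of (x, y) keypoints, each list
    assumed already back-projected into the model's reference frame.
    A keypoint is 'stable' if it reappears (within tol pixels) in at
    least min_ratio of the generated views."""
    reference = detections_per_view[0]
    stable = []
    for (rx, ry) in reference:
        hits = 0
        for view in detections_per_view:
            # Count a hit if any detection in this view lies within tol.
            if any((rx - x) ** 2 + (ry - y) ** 2 <= tol ** 2 for (x, y) in view):
                hits += 1
        if hits / len(detections_per_view) >= min_ratio:
            stable.append((rx, ry))
    return stable

views = [
    [(10, 10), (50, 40), (80, 90)],   # reference view
    [(10.5, 9.8), (49, 41)],          # simulated viewpoint 1
    [(9.7, 10.2), (80.4, 89.5)],      # simulated viewpoint 2
]
print(select_stable_keypoints(views, min_ratio=0.8))  # → [(10, 10)]
```

With `min_ratio=0.8`, only the keypoint redetected in all three views survives; descriptors would then be learned only at such points.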
In this paper, we present a system for visualizing temperature changes in a scene using an RGB-D camera coupled with a thermal camera. This system has applications in the maintenance of power equipment. We propose a two-stage approach comprising an offline and an online phase. During the first stage, after calibration, we generate a 3D reconstruction of the scene from the color and thermal data. We then apply the Viewpoint Generative Learning (VGL) method to the colored 3D model to create a database of descriptors obtained from features robust to strong viewpoint changes. During the second, online phase, we compare the descriptors extracted from the current view against those in the database to estimate the pose of the camera. We can then display the current thermal data and compare it with the data saved during the offline phase.
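The online matching step can be sketched as below, assuming a Lowe-style nearest-neighbour ratio test against the VGL database; the descriptor values, the ratio threshold, and the function names are illustrative, not taken from the paper.

```python
def match_to_database(query_descs, db_descs, ratio=0.8):
    """Return (query_index, db_index) pairs whose best database match
    is clearly closer than the second best (ratio test)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches = []
    for qi, q in enumerate(query_descs):
        ranked = sorted(range(len(db_descs)), key=lambda di: dist2(q, db_descs[di]))
        best, second = ranked[0], ranked[1]
        # Accept only unambiguous matches (squared-distance ratio test).
        if dist2(q, db_descs[best]) < (ratio ** 2) * dist2(q, db_descs[second]):
            matches.append((qi, best))
    return matches

db = [(0.0, 1.0), (1.0, 0.0), (0.9, 0.1)]   # toy 2-D descriptors
query = [(0.05, 0.95), (0.5, 0.5)]
print(match_to_database(query, db))  # → [(0, 0)]
```

The second query descriptor is nearly equidistant from two database entries, so the ratio test rejects it; the surviving 2D–3D correspondences would then feed a standard pose solver such as PnP.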
On the one hand, video games are dedicated to entertainment. In recent years, the emergence of consumer hardware dedicated to games has driven great progress in realism and gameplay. Graphics rendering, physics engines, digital surround sound, and new interaction interfaces are examples of areas that have benefited from these improvements and contribute widely to the gaming experience. On the other hand, virtual reality focuses on the user's presence, that is, the indubitable feeling of belonging to the virtual environment. As this goal is very hard to reach, studies have to focus on the human through several research directions, such as immersion (3D vision, sound spatialization, haptic devices) and interaction, which has to be as natural and non-intrusive as possible. Recent research on intersensoriality, metaphorical interactions, and brain-computer interfaces gives examples of what could be achieved in immersion and interaction. At this point, we can argue that virtual reality can provide new methods and resources for games. Unfortunately, virtual reality rooms are expensive and difficult to deploy, which is probably the main reason why virtual reality remains a laboratory experiment or is confined to industrial simulators. Our contribution is twofold: to combine video games and virtual reality through two different virtual reality game solutions, and to design them with consumer-grade components. This paper first presents a survey of both current video game evolutions and virtual reality research. We also give some examples of cross-benefits between video games and virtual reality. To illustrate this last point, we describe two virtual reality applications created by our research team and dedicated to gaming.
Finally, as a prospective discussion, we address three points: some recent virtual reality systems potentially applicable to home gaming, some strengths of DG that VR developers should incorporate into VR systems, and, last, some lines of enquiry so that the union between VR and DG can at last be consummated.
An efficient system is presented that upsamples the depth map captured by a Microsoft Kinect while jointly reducing the effect of noise. The upsampling is carried out by detecting and exploiting the piecewise locally planar structure of the downsampled depth map, based on the corresponding high-resolution RGB image. The amount of noise is reduced by simultaneously accumulating the downsampled data. By exploiting the massively parallel computing capability of modern commodity GPUs, the system is able to maintain a high frame rate. Our system is observed to produce an upsampled depth map that is very close to the original depth map, both visually and quantitatively.
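A toy 1-D sketch of the color-guided, piecewise-planar idea is given below. It is a strong simplification of the actual system: low-res depth samples are linearly ("planar" in 1-D) interpolated, except where the high-resolution color signal shows an edge, in which case the sample beyond the discontinuity is propagated instead of blending across it. All thresholds and data are illustrative.

```python
def upsample_depth_1d(depth_lo, color_hi, factor, edge_thresh=30):
    """Upsample a 1-D depth signal by `factor`, guided by a high-res
    color signal: blend linearly within a segment, but snap to the far
    sample once a color edge is crossed."""
    depth_hi = []
    for i in range(len(depth_lo) - 1):
        d0, d1 = depth_lo[i], depth_lo[i + 1]
        for s in range(factor):
            t = s / factor
            j = i * factor + s
            # Color edge between segment start and this high-res pixel?
            if abs(color_hi[j] - color_hi[i * factor]) > edge_thresh:
                depth_hi.append(d1)                   # do not blend across the edge
            else:
                depth_hi.append(d0 + t * (d1 - d0))   # linear (planar) fill
    depth_hi.append(depth_lo[-1])
    return depth_hi

depth_lo = [1.0, 1.0, 5.0]                       # depth step between samples 1 and 2
color_hi = [100, 100, 100, 100, 100, 200, 200]   # color edge at index 5
print(upsample_depth_1d(depth_lo, color_hi, factor=3))
```

Plain interpolation would smear the depth step across the whole segment; the color edge keeps the discontinuity sharp at index 5.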
Abstract—Stereoscopic displays are becoming very popular as more and more content becomes available. As an extension, auto-stereoscopic screens allow several users to watch stereoscopic images without wearing any glasses. At present, synthesized content is the easiest way to provide, in real time, all the multiple input images required by this kind of technology. Live video, however, is very important in fields such as augmented reality, yet remains difficult to display on auto-stereoscopic screens. In this paper, we present a system in which a depth camera and a color camera are combined to produce the multiple input images in real time. The result of this approach can easily be used with any kind of auto-stereoscopic screen.
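Generating the extra views from one color + depth pair is commonly done with depth-image-based rendering (DIBR); the abstract does not detail its method, so the following is a generic 1-D toy sketch with illustrative constants: each pixel is shifted by a disparity inversely proportional to its depth, and pixels are painted far-to-near so closer surfaces correctly occlude farther ones.

```python
def render_view(color, depth, baseline=4.0, hole=0):
    """Warp a 1-D color row into a virtual viewpoint using its depth row.
    `baseline` scales the disparity; `hole` marks disoccluded pixels."""
    out = [hole] * len(color)
    # Paint far-to-near so nearer pixels overwrite occluded ones.
    for i in sorted(range(len(color)), key=lambda i: -depth[i]):
        disparity = int(round(baseline / depth[i]))
        j = i + disparity
        if 0 <= j < len(color):
            out[j] = color[i]
    return out

color = [10, 20, 30, 40, 50]
depth = [10.0, 10.0, 2.0, 10.0, 10.0]   # pixel 2 is close to the camera
print(render_view(color, depth))  # → [10, 20, 0, 40, 30]
```

The near pixel (value 30) shifts right and occludes the background, leaving a disocclusion hole (0) at its original position; a full system would run such a warp per output view of the auto-stereoscopic screen and inpaint the holes.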