for high-quality meshes in terms of accuracy, and it outperforms them on low-quality scans where noise, holes, and obscured parts are prevalent.
In this paper, we present a simple yet effective calibration method for multiple Kinects, i.e. a method that finds the relative position of RGB-depth cameras, as opposed to conventional methods that find the relative position of RGB cameras. We first find the mapping function between the RGB camera and the depth camera mounted on one Kinect. With such a mapping function, we propose a scheme that is able to estimate the 3D coordinates of the corners extracted from a standard calibration chessboard. In this way, we are able to build the 3D correspondences between two Kinects directly. This reduces the calibration to a simple least-squares minimization problem with a very stable solution. Furthermore, by using two mirrored chessboard images on a thin board, we are able to calibrate two Kinects facing each other, something that is intractable using traditional calibration methods. We demonstrate our proposed method with real data and show very accurate calibration results, namely less than 7mm reconstruction error for objects at a distance of 1.5m, using around 7 frames for calibration.
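Once 3D-to-3D correspondences between two depth cameras are available, the least-squares minimization described above has a well-known closed-form solution via SVD (the Kabsch algorithm). The sketch below is an illustration of that general technique, not the authors' exact implementation:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform mapping point set P onto Q.

    P, Q: (N, 3) arrays of corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) minimizing
    sum_i ||R @ P[i] + t - Q[i]||^2 (Kabsch algorithm).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Because the solution is closed-form, it is stable even with a small number of correspondences (a handful of chessboard corners per frame), which is consistent with the few-frame calibration reported in the abstract.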
We propose a keypoint-based approach, referred to as KPhub-PC, to estimate high-fidelity human body models from low-quality point clouds acquired with an affordable 3D scanner, and a variant, KPhub-I, that achieves the same purpose from low-resolution single images taken by smartphones. In KPhub-PC, a sparse set of keypoints is annotated to guide the deformation of the parametric 3D human body model SMPL, yielding a high-fidelity human body model that explains the target point cloud. Complementing this point-cloud pipeline, KPhub-I is designed to estimate accurate 3D human body models from single 2D images: the SMPL model is fitted to 2D joints and the boundary of the human body, both detected automatically with CNN-based methods. Considering that people are in stable poses most of the time, a stable-pose prior is defined from the CMU motion capture dataset to further improve accuracy. Extensive experiments demonstrate that, on both types of user-generated data, the proposed approaches build believable and animatable human body models robustly. Our approach outperforms state-of-the-art methods in the accuracy of both human body shape and pose estimation.
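Fitting a parametric model to detected 2D joints with a pose prior amounts to minimizing a data term plus a weighted prior term. The toy sketch below illustrates only that objective structure: SMPL's pose-to-joint map is nonlinear, but here it is replaced by a hypothetical fixed linear basis so the fit reduces to regularized least squares with a closed-form solution:

```python
import numpy as np

# Toy stand-in for the fit: pose parameters theta map to stacked 2D
# joint coordinates through a fixed linear basis B (an assumption for
# illustration; the real SMPL mapping is nonlinear).
rng = np.random.default_rng(1)
n_joints, n_params = 15, 10
B = rng.standard_normal((2 * n_joints, n_params))  # pose -> 2D joints
theta_true = rng.standard_normal(n_params)
obs = B @ theta_true                               # "detected" 2D joints

mu = np.zeros(n_params)   # stable-pose prior mean (e.g. from mocap data)
w = 0.1                   # prior weight

# Minimize ||B @ theta - obs||^2 + w * ||theta - mu||^2 in closed form:
# (B^T B + w I) theta = B^T obs + w mu.
A = B.T @ B + w * np.eye(n_params)
theta_hat = np.linalg.solve(A, B.T @ obs + w * mu)
```

The prior term pulls the estimate toward common, stable poses, trading a small bias for robustness when the 2D detections are noisy or ambiguous.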
Traffic jams are a common and frustrating part of urban traffic. The most frustrating part is not that you have to wait for a long time, but that you do not even know how long you have to wait or what caused the jam. However, this pain of being trapped in a traffic jam seems to be neglected by existing research, which focuses on either mathematical modeling or optimal routing for those not yet trapped. In this paper, we propose a traffic jam awareness and observation system using mobile phones. It can tell a driver how many vehicles ahead are trapped in the jam and how long the driver will probably have to wait. Moreover, it can provide real-time video streams from the head vehicles of the traffic queue, so the driver can see what caused the jam and the progress of clearing it. The system is environment-independent; it works even when the traffic jam occurs in a tunnel. Experiments show that our system can find the head vehicles of the traffic queue and report the queue length accurately, and that the video streams from the head vehicles largely reflect the actual situation of the traffic jam.