Monocular plenoptic cameras are slightly modified, off-the-shelf cameras with novel capabilities: they allow truly passive, high-resolution range sensing through a single camera lens. Commercial plenoptic cameras, however, presently deliver range data in non-metric units, which is a barrier to novel applications, e.g., in robotics. In this work we revisit the calibration of focused plenoptic cameras and put forward a novel approach that leverages traditional camera calibration methods in order to deskill the calibration procedure and increase its accuracy. First, we decouple the estimation of parameters related to brightness images from the estimation of parameters related to depth data. Second, we present novel initialization methods for the parameters of the thin lens camera model: the only information required for calibration is now the size of the pixel element and the geometry of the calibration plate. The accuracy of the calibration results corroborates our belief that monocular plenoptic imaging is a disruptive technology capable of conquering new markets as well as traditional imaging domains.
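For context, the thin lens model referred to above relates an object at distance a in front of a lens of focal length f to its sharp image at distance b behind the lens. As a minimal sketch in generic notation (the symbols here are ours, not necessarily those used in the paper):

\[
  \frac{1}{f} = \frac{1}{a} + \frac{1}{b}
\]

Since image-side quantities are observed in pixels, converting them to metric units requires the pixel element size, which, together with the known geometry of the calibration plate on the object side, plausibly explains why these two pieces of information suffice to initialize the model.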
Since 2010 the German Aerospace Center (DLR) has been working on the project ATON (Autonomous Terrain-based Optical Navigation). Its objective is the development of technologies that allow autonomous navigation of spacecraft in orbit around, and during landing on, celestial bodies such as the Moon, planets, asteroids, and comets. The project has developed various image processing techniques and optical navigation methods, as well as sensor data fusion. The setup, which is applicable to many exploration missions, consists of an inertial measurement unit (IMU), a laser altimeter, a star tracker, and one or multiple navigation cameras. In the past years, several milestones have been achieved. It started with the setup of a simulation environment, including the detailed simulation of camera images. This was continued with hardware-in-the-loop tests in the Testbed for Robotic Optical Navigation, where images were generated by real cameras in a simulated, downscaled lunar landing scene. Data was recorded
Head-mounted displays (HMDs) allow the visualization of virtual content and the change of view perspectives in a virtual reality (VR). Besides entertainment purposes, such displays also find application in augmented reality, VR training, and tele-robotic systems. The quality of visual feedback plays a key role in the interaction performance of such setups. In recent years, high-end computers and displays have reduced simulator sickness with respect to nausea symptoms, while new visualization technologies are required to further reduce oculomotor and disorientation symptoms. The so-called vergence-accommodation conflict (VAC) in standard stereoscopic displays has so far prevented intensive use of 3D displays. The VAC describes the visual mismatch between the depth of the projected stereoscopic 3D image and the optical distance to the HMD screen. This conflict can be resolved by displays that provide the correct focal distance. The light-field HMD of this study provides close-to-continuous depth and high image resolution, enabling highly natural visualization. This paper presents the first user study on the visual comfort of light-field displays with a close-to-market HMD, based on complex interaction tasks. The results provide first evidence that light-field technology brings clear benefits to the user in terms of physical use comfort, workload, and depth-matching performance.
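As an illustrative quantification (our formulation, not taken from the study): the severity of the VAC is commonly expressed in diopters as the difference between the reciprocal of the vergence distance d_v (where the stereoscopic content appears) and the reciprocal of the accommodation distance d_a (the optical distance of the screen), both in meters:

\[
  \mathrm{VAC} = \left| \frac{1}{d_v} - \frac{1}{d_a} \right|
\]

Mismatches beyond a few tenths of a diopter are commonly associated with visual discomfort, which is why a light-field display that drives this difference toward zero by reproducing approximately correct focal cues can improve comfort.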