OmniPhotos: Casual 360° VR Photography

Figure 1: OmniPhotos are 360° VR photographs with motion parallax that can be casually captured in a single 360° video sweep. Capturing takes 3-10 seconds and, once processed into an image-based scene representation, OmniPhotos can be viewed freely in consumer VR headsets. Please note that our figures are animated to best convey our results; please view with Adobe Reader.
Virtual reality headsets are becoming increasingly popular, yet it remains difficult for casual users to capture immersive 360° VR panoramas. State-of-the-art approaches require capture times that usually far exceed a minute, and they often support only a limited range of head motion. We introduce OmniPhotos, a novel approach for quickly and casually capturing high-quality 360° panoramas with motion parallax. Our approach requires a single sweep with a consumer 360° video camera as input, which takes less than 3 seconds to capture with a rotating selfie stick or 10 seconds handheld. This makes our capture time an order of magnitude faster than that of any previous VR photography approach supporting motion parallax. We improve the visual rendering quality of our OmniPhotos by alleviating vertical distortion using a novel deformable proxy geometry, which we fit to a sparse 3D reconstruction of the captured scene. In addition, the 360° input views significantly expand the available viewing area, and thus the range of motion, compared to previous approaches. We have captured more than 50 OmniPhotos and show video results for a large variety of scenes. We will make our code available.
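The deformable proxy geometry mentioned in the abstract is, at its core, a sphere whose shape is adapted to the sparse 3D reconstruction so that rendered views line up with the true scene depth. The sketch below is a rough illustration of that general idea, not the authors' implementation: the function name fit_spherical_proxy, the grid resolution, and the smoothness weight lambda_smooth are all illustrative assumptions. It fits one radius per vertex of a coarse lat-long sphere mesh to an SfM point cloud via regularized least squares.

```python
# Minimal sketch (not the paper's implementation): fit a deformable
# spherical proxy to a sparse SfM point cloud by optimizing one radius
# per sphere vertex, with Laplacian smoothing across the vertex grid.
# Grid resolution and smoothness weight are illustrative choices;
# assumes n_phi >= 3 and a scene centred at the origin.
import numpy as np

def fit_spherical_proxy(points, n_theta=20, n_phi=40, lambda_smooth=1.0):
    """points: (N, 3) sparse 3D points.
    Returns per-vertex radii (n_theta, n_phi) of the deformed sphere."""
    # Spherical coordinates of every scene point.
    r = np.linalg.norm(points, axis=1)
    theta = np.arccos(np.clip(points[:, 2] / np.maximum(r, 1e-9), -1, 1))
    phi = np.arctan2(points[:, 1], points[:, 0])

    # Assign each point to the nearest vertex of a lat-long grid.
    ti = np.clip((theta / np.pi * n_theta).astype(int), 0, n_theta - 1)
    pj = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi).astype(int), 0, n_phi - 1)
    cell = ti * n_phi + pj

    n_verts = n_theta * n_phi
    # Robust per-vertex target radius: median distance of assigned points.
    target = np.full(n_verts, np.nan)
    for c in np.unique(cell):
        target[c] = np.median(r[cell == c])

    # Data term: radius ~ target where observed.
    # Smoothness term: each radius ~ mean of its grid neighbours
    # (wrapping around in phi).
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for v in range(n_verts):
        if np.isfinite(target[v]):
            rows.append(eq); cols.append(v); vals.append(1.0)
            rhs.append(target[v]); eq += 1
    for t in range(n_theta):
        for p in range(n_phi):
            v = t * n_phi + p
            nbrs = [t * n_phi + (p + 1) % n_phi,
                    t * n_phi + (p - 1) % n_phi]
            if t > 0:
                nbrs.append((t - 1) * n_phi + p)
            if t < n_theta - 1:
                nbrs.append((t + 1) * n_phi + p)
            rows.append(eq); cols.append(v); vals.append(lambda_smooth)
            for nb in nbrs:
                rows.append(eq); cols.append(nb)
                vals.append(-lambda_smooth / len(nbrs))
            rhs.append(0.0); eq += 1

    # Solve the stacked linear system in the least-squares sense.
    A = np.zeros((eq, n_verts))
    A[rows, cols] = vals
    radii, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return radii.reshape(n_theta, n_phi)
```

Calling fit_spherical_proxy on the sparse reconstruction yields a deformed sphere that follows the dominant scene depth, which is the property that lets a proxy of this kind reduce vertical distortion during image-based rendering. The published method fits its proxy with its own formulation and priors; this sketch only conveys the general shape of such an optimization, with lambda_smooth trading data fidelity against surface smoothness.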