We present novel designs for virtual and augmented reality near-eye displays based on phase-only holographic projection. Our approach is built on the principles of Fresnel holography and double phase amplitude encoding with additional hardware, phase correction factors, and spatial light modulator encodings to achieve full-color, high-contrast, low-noise holograms with high resolution and true per-pixel focal control. We provide a GPU-accelerated implementation of all holographic computation that integrates with the standard graphics pipeline and enables real-time (≥90 Hz) calculation directly or through eye-tracked approximations. A unified focus, aberration correction, and vision correction model, along with a user calibration process, accounts for any optical defects between the light source and retina. We use this optical correction ability not only to fix minor aberrations but to enable truly compact, eyeglasses-like displays with wide fields of view (80°) that would be inaccessible through conventional means. All functionality is evaluated across a series of hardware prototypes; we discuss remaining challenges to incorporate all features into a single device.
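Double phase amplitude encoding, mentioned above, represents each complex value A·e^{iφ} (with A normalized to [0, 1]) as the average of two phase-only components φ ± arccos(A), which are then interleaved on the phase-only spatial light modulator. The following NumPy sketch illustrates the idea with a simple checkerboard interleave; it is a minimal illustration of the general technique, not the paper's specific implementation, and the function name is our own:

```python
import numpy as np

def double_phase_encode(field):
    """Encode a complex field as a phase-only SLM pattern (double phase encoding).

    Each value A*exp(i*phi), with A normalized to [0, 1], is split into two
    phase-only components phi +/- arccos(A), since
    0.5 * (exp(i*(phi+t)) + exp(i*(phi-t))) = cos(t) * exp(i*phi).
    The two phase maps are interleaved in a checkerboard pattern.
    """
    amp = np.abs(field)
    amp = amp / amp.max()          # normalize amplitude to [0, 1]
    phi = np.angle(field)
    theta = np.arccos(amp)         # offset angle encoding the amplitude
    p1, p2 = phi + theta, phi - theta
    yy, xx = np.indices(field.shape)
    checker = (xx + yy) % 2 == 0   # checkerboard interleave of the two maps
    return np.where(checker, p1, p2)
```

Neighboring SLM pixels carrying φ + θ and φ − θ average optically (e.g. through a low-pass filter in the Fourier plane) to reconstruct the target amplitude, which is why the decomposition above suffices with phase-only modulation.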
We present a display device which solves a long-standing problem: to give a true stereoscopic view of simulated objects, without artifacts, to a single unencumbered observer, while allowing the observer to freely change position and head rotation. Based on a novel combination of temporal and spatial multiplexing, this technique will enable artifact-free stereo to become a standard feature of display screens, without requiring the use of special eyewear. The availability of this technology may significantly impact CAD and CHI applications, as well as entertainment graphics. The underlying algorithms and system architecture are described, as well as hardware and software aspects of the implementation.
Near-eye displays using holographic projection are emerging as an exciting display approach for virtual and augmented reality at high resolution without complex optical setups --- shifting optical complexity to computation. While precise phase modulation hardware is becoming available, phase retrieval algorithms are still in their infancy, and holographic display approaches resort to heuristic encoding methods or iterative methods relying on various relaxations. In this work, we depart from such existing approximations and solve the phase retrieval problem for a hologram of a scene at a single depth at a given time by revisiting complex Wirtinger derivatives, also extending our framework to render 3D volumetric scenes. Using Wirtinger derivatives allows us to pose the phase retrieval problem as a quadratic problem which can be minimized with first-order optimization methods. The proposed Wirtinger Holography is flexible and facilitates the use of different loss functions, including learned perceptual losses parametrized by deep neural networks, as well as stochastic optimization methods. We validate this framework by demonstrating holographic reconstructions with an order of magnitude lower error, both in simulation and on an experimental hardware prototype.
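The core idea above, posing phase retrieval as a smooth problem differentiated with Wirtinger calculus and minimized by a first-order method, can be sketched in a few lines. The toy example below optimizes a phase-only pattern so that the amplitude of its (unitary) Fourier transform matches a target image; this simple Fourier propagation model and plain gradient descent stand in for the paper's Fresnel model and richer losses, and the function name and parameters are our own:

```python
import numpy as np

def wirtinger_phase_retrieval(target_amp, steps=300, lr=0.05):
    """Toy phase-only hologram optimization via Wirtinger gradients.

    Minimizes L = sum((|F(exp(i*phi))| - target_amp)^2) over the real
    phase map phi, where F is the unitary 2D FFT (a stand-in propagation
    model). The gradient of L w.r.t. phi is obtained with Wirtinger
    calculus and the adjoint (inverse) FFT.
    """
    rng = np.random.default_rng(0)
    phi = rng.uniform(-np.pi, np.pi, target_amp.shape)  # random init
    for _ in range(steps):
        u = np.exp(1j * phi)                  # phase-only SLM field
        z = np.fft.fft2(u, norm="ortho")      # propagate to image plane
        amp = np.abs(z)
        # Wirtinger derivative dL/d(conj(z)) = (|z| - a) * z / |z|
        dz = (amp - target_amp) * z / (amp + 1e-12)
        g = np.fft.ifft2(dz, norm="ortho")    # pull back through adjoint of F
        # Chain rule to the real phase: dL/dphi = -2 * Im(u * conj(g))
        dphi = -2.0 * np.imag(u * np.conj(g))
        phi -= lr * dphi                      # first-order (gradient) step
    return phi
```

In practice the paper replaces this fixed-step descent with stochastic first-order optimizers and alternative (including learned perceptual) losses, but the Wirtinger-gradient structure of the update is the same.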
We present an electro-optical apparatus capable of displaying a computer generated hologram (CGH) in real time. The CGH is calculated by a supercomputer, read from a fast frame buffer, and transmitted to a high-bandwidth acousto-optic modulator (AOM). Coherent light is modulated by the AOM and optically processed to produce a three-dimensional image with horizontal parallax.
The NYU Media Research Laboratory has developed a single-person, non-invasive, active autostereoscopic display with no mechanically moving parts that provides a realistic stereoscopic image over a large continuous viewing area and range of distance [Perlin]. We believe this to be the first such display in existence. The display uses eye tracking to determine the pitch and placement of a dynamic parallax barrier, but rather than using the even/odd interlace found in other parallax barrier systems, the NYU system uses wide vertical stripes both in the barrier structure and in the interlaced image. The system rapidly cycles through three different positional phases for every frame so that the stripes of the individual phases are not perceived by the user. By this combination of temporal and spatial multiplexing, we are able to deliver full screen resolution to each eye of an observer at any position within an angular volume of 20 degrees horizontally and vertically and over a distance range of 0.3-1.5 meters. We include a discussion of recent hardware and software improvements made in the second generation of the display. Hardware improvements have increased contrast, reduced flicker, improved eye tracking, and allowed the incorporation of OpenGL acceleration. Software improvements have increased frame rate, reduced latency and visual artifacts, and improved the robustness and accuracy of calibration. New directions for research are also discussed.