Every holographic video display is built on a spatial light modulator, which directs light by diffraction to form points in three-dimensional space. The modulators currently used for holographic video displays are challenging to use for several reasons: they suffer from relatively low bandwidth, high cost, low diffraction angle, poor scalability, quantization noise, unwanted diffractive orders and zero-order light. Here we present modulators for holographic video displays based on anisotropic leaky-mode couplers, which have the potential to address all of these challenges. These modulators can be fabricated simply, monolithically and at low cost. Additionally, they enable new functionalities, such as wavelength-division multiplexing for colour display. We demonstrate three enabling properties of particular interest (polarization rotation, enlarged angular diffraction and frequency-domain colour filtering) and suggest that this technology can serve as a platform for low-cost, high-performance holographic video displays.
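As an aside, the angular diffraction and frequency-domain colour filtering mentioned above both follow from the standard acousto-optic grating relation, in which the acoustic frequency sets the grating period. The sketch below illustrates that relation only; the acoustic velocity, the 300 MHz drive frequency and the function name are illustrative assumptions, not parameters of the reported device.

```python
# Minimal sketch: small-angle grating relation theta ~ wavelength * f / v for an
# acousto-optic (leaky-mode) modulator. The SAW velocity is an assumed,
# representative value, not a measured device parameter.

ACOUSTIC_VELOCITY = 3500.0  # m/s, assumed surface-acoustic-wave velocity


def diffraction_angle_rad(wavelength_m: float, acoustic_freq_hz: float,
                          acoustic_velocity_m_s: float = ACOUSTIC_VELOCITY) -> float:
    """First-order diffraction angle in the small-angle approximation:
    theta ~ wavelength / grating_period, with grating_period = v / f."""
    grating_period = acoustic_velocity_m_s / acoustic_freq_hz
    return wavelength_m / grating_period


if __name__ == "__main__":
    # The same acoustic frequency deflects red, green and blue light to different
    # angles, which is why each colour can be addressed by its own frequency band.
    for name, wl in [("red", 633e-9), ("green", 532e-9), ("blue", 445e-9)]:
        theta = diffraction_angle_rad(wl, acoustic_freq_hz=300e6)
        print(f"{name}: {theta * 1e3:.2f} mrad at 300 MHz")
```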
Virtual environments (VEs) allow safe, repeatable, and controlled evaluations of obstacle avoidance and navigation performance of people with visual impairments using visual aids. Proper simulation of mobility in a VE requires an interface that allows subjects to set their walking pace. Using conventional treadmills, subjects can change their walking speed by pushing the tread with their feet while leveraging handrails or ropes (self-propelled mode). We developed a feedback-controlled locomotion interface that allows the VE workstation to control the speed of the treadmill based on the position of the user. The position and speed information is also used to implement automated safety measures, so that the treadmill can be halted in case of erratic behavior. We compared the feedback-controlled mode to the self-propelled mode using speed-matching tasks (following a moving object or matching the speed of an independently moving scene) to measure the efficacy of each mode in maintaining a constant subject position, the subjects' control of the treadmill, and their pulse rates. Additionally, we measured the perception of speed in the VE in each mode. The feedback-controlled mode required less physical exertion than the self-propelled mode. The average position of subjects on the feedback-controlled treadmill was always within a centimeter of the desired position. There was a smaller standard deviation in subject position when using the self-propelled mode than when using the feedback-controlled mode, but the difference averaged less than six centimeters across all subjects walking at a constant speed. Although all subjects underestimated the speed of an independently moving scene at higher speeds, their estimates were more accurate when using the feedback-controlled treadmill than the self-propelled one.
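The core idea of the feedback-controlled mode, adjusting belt speed from the user's measured position and halting on erratic behavior, can be captured in a short control-loop sketch. The gains, limits, set point and the feed-forward use of the scene speed below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a position-based treadmill speed controller with a safety halt.
from dataclasses import dataclass


@dataclass
class TreadmillController:
    target_position_m: float = 0.0   # desired standing position on the belt
    gain: float = 0.8                # proportional gain (assumed value)
    max_speed_m_s: float = 2.0       # belt speed limit (assumed value)
    safety_window_m: float = 0.5     # halt if the user strays this far from the set point

    def update(self, measured_position_m: float, scene_speed_m_s: float) -> float:
        """Return the belt speed command for one control cycle."""
        error = measured_position_m - self.target_position_m
        if abs(error) > self.safety_window_m:
            return 0.0  # automated safety stop on erratic behavior
        # Walking ahead of the set point speeds the belt up; falling behind slows it.
        command = scene_speed_m_s + self.gain * error
        return max(0.0, min(command, self.max_speed_m_s))


controller = TreadmillController()
print(controller.update(measured_position_m=0.05, scene_speed_m_s=1.0))  # slight speed-up
print(controller.update(measured_position_m=0.80, scene_speed_m_s=1.0))  # safety halt -> 0.0
```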
Abstract. We introduce reconfigurable image projection (RIP) holograms and a method for computing RIP holograms of three-dimensional (3-D) scenes. RIP holograms project one or more series of parallax views of a 3-D scene through one or more holographically reconstructed projection surfaces. Projection surfaces are defined at locations at which the hologram reconstructs a variable number of real or virtual images, called holographic primitives, which collectively compose the surface and constitute exit pupils for the view pixel information. RIP holograms are efficiently assembled by combining a sweep of 2-D parallax views of a scene with instances of one or more precomputed diffractive elements, which are permitted to overlap on the hologram, and which reconstruct the holographic primitives. The technique improves on the image quality of conventional stereograms while affording similar efficient computation: it incorporates realistic computer graphic rendering or high-quality optical capture of a scene, it eliminates some artifacts often present in conventional computed stereograms, and its basic multiply-and-accumulate operations are suitable for hardware implementation. The RIP approach offers flexible tuning of capture and projection together, according to the sampling requirements of the scene and the constraints of a given display architecture.
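The multiply-and-accumulate assembly described above reduces, per hologram line, to scaling each precomputed diffractive element by a view-pixel value and summing it into the line at its assigned, possibly overlapping, offset. The following is a minimal sketch of that accumulation only; the array sizes, offsets and random inputs are placeholders, not data from the paper.

```python
# Hedged sketch: multiply-and-accumulate assembly of one RIP hologram line from
# precomputed basis fringes weighted by view-pixel values.
import numpy as np


def assemble_rip_holo_line(basis_fringes: np.ndarray,
                           view_pixels: np.ndarray,
                           offsets: np.ndarray,
                           line_length: int) -> np.ndarray:
    """basis_fringes: (num_primitives, fringe_len) precomputed diffractive elements
    view_pixels:  (num_primitives,) intensities sampled from the parallax views
    offsets:      (num_primitives,) start index of each fringe on the hologram line"""
    holo_line = np.zeros(line_length)
    fringe_len = basis_fringes.shape[1]
    for fringe, pixel, start in zip(basis_fringes, view_pixels, offsets):
        # Overlapping fringes simply sum, so the loop is a multiply-and-accumulate.
        holo_line[start:start + fringe_len] += pixel * fringe
    return holo_line


rng = np.random.default_rng(0)
fringes = rng.standard_normal((4, 256))   # 4 precomputed basis fringes (placeholder data)
pixels = rng.uniform(size=4)              # view-pixel weights (placeholder data)
offsets = np.array([0, 128, 256, 384])    # overlapping placements on the line
line = assemble_rip_holo_line(fringes, pixels, offsets, line_length=1024)
print(line.shape)
```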