Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in 'thin air' that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and 'wrap-around' displays.
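Because the display relies on persistence of vision, the single trapped particle must revisit every image point within one perceptual refresh period. A back-of-the-envelope sketch of the resulting timing budget (the refresh rate and point count below are illustrative assumptions, not figures from the paper):

```python
# Timing budget for a persistence-of-vision scanned-point display.
# Both values are illustrative assumptions, not figures from the paper.
REFRESH_HZ = 10    # assumed minimum refresh rate for persistence of vision
NUM_POINTS = 1000  # assumed number of image points drawn per refresh

# Time the scanned particle can dwell at each image point.
dwell_s = 1.0 / (REFRESH_HZ * NUM_POINTS)
print(f"dwell time per point: {dwell_s * 1e6:.0f} microseconds")
```

Under these assumptions each point gets roughly 100 microseconds of illumination per refresh, which is why scan speed directly limits achievable image complexity.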
Optical trap displays (OTDs) are an emerging display technology with the ability to create full-color images in air. Like all volumetric displays, OTDs lack the ability to show virtual images. However, in this paper we show that it is instead possible to simulate virtual images by employing a time-varying perspective projection backdrop.
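The idea of a perspective projection backdrop can be sketched as a pinhole projection: a virtual point behind the backdrop is replaced by the point where the viewer's line of sight intersects the backdrop plane. The helper below is a hypothetical illustration under that simple model, not the authors' implementation:

```python
import numpy as np

def project_to_backdrop(point, eye, backdrop_z):
    """Project a 3-D point onto the plane z = backdrop_z along the ray
    from the viewer's eye through the point (simple pinhole model).
    Hypothetical helper for illustration only."""
    point = np.asarray(point, dtype=float)
    eye = np.asarray(eye, dtype=float)
    # Parameter t at which the eye->point ray crosses the backdrop plane.
    t = (backdrop_z - eye[2]) / (point[2] - eye[2])
    return eye + t * (point - eye)

# A virtual point 1 m behind a backdrop at z = 0, viewed from z = 2 m:
p = project_to_backdrop([0.1, 0.0, -1.0], eye=[0.0, 0.0, 2.0], backdrop_z=0.0)
```

Recomputing this projection as the viewer's eye position changes is what makes the backdrop time-varying: the drawn point moves so that, from the tracked viewpoint, it appears to lie behind the display volume.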
The binary fluidization of Geldart-D non-spherical wood particles and spherical LDPE particles was investigated in a laboratory-scale bed. The experiments covered varying static bed height, wood particle count, and superficial gas velocity. The LDPE velocity field was quantified using Particle Image Velocimetry (PIV), and the wood particle orientations and velocities were measured using Particle Tracking Velocimetry (PTV). A machine-learning pixel-wise classification model was trained and applied to obtain wood and LDPE particle masks for PTV and PIV processing, respectively. The results show significant differences in fluidization behavior between the LDPE-only and binary fluidization cases. The effects of the wood particles on the slugging frequency, on the mean and variation of bed height, and on the characteristics of the particle velocities and orientations were quantified and compared. This comprehensive experimental dataset serves as a benchmark for validating numerical models.
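As a toy stand-in for the pixel-wise classification step, one could imagine labelling each pixel of a grayscale frame by intensity thresholds to produce per-species masks. The real pipeline uses a trained machine-learning model; the function name and every threshold below are assumptions for illustration:

```python
import numpy as np

def segment_frame(frame, wood_thresh=0.7, ldpe_thresh=0.3):
    """Toy pixel-wise segmentation: label pixels as wood or LDPE by
    intensity thresholds. Stand-in for a trained classifier; all
    thresholds are assumed, not from the study."""
    wood_mask = frame > wood_thresh
    ldpe_mask = (frame > ldpe_thresh) & ~wood_mask
    return wood_mask, ldpe_mask

frame = np.array([[0.1, 0.5],
                  [0.8, 0.2]])
wood, ldpe = segment_frame(frame)  # boolean masks fed to PTV / PIV
```

The point of the masks is the same in either case: PTV only tracks pixels labelled wood, and PIV only correlates pixels labelled LDPE, so misclassification propagates directly into the velocity statistics.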
The distribution of noise in coded aperture images is known to depend in a complex manner upon the encoding technique, the decoding technique and the object distribution. We have examined the S/N characteristics of a class of planar, pseudorandom, time-modulated coded apertures in order to optimize the aperture design for a defined object distribution. The relative standard deviation (RSD) in the reconstructed image is studied both theoretically and by computer simulation. Results are shown for uniform, planar source distributions of varying size as a function of mean code-plate transmission and aperture hole spacing. In each case, the effects of solid angle and finite geometry are taken into account. For simplicity, image reconstruction is accomplished by backprojection. For a source size equal to 20% of a full-field flood, a code of 12% mean transmission gives a near-optimum S/N. The RSD with this code for an on-axis image element is 0.33 of that for a single scanning pinhole covering an identical field of view. Even for a 100% field flood an optimum code exists, with a mean transmission of nearly 4%. The RSD in this case is smaller than that of the scanning pinhole by a factor of 0.85.
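Backprojection decoding of a coded aperture can be sketched in one dimension: each open hole casts a shifted copy of the source onto the detector, and the decoder shifts the detector back through every hole and sums. The hole pattern, array sizes, and circular (wrap-around) geometry below are illustrative assumptions, not the apertures analysed above:

```python
import numpy as np

# 1-D sketch of coded-aperture encoding and backprojection decoding.
# Pattern, sizes, and circular geometry are illustrative assumptions.
N = 64
holes = np.array([2, 11, 23, 37, 41, 50, 58, 60])  # 8/64 = 12.5% transmission

source = np.zeros(N)
source[30:34] = 1.0  # small planar source, 4 bright elements

# Encoding: every open hole casts a shifted copy of the source on the detector.
detector = sum(np.roll(source, k) for k in holes)

# Decoding by backprojection: shift the detector back through each hole and sum.
recon = sum(np.roll(detector, -k) for k in holes)
```

Because backprojection correlates the detector with the hole pattern itself, the reconstruction is the source convolved with the pattern's autocorrelation; the pseudorandom patterns studied above are chosen to keep that autocorrelation's sidelobes, and hence the image noise, low.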