To achieve an immersive, natural 3D experience on a large screen, a 300-Mpixel multi-projection 3D display with a 100-inch screen and a 40° viewing angle has been developed. Three hundred projectors were used to increase the number of rays emanating from each pixel to 300 in the horizontal direction. Because the projector configuration is critical to generating a high-quality 3D image, the luminance characteristics were analyzed and the design was optimized to minimize variation in the brightness of the projected images. The rows of the projector arrays were changed repeatedly at a predetermined row interval, and the projectors were arranged at an equi-angular pitch toward a constant central point. As a result, we obtained very smooth motion-parallax images without discontinuity. There is no limit on the viewing distance, so natural 3D images can be viewed from 2 m to more than 20 m.
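The equi-angular arrangement described above can be illustrated with a short geometric sketch. The snippet below places N projectors on an arc so that every optical axis converges on one central point; the 300 projectors and 40° viewing angle come from the abstract, while the 5 m radius and the four-row cycling interval are assumed purely for illustration.

```python
import math

# Illustrative sketch (not the authors' implementation): place N projectors at an
# equi-angular pitch so that their optical axes all converge on one central point.
# The projector count and viewing angle come from the abstract; the radius and
# row interval below are assumed values for illustration only.
N_PROJECTORS = 300
VIEWING_ANGLE_DEG = 40.0
RADIUS_M = 5.0            # assumed distance from the projector arc to the central point
ROW_INTERVAL = 4          # assumed number of rows the projector arrays cycle through

def projector_layout(n=N_PROJECTORS, fov_deg=VIEWING_ANGLE_DEG,
                     radius=RADIUS_M, rows=ROW_INTERVAL):
    """Return (x, z, row) for each projector on an arc centred on the screen."""
    pitch = fov_deg / (n - 1)             # equi-angular pitch between neighbours
    layout = []
    for i in range(n):
        theta = math.radians(-fov_deg / 2 + i * pitch)
        x = radius * math.sin(theta)      # horizontal offset from the screen normal
        z = radius * math.cos(theta)      # distance along the screen normal
        layout.append((x, z, i % rows))   # rows repeat at a fixed interval
    return layout

if __name__ == "__main__":
    for x, z, row in projector_layout()[:3]:
        print(f"x={x:+.3f} m  z={z:.3f} m  row={row}")
```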
In this paper, we present an efficient Computer Generated Integral Imaging (CGII) method, called multiple ray cluster rendering (MRCR). Based on MRCR, an interactive integral imaging system is realized that provides accurate 3D images adapted to changing observer positions in real time. The MRCR method generates all elemental image pixels in a single rendering pass through ray reorganization of multiple ray clusters and 3D content duplication. It is compatible with various graphic contents, including meshes, point clouds, and medical data. Moreover, a multi-sampling method is embedded in MRCR to produce anti-aliased 3D images. To the best of our knowledge, the MRCR method outperforms existing CGII methods in both speed and display quality. Experimental results show that the proposed CGII method achieves real-time computational speed for large-scale 3D data with about 50,000 points.
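As a rough illustration of the ray geometry that CGII methods such as MRCR reorganize, the sketch below enumerates, for a simple pinhole lens-array model, the ray through each elemental-image pixel and the centre of its own lenslet. It is not the authors' single-pass MRCR pipeline; the lens pitch, gap, and resolutions are assumed values chosen for illustration.

```python
import numpy as np

# Illustrative sketch of the basic elemental-image ray geometry behind CGII
# (pinhole lens-array model), not the authors' single-pass MRCR pipeline.
# All geometry constants below are assumed values.
LENS_PITCH = 1.0          # mm, pitch of the lens array (assumed)
GAP = 3.0                 # mm, gap between display panel and lens array (assumed)
PIXELS_PER_LENS = 16      # elemental-image resolution per lens (assumed)
NUM_LENSES = 32           # lenses along one axis (assumed)

def elemental_image_rays():
    """For every elemental-image pixel, return its origin and the ray direction
    through the centre of its own lenslet (1-D cross-section for brevity)."""
    pixel_pitch = LENS_PITCH / PIXELS_PER_LENS
    origins, directions = [], []
    for lens in range(NUM_LENSES):
        lens_x = (lens - NUM_LENSES / 2 + 0.5) * LENS_PITCH
        for p in range(PIXELS_PER_LENS):
            pix_x = lens_x + (p - PIXELS_PER_LENS / 2 + 0.5) * pixel_pitch
            origin = np.array([pix_x, 0.0])            # pixel on the panel plane
            through = np.array([lens_x, GAP])          # its lenslet centre
            d = through - origin
            directions.append(d / np.linalg.norm(d))   # normalised ray direction
            origins.append(origin)
    return np.array(origins), np.array(directions)

origins, directions = elemental_image_rays()
print(origins.shape, directions.shape)   # (512, 2) (512, 2)
```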
In this paper, we present an autostereoscopic 3D display that uses a directional subpixel rendering algorithm in which clear left and right images are rendered in real time based on the viewer's 3D eye positions. To maintain 3D image quality over a wide viewing range, we designed an optical layer that generates a uniformly distributed light field. The proposed 3D rendering method is simple, and each pixel can be processed independently in parallel computing environments. To demonstrate the effectiveness of our display system, we implemented a 31.5-inch 3D monitor and a 10.1-inch 3D tablet prototype, in which the 3D rendering is processed on a GPU and an FPGA board, respectively.
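A minimal sketch of eye-position-driven subpixel assignment is given below, assuming a lenticular-style optical layer with made-up pitch, gap, and panel dimensions; it is not the authors' rendering algorithm, but it shows why each subpixel can be handled independently and therefore in parallel.

```python
import numpy as np

# Illustrative sketch of eye-position-based subpixel assignment for an
# autostereoscopic display (lenticular/parallax-barrier style), not the
# authors' exact rendering algorithm. All geometry constants are assumed.
PANEL_WIDTH_MM = 697.0      # assumed active width of a 31.5-inch panel
NUM_SUBPIXELS = 3840 * 3    # assumed 4K panel, RGB subpixels along one row
LENS_PITCH_MM = 0.54        # assumed pitch of the optical layer
GAP_MM = 1.2                # assumed panel-to-lens gap

def assign_views(left_eye, right_eye):
    """Return a 0/1 view index per subpixel (0 = left image, 1 = right image).

    Each subpixel is handled independently, so this loop maps directly onto
    a per-pixel GPU/FPGA kernel, as the abstract describes.
    """
    x = (np.arange(NUM_SUBPIXELS) + 0.5) / NUM_SUBPIXELS * PANEL_WIDTH_MM
    lens_x = (np.floor(x / LENS_PITCH_MM) + 0.5) * LENS_PITCH_MM
    # Direction (slope) of the ray leaving each subpixel through its lens element.
    slope = (lens_x - x) / GAP_MM
    # Distance between where that ray lands on the eye plane and each eye position.
    def miss(eye):
        ex, ez = eye            # lateral position and distance of the eye, in mm
        return np.abs(x + slope * ez - ex)
    return (miss(right_eye) < miss(left_eye)).astype(np.uint8)

view_map = assign_views(left_eye=(320.0, 600.0), right_eye=(385.0, 600.0))
print(view_map[:12])
```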
We explore the feasibility of implementing stereoscopy-based 3D images with an eye-tracking-based light-field display and actual head-up display optics for automotive applications. We translate the driver’s eye position into the virtual eyebox plane via a “light-weight” equation to replace the actual optics with an effective lens model, and we implement a light-field rendering algorithm using the model-processed eye-tracking data. Furthermore, our experimental results with a prototype closely match our ray-tracing simulations in terms of designed viewing conditions and low-crosstalk margin width. The prototype successfully delivers virtual images with a field of view of 10° × 5° and static crosstalk of <1.5%.
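A rough sketch of the effective-lens idea follows, using a single thin lens with an assumed focal length and eye distance to map a tracked eye position onto a conjugate (virtual eyebox) plane. This is only an illustration of the modelling step under those assumptions, not the paper's actual equation or optics.

```python
# Illustrative sketch of mapping a tracked eye position into a virtual eyebox
# plane with a single effective thin lens, in the spirit of replacing the HUD
# optics with an effective lens model. Focal length and distance are assumed.
FOCAL_LENGTH_MM = 250.0      # assumed effective focal length of the HUD optics
EYE_TO_LENS_MM = 800.0       # assumed distance from the driver's eye to the lens

def to_virtual_eyebox(eye_x_mm, eye_y_mm,
                      f=FOCAL_LENGTH_MM, d=EYE_TO_LENS_MM):
    """Map a lateral eye position to the conjugate (virtual eyebox) plane.

    Uses the thin-lens relation 1/d + 1/d' = 1/f and lateral magnification
    m = -d'/d as a stand-in for the model-processed eye-tracking step.
    """
    d_image = 1.0 / (1.0 / f - 1.0 / d)   # conjugate distance d'
    m = -d_image / d                      # lateral magnification
    return eye_x_mm * m, eye_y_mm * m, d_image

vx, vy, dz = to_virtual_eyebox(30.0, -10.0)
print(f"virtual eyebox position: ({vx:.1f}, {vy:.1f}) mm at d'={dz:.1f} mm")
```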