In this paper, we propose a new approach to improve the quality of microimages and display them on an integral-imaging monitor. Our proposal is based on a stereo-hybrid 3D camera system. By its nature, a hybrid camera system combines sensors with dissimilar characteristics. We describe our method for equalizing the characteristics of the hybrid sensors, together with our strategy for modifying the 3D data. We generate the integral image by means of a synthetic back-projection mapping method. Finally, we project the integral image onto our proposed display system. We illustrate this procedure with imaging experiments that demonstrate the advantages of our approach.

Key words: 3D display, Integral Imaging, 3D data registration, Color transfer, Point clouds, Hybrid 3D cameras

Conventional photography is fully adapted to recording a three-dimensional (3D) scene onto a two-dimensional (2D) sensing device. Although 2D images can reflect the 3D nature of scenes, they still lack important information. Fortunately, there are techniques that are able to record 3D information from 3D scenes. Among them, the integral-imaging (InI) technique is considered one of the most promising technologies for recording and displaying real-world scenes. The main procedure of InI is performed by placing a microlens array in front of a 2D image sensor. This lens array records different perspectives of the 3D scene, because light reflected from an object reaches all the lenses, and each lens distributes the light onto different pixels of the 2D sensor depending on its angle of incidence. Here, we refer to the image recorded by each microlens as a microimage, and to the whole array of microimages as the integral image. When the integral image is projected onto an InI display system, observers see a floating 3D scene with full parallax and quasi-continuous perspective views [1][2][3]. Many researchers and companies have applied the InI technique in many different fields [4][5][6][7][8][9][10][11][12].

In the meantime, many kinds of depth-sensing techniques have been developed to record 3D scenes [13][14][15][16]. Among them, infrared (IR) light sensing has been widely used during the last decades. In particular, the Kinect device from Microsoft uses IR lighting technology for depth acquisition. To date, two versions of the Kinect have been released, and it is well known that the two sensors obtain depth maps in quite different ways. The Kinect v1 uses a structured IR light pattern emitter and an IR camera to measure depth from features captured in the scene [13][14]. In comparison, the Kinect v2 utilizes time-of-flight (ToF) technology, which consists of emitting IR flashes at high frequency. The IR light is reflected by most 3D surfaces and detected by the depth sensor of the Kinect v2 device. The depth is measured from the light's round-trip travel time [15][16].
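As a rough illustration of the ToF principle (not of the Kinect v2's internal processing, which in practice relies on phase measurements of modulated IR light), the depth follows from half the round-trip travel time, d = c·Δt/2. The following minimal Python sketch, with a hypothetical tof_depth helper, shows this conversion:

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_time_s):
    """Depth from a time-of-flight measurement.

    The emitted IR flash travels to the surface and back, so the
    one-way distance is half the round-trip path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * np.asarray(round_trip_time_s) / 2.0

# Example: a round trip of ~6.67 ns corresponds to a surface about 1 m away.
print(tof_depth(6.67e-9))  # ~1.0 (meters)
```

This also highlights the contrast with the Kinect v1, whose structured-light approach infers depth from the geometric displacement of projected IR pattern features rather than from travel time.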