We propose to combine the Kinect and Integral-Imaging technologies for the implementation of an Integral Display. The Kinect device permits the determination, in real time, of the (x, y, z) position of the observer relative to the monitor. Because its IR technology is active, the Kinect provides the observer's position even in dark environments. On the other hand, the SPOC 2.0 algorithm permits the calculation of microimages adapted to the observer's 3D position. The smart combination of these two concepts permits the implementation, we believe for the first time, of an Integral Display that provides the observer with color 3D images of real scenes, viewed with full parallax and adapted dynamically to the observer's 3D position.

In InI, a 3D image is reconstructed using an InI monitor, which is usually composed of a high-resolution display and a microlens array. The 3D information can be displayed in the form of elemental images or microimages [5], but in this work we consider only microimages. In order to display a good 3D image, each microimage needs to fit under one microlens. Finally, the light coming from the displayed microimages is integrated in 3D space, reconstructing the 3D scene (see Figure 1).

The position of an observer with respect to the InI monitor is important because the area within which the displayed 3D image can be observed correctly is limited. The viewing area and viewing angle of the InI monitor are mainly determined by the lens size, the number of lenses, and the distance between the pixels and the lenses [10] (see Figure 2). An observer looking at the InI monitor from inside the viewing area sees a 3D reconstructed image with full parallax. If, however, the observer looks at the monitor from outside the viewing area, light coming from pixels of incorrect microimages reaches the eye, producing the undesired flipping effect, in which the observer sees an image with artifacts.
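To make this geometry concrete, the sketch below estimates the per-lens viewing angle from the lens pitch and the pixel-to-lens gap, and tests whether an observer at a tracked (x, y, z) position lies inside the flip-free viewing zone. This is a minimal sketch under a pinhole-lens model: the function names, the millimeter units, and the pyramidal shape assumed for the zone are our illustrative assumptions, not part of the system described above.

```python
from math import atan, degrees

def viewing_angle_deg(pitch: float, gap: float) -> float:
    # Pinhole estimate of the per-lens viewing cone:
    # psi = 2 * arctan(p / (2 g)), with p the lens pitch and
    # g the distance between the pixels and the lenses.
    return degrees(2 * atan(pitch / (2 * gap)))

def in_viewing_zone(x: float, y: float, z: float,
                    pitch: float, gap: float,
                    nx: int, ny: int) -> bool:
    """True if an observer at (x, y, z) mm (origin at the array
    center, z along the monitor normal) sees every microimage
    through its own microlens, i.e. without flipping."""
    tan_half = pitch / (2 * gap)   # slope of each lens's cone
    half_w = nx * pitch / 2        # half-width of the lens array
    half_h = ny * pitch / 2        # half-height of the lens array
    # The flip-free zone is the intersection of the cones of the
    # outermost lenses: the lateral margin z * tan_half must cover
    # the observer's offset from both edges of the array.
    return (z > 0
            and abs(x) <= z * tan_half - half_w
            and abs(y) <= z * tan_half - half_h)

# Example: 1 mm pitch, 3 mm gap, 100 x 100 lenses (arbitrary values).
print(viewing_angle_deg(1.0, 3.0))                    # ~18.9 degrees
print(in_viewing_zone(0, 0, 500, 1.0, 3.0, 100, 100)) # True
```

Note that under this model the zone only opens up beyond a minimum distance z >= half_w / tan_half from the monitor, consistent with the limited viewing area described above.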
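The closed loop between tracking and rendering can then be sketched as follows, reusing in_viewing_zone from the sketch above. Here read_head_position stands in for whatever Kinect skeleton-tracking call the system uses and render_microimages stands in for the SPOC 2.0 computation; both are hypothetical placeholders, and the 5 mm update threshold and 30 Hz polling rate are arbitrary illustrative choices.

```python
import time

def track_and_render(read_head_position, render_microimages,
                     pitch, gap, nx, ny, threshold_mm=5.0):
    # Poll the tracker at roughly the Kinect frame rate and rerun
    # the microimage computation only when the observer has moved.
    last = None
    while True:
        time.sleep(1 / 30)                 # ~30 Hz polling
        x, y, z = read_head_position()     # hypothetical Kinect call
        if not in_viewing_zone(x, y, z, pitch, gap, nx, ny):
            continue                       # observer would see flipping
        if last is None or max(abs(a - b)
                               for a, b in zip((x, y, z), last)) > threshold_mm:
            render_microimages(x, y, z)    # hypothetical SPOC 2.0 call
            last = (x, y, z)
```

The movement threshold avoids recomputing the microimages for tracker jitter, so SPOC 2.0 runs only when the viewpoint change is large enough to matter.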