Augmented Reality (AR) applications demand realistic rendering of virtual content in a variety of environments, and therefore require an accurate description of the 3-D scene. In most cases, AR systems are equipped with Time-of-Flight (ToF) cameras that provide real-time scene depth maps, but these cameras suffer from defects that degrade the quality of the depth data and ultimately make it difficult to use for AR. Such defects arise from poor lighting and from specular or fine-grained object surfaces. As a result, object boundaries appear dilated, and overlapping objects become impossible to distinguish from one another. This article presents an approach based on a modified similar-block search algorithm that uses the concept of the anisotropic gradient. The proposed modified exemplar block-based algorithm uses an autoencoder-learned local image descriptor for inpainting: an encoding network extracts image features, and a decoding network reconstructs the depth image. The encoder consists of a convolutional layer followed by a dense block, which is itself composed of convolutional layers. We also demonstrate an application of the proposed vision system that uses depth inpainting for virtual content reconstruction in augmented reality. Analysis of the results of the study shows that the proposed method correctly restores object boundaries in the depth map. Our system quantitatively outperforms state-of-the-art methods in reconstruction accuracy on real and simulated benchmark datasets.
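The abstract describes the encoder only at a high level (a convolutional layer followed by a dense block of convolutional layers, paired with a decoding network). The sketch below is a minimal illustration of that structure, assuming PyTorch; the channel widths, growth rate, number of layers, and class names are illustrative assumptions and are not specified by the article.

```python
# Minimal sketch of the encoder/decoder structure named in the abstract (PyTorch assumed).
# All layer counts and channel widths are illustrative assumptions, not the authors' values.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block: each conv layer receives the concatenation of all earlier outputs."""
    def __init__(self, in_channels, growth_rate=16, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class DepthInpaintingAutoencoder(nn.Module):
    """Encoder (conv layer + dense block) learns a local descriptor;
    a decoding network reconstructs the depth image from it."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, kernel_size=3, padding=1)   # initial convolutional layer
        self.dense = DenseBlock(32)                               # dense block of conv layers
        self.decoder = nn.Sequential(                             # decoding network
            nn.Conv2d(self.dense.out_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),           # restored depth patch
        )

    def forward(self, depth_patch):
        descriptor = self.dense(self.stem(depth_patch))
        return self.decoder(descriptor)

# Example: run a 64x64 single-channel depth patch through the autoencoder.
model = DepthInpaintingAutoencoder()
restored = model(torch.zeros(1, 1, 64, 64))
```

In the exemplar block-based setting described above, such a learned descriptor would be used to compare candidate source blocks when filling missing depth regions, rather than relying on raw pixel differences.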