This paper presents an overview of our research project on the digital preservation of cultural heritage objects and the digital restoration of their original appearance. As an example, the project focuses on the preservation and restoration of the Great Buddhas. These are relatively large objects located outdoors and they present various technical challenges. Geometric models of the Great Buddhas are digitally archived through a pipeline consisting of acquiring data, aligning multiple range images, and merging these images. We have developed two alignment algorithms: a rapid simultaneous algorithm, based on graphics hardware, for quick data checking on site, and a parallel alignment algorithm, based on a PC cluster, for precise adjustment at the university. We have also designed a parallel voxel-based merging algorithm for connecting all aligned range images. Texture images acquired from color cameras are then aligned onto the created geometric models, for which we developed two texture mapping methods. In an attempt to restore the original appearance of historical objects, we have synthesized several buildings and statues using scanned data and a literature survey with advice from experts.
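To make the alignment step of such a pipeline concrete, the sketch below shows a minimal point-to-point ICP alignment of two point sets using only NumPy. It is an illustrative stand-in under simplifying assumptions (brute-force nearest neighbours, two views only); the paper's GPU-based simultaneous algorithm and PC-cluster parallel algorithm are not reproduced here.

```python
# Minimal point-to-point ICP sketch (NumPy only) for aligning one scan to another.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=30):
    """Iteratively align point set src to dst using nearest-neighbour pairs."""
    src = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of src
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, nn)
        src = src @ R.T + t
    return src

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.random((200, 3))
    a = 0.1
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
    scan = model @ Rz.T + np.array([0.05, -0.02, 0.01])
    aligned = icp(scan, model)
    print("mean residual:", np.linalg.norm(aligned - model, axis=1).mean())
```

In a real multi-view setting, pairwise alignments like this are refined simultaneously over all range images, which is what the paper's rapid and parallel algorithms address.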
Real-time occlusion handling is a major problem in outdoor mixed reality systems because it requires great computational cost, mainly due to the complexity of the scene. Using only segmentation, it is difficult to accurately render a virtual object occluded by complex objects such as trees and bushes. In this paper, we propose a novel occlusion handling method for real-time, outdoor, omnidirectional mixed reality systems using only the information from a monocular image sequence. We first present a semantic segmentation scheme for predicting the amount of visibility for different types of objects in the scene. We also simultaneously calculate a foreground probability map using depth estimation derived from optical flow. Finally, we combine the segmentation result and the probability map to render the computer-generated object and the real scene using a visibility-based rendering method. Our results show a great improvement in handling occlusions compared to existing blending-based methods.
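The following sketch illustrates the general idea of combining a per-class visibility prior with a foreground probability map into a per-pixel blend. All names and values (the `CLASS_VISIBILITY` table, the `composite` function) are hypothetical; the paper's semantic segmentation and optical-flow depth estimation stages are assumed to have already produced the inputs.

```python
# Minimal per-pixel compositing sketch, assuming segmentation labels and a
# foreground probability map are already available for the current frame.
import numpy as np

# Hypothetical visibility prior: how transparent each semantic class is with
# respect to the virtual object (1.0 = never occludes, 0.0 = always occludes).
CLASS_VISIBILITY = {0: 1.0,   # sky
                    1: 0.2,   # tree / bush (mostly occludes)
                    2: 0.0}   # building (fully occludes)

def composite(real_rgb, virtual_rgb, virtual_mask, seg_labels, fg_prob):
    """Blend the rendered virtual object into the real frame.

    real_rgb, virtual_rgb : HxWx3 float arrays in [0, 1]
    virtual_mask          : HxW bool, where the virtual object is rendered
    seg_labels            : HxW int, per-pixel semantic class (keys of the table)
    fg_prob               : HxW float, probability the real pixel lies in front
                            of the virtual object
    """
    vis_prior = np.vectorize(CLASS_VISIBILITY.get)(seg_labels).astype(float)
    # The virtual object is visible where the real pixel is unlikely to be
    # foreground and its semantic class rarely occludes.
    alpha = vis_prior * (1.0 - fg_prob)
    alpha = np.where(virtual_mask, alpha, 0.0)[..., None]
    return alpha * virtual_rgb + (1.0 - alpha) * real_rgb
```

In practice the visibility weights would come from the learned segmentation model rather than a fixed table, and the blend would run per frame on the omnidirectional image.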
Recent advances in sensing and software technologies enable us to obtain large-scale yet fine 3D mesh models of cultural assets. However, such large models cannot be displayed interactively on consumer computers because of hardware performance limitations. Cloud computing technology is a solution that can process very large amounts of information without adding to each client user's processing cost. In this paper, we propose an interactive rendering system, based on the cloud computing concept, for large 3D mesh models stored in a remote environment and accessed through a network by relatively small-capacity client machines. Our system uses both model- and image-based rendering methods for efficient load balancing between a server and clients. On the server, the 3D models are rendered by the model-based method using a hierarchical data structure with Level of Detail (LOD). On the client, an arbitrary view is constructed by a novel image-based method, referred to as the Grid-Lumigraph, which blends colors from sample images received from the server. The resulting system can efficiently render any view in real time. We implemented the system and evaluated its rendering and data transfer performance.
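As a rough illustration of the client-side idea, the sketch below blends a novel view from a few server-rendered sample images, weighting each by the proximity of its camera to the requested viewpoint. This is a deliberately simplified stand-in; the paper's Grid-Lumigraph resampling and the server's LOD pipeline are not reproduced.

```python
# Simplified image-based view blending: interpolate a novel view from
# pre-rendered sample images by camera proximity (illustrative only).
import numpy as np

def blend_views(sample_images, sample_cams, novel_cam):
    """Weighted blend of server-rendered images for a client viewpoint.

    sample_images : list of HxWx3 float arrays received from the server
    sample_cams   : Nx3 array of sample camera positions
    novel_cam     : (3,) desired client camera position
    """
    d = np.linalg.norm(sample_cams - novel_cam, axis=1)
    w = 1.0 / (d + 1e-6)              # closer sample views get larger weight
    w /= w.sum()
    out = np.zeros_like(sample_images[0])
    for img, wi in zip(sample_images, w):
        out += wi * img
    return out
```

A full Lumigraph-style method would also account for ray directions and scene geometry when choosing blending weights, rather than camera position alone.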