Many 3D scenes (e.g. generated from CAD data) are composed of a multitude of objects that are nested in each other. A showroom, for instance, may contain multiple cars, and every car has a gearbox with many gearwheels located inside. Because the objects occlude each other, only a few are visible from the outside. We present a new technique, Spherical Visibility Sampling (SVS), for real-time 3D rendering of such scenes, which may be highly complex. SVS exploits this occlusion by annotating hierarchically structured objects with directional visibility information in a preprocessing step. For different directions, the directional visibility encodes which objects of a scene's region are visible from outside the region's enclosing bounding sphere. Since there is no need to store a separate view-space subdivision, as in most techniques based on preprocessed visibility, a small memory footprint is achieved. Using the directional visibility information during an interactive walkthrough, the potentially visible objects can be retrieved very efficiently without further visibility tests. Our evaluation shows that SVS allows complex 3D scenes to be preprocessed quickly and visualized in real time (e.g. a Power Plant model and five animated Boeing 777 models with billions of triangles). Because SVS does not require hardware support for occlusion culling during rendering, it is even applicable to rendering large scenes on mobile devices.
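To illustrate the core idea of a per-region directional visibility lookup, the following is a minimal C++ sketch. It is an assumption of how such a structure could be organized, not the paper's actual data layout or API: the names `RegionNode` and `queryVisible`, the flat list of sample directions, and the nearest-direction selection are all hypothetical simplifications of the preprocessed directional visibility described above.

```cpp
// Hypothetical sketch of a directional visibility lookup (names and layout are
// assumptions for illustration, not the data structures from the paper).
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

inline Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// One region of the scene hierarchy: a bounding sphere whose surface is sampled
// in a fixed set of directions; each sample stores the objects visible from
// outside the sphere along that direction, as determined in preprocessing.
struct RegionNode {
    Vec3 center;
    float radius;
    std::vector<Vec3> sampleDirections;            // unit vectors on the sphere
    std::vector<std::vector<int>> visibleObjects;  // per-direction object ids
};

// At runtime no further visibility tests are needed: derive the view direction
// from the camera position relative to the sphere center and return the
// precomputed visible set of the closest sampled direction.
const std::vector<int>& queryVisible(const RegionNode& node, const Vec3& cameraPos) {
    Vec3 viewDir = normalize(cameraPos - node.center);
    std::size_t best = 0;
    float bestDot = -2.0f;
    for (std::size_t i = 0; i < node.sampleDirections.size(); ++i) {
        float d = dot(viewDir, node.sampleDirections[i]);
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return node.visibleObjects[best];
}
```

In this sketch the runtime cost of a query is a small linear scan over the sampled directions of a region, which reflects why no per-frame occlusion queries or hardware culling support would be required during rendering.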