The object-oriented Visualization Toolkit (VTK) is widely used for scientific visualization. VTK provides a large number of functions for presenting three-dimensional data. Interaction with the visualized data is controlled with two-dimensional input devices, such as a mouse and keyboard; support for true three-dimensional and multimodal input is non-existent. This paper describes VR-VTK: a multimodal interface to VTK in a virtual environment. Six-degree-of-freedom input devices are used for spatial 3D interaction; they control the 3D widgets that are used to interact with the visualized data. Head tracking is used for camera control, pedals are used for clutching, and speech input is used for application commands and system control. To address several problems specific to spatial 3D interaction, a number of additional features, such as more complex interaction methods and enhanced depth perception, are discussed. Furthermore, the need for multimodal input to support interaction with the visualization is shown. Two existing VTK applications are ported using VR-VTK to run in a desktop virtual reality system, and informal user experiences are presented.
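The head-tracked camera control mentioned above can be illustrated with a minimal sketch: each tracker update supplies a new eye position, from which a camera orientation basis is rebuilt while the focal point stays fixed on the data. This is an illustrative reconstruction only, not VR-VTK's actual code; the `look_at` helper and its coordinate conventions are assumptions.

```python
import math

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build an orthonormal camera basis (forward, right, up) from a
    head-tracked eye position. A minimal sketch: real head-tracked
    rendering would also apply an off-axis projection per frame."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    forward = norm(sub(target, eye))        # viewing direction
    right = norm(cross(forward, up))        # screen-space x axis
    true_up = cross(right, forward)         # re-orthogonalized up
    return forward, right, true_up

# Each tracker update moves the eye; the focal point stays on the data.
fwd, right, up = look_at(eye=(0.0, -2.0, 0.5), target=(0.0, 0.0, 0.0))
```

In a VTK-based application the resulting basis would be pushed into the renderer's active camera each frame, so that the view follows the user's head rather than mouse-driven trackball rotation.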
Ray tracing algorithms that sample both the light received directly from light sources and the light received indirectly by diffuse reflection from other patches can accurately render the global illumination in a scene and can display complex scenes with accurate shadowing. A drawback of these algorithms, however, is the high cost of sampling the direct light, which is done by shadow-ray testing. Although several strategies are available to reduce the number of shadow rays, a large number of rays is still needed, in particular to sample large area light sources. An adaptive sampling strategy is proposed that reduces the number of shadow rays by using statistical information from the sampling process and by applying information from a radiosity preprocessing step. A further reduction in shadow rays is obtained by exploiting shadow-pattern coherence, i.e., reusing the adaptive sampling pattern for neighboring sampling points.
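The statistical stopping idea behind adaptive shadow sampling can be sketched as follows: shadow rays toward an area light are cast in small batches, and sampling stops early once all samples agree (the point is fully lit or fully shadowed) or once the standard error of the running visibility estimate is small. This is a sketch of the general statistical idea only, not the paper's algorithm; the `visible(u, v)` shadow-ray test, the batch size, and the error threshold are hypothetical.

```python
import math
import random

def estimate_visibility(visible, max_samples=64, batch=8,
                        max_stderr=0.05, rng=None):
    """Adaptively estimate the fraction of an area light visible from a
    shading point. `visible(u, v)` is a hypothetical shadow-ray test over
    the light's parameter square [0,1)^2. Sampling stops early when all
    samples agree or the standard error drops below `max_stderr`."""
    rng = rng or random.Random(0)
    hits, n = 0, 0
    while n < max_samples:
        for _ in range(batch):          # cast one batch of shadow rays
            if visible(rng.random(), rng.random()):
                hits += 1
            n += 1
        p = hits / n                    # running visibility estimate
        if p in (0.0, 1.0):             # all rays agree: stop early
            break
        stderr = math.sqrt(p * (1 - p) / n)
        if stderr < max_stderr:         # estimate is precise enough
            break
    return hits / n, n

# Half-occluded light: the estimate converges toward ~0.5.
frac, rays = estimate_visibility(lambda u, v: u < 0.5)
# Fully lit point: sampling terminates after the first batch.
fully_lit, rays_lit = estimate_visibility(lambda u, v: True)
```

Shadow-pattern coherence, the further optimization the abstract mentions, would additionally reuse the sample pattern (and its early-termination decision) for neighboring shading points instead of starting each estimate from scratch.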
We demonstrate basic 2D and 3D interactions in both a Virtual Reality (VR) system, called the Personal Space Station, and an Augmented Reality (AR) system, called the Visual Interaction Platform. Since both platforms use identical (optical) tracking hardware and software, and can run identical applications, users can directly compare how the two systems present information to them (as VR or AR). Since the systems use state-of-the-art tracking technology, users can also experience the opportunities and limitations of such technology at first hand. This hands-on experience is expected to enrich the discussion on the role that VR and AR systems (with optical tracking) could and/or should play within Ambient Intelligence.