This article describes a novel approach to the real-time visualization of 3D imagery obtained from a 3D millimeter wave (MMW) radar and the associated sensor fusion modules. The MMW radar system uses two scanning beams to provide all-weather 3D distance measurements of objects on the ground. This information is displayed using our high-end 3D visualization engine, which is capable of rendering models of up to 100,000 polygons at 30 frames per second. The resulting 3D models can be viewed from any angle and subsequently processed to match them against 3D model data stored in a synthetic database. Such systems can be installed in aerial vehicles to process and display information merged from multiple image sources, including high-resolution MMW 2D and 3D radar images, a stored terrain and 3D airport database, and near-IR sensors. The resulting system provides safe all-time/all-visibility navigation in areas with challenging terrain and, in particular, real-time object detection during landing. This paper focuses on the real-time imaging and display aspects of our solution and discusses technical details of the radar design in the context of a practical application.
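As a rough illustration of the display-side fusion described above, the sketch below merges simulated MMW radar range returns with elevations from a stored terrain/airport database and flags returns that stand well above the expected ground surface. All names, data structures, and thresholds (e.g. `RadarReturn`, `terrain_height`, the 2 m clearance) are illustrative assumptions, not part of the actual system.

```python
# Hypothetical sketch: flag radar returns that protrude above the stored
# terrain/airport database, standing in for the fusion step described above.
# All data structures and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    x: float   # ground-plane position (m)
    y: float
    z: float   # measured height of the return (m)

def terrain_height(x: float, y: float) -> float:
    """Stand-in for a lookup into the stored terrain/3D airport database."""
    return 0.0  # flat reference surface in this toy example

def flag_obstacles(returns, clearance_m=2.0):
    """Return MMW radar hits that rise above the expected ground surface."""
    return [r for r in returns
            if r.z - terrain_height(r.x, r.y) > clearance_m]

if __name__ == "__main__":
    hits = [RadarReturn(10.0, 5.0, 0.3), RadarReturn(42.0, 7.5, 4.1)]
    for obstacle in flag_obstacles(hits):
        print(f"possible obstacle at ({obstacle.x}, {obstacle.y}), "
              f"height {obstacle.z} m")
```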
This paper presents an algorithm for the glossy global illumination problem that runs on the Graphics Processing Unit (GPU). To meet the architectural limitations of the GPU, we apply randomization in the iteration scheme. Randomization lets us restrict each step to the subset of possible light interactions that the GPU can compute efficiently, and it makes it unnecessary to read the result back to the CPU. Instead of tessellating the surface geometry, the radiance is stored in texture space and updated in each iteration. The visibility problem is solved by hardware shadow mapping after hemicube projection. The shooter of each iteration step is selected by a custom mipmapping scheme that realizes approximate importance sampling. The variance is further reduced by partial analytic integration.
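To make the shooter-selection idea concrete, here is a minimal CPU-side sketch of mipmap-based importance sampling: a sum pyramid is built over a radiance texture, and a texel is chosen with probability proportional to its radiance by descending the pyramid from the coarsest level. In the paper's method this runs on the GPU in texture space; the function names and data layout below are illustrative assumptions.

```python
# Hypothetical sketch of mipmap-based shooter selection (approximate
# importance sampling): a texel is picked with probability proportional to
# its radiance by descending a sum-mipmap pyramid from coarse to fine.

import random

def build_sum_pyramid(radiance):
    """radiance: 2^n x 2^n list of lists of non-negative values."""
    pyramid = [radiance]
    while len(pyramid[-1]) > 1:
        prev = pyramid[-1]
        n = len(prev) // 2
        pyramid.append([[prev[2*i][2*j] + prev[2*i][2*j+1] +
                         prev[2*i+1][2*j] + prev[2*i+1][2*j+1]
                         for j in range(n)] for i in range(n)])
    return pyramid  # pyramid[-1][0][0] holds the total radiance

def select_shooter(pyramid):
    """Descend from the coarsest level, choosing one of the four child
    texels with probability proportional to its summed radiance."""
    i = j = 0
    for level in range(len(pyramid) - 2, -1, -1):
        tex = pyramid[level]
        kids = [(2*i, 2*j), (2*i, 2*j+1), (2*i+1, 2*j), (2*i+1, 2*j+1)]
        weights = [tex[a][b] for a, b in kids]
        i, j = random.choices(kids, weights=weights)[0]
    return i, j  # texel index of the selected shooter

if __name__ == "__main__":
    radiance = [[1, 0, 0, 0], [0, 8, 0, 0], [0, 0, 1, 0], [0, 0, 0, 2]]
    pyramid = build_sum_pyramid(radiance)
    print("selected shooter texel:", select_shooter(pyramid))
```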
We describe our ongoing research on creating a Virtual Human Interface that employs photo-realistic virtual people and animated characters to provide digital media users with information, learning services, and entertainment in a highly personalized manner. Our system is designed to create emotional engagement between the virtual character and the user, thereby increasing the efficiency of learning and of absorbing any information broadcast through this device. We developed innovative technologies for (i) photo-real facial modeling and animation, (ii) context-dependent motion libraries with on-line retargeting, (iii) artificial emotions that modulate the characters' behavior, and (iv) artificial vision that makes the virtual human "aware" of its surroundings. The second key aspect of our solution is a simple-to-use, high-level content authoring process comprising video-based MPEG-4 facial tracking and an innovative interface called the "Disc Controller", which allows users to create new actors, make them move, and even direct them to achieve a final rendered output within minutes.
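As a rough sketch of the video-driven facial animation step, the example below assumes an MPEG-4-style pipeline in which tracked facial animation parameters (FAPs) displace the vertices of a neutral face mesh through precomputed per-vertex influence maps. The data layout and names are illustrative assumptions, not the system's actual API.

```python
# Hypothetical sketch: tracked MPEG-4-style facial animation parameters
# (FAPs) deform a neutral face mesh via per-vertex influence maps.

import numpy as np

def animate_face(neutral_vertices, fap_values, fap_displacements):
    """
    neutral_vertices:  (V, 3) neutral face mesh positions
    fap_values:        (F,)   tracked FAP amplitudes for the current frame
    fap_displacements: (F, V, 3) per-vertex displacement of each FAP at unit amplitude
    Returns the deformed (V, 3) vertex positions for this frame.
    """
    offsets = np.tensordot(fap_values, fap_displacements, axes=1)  # (V, 3)
    return neutral_vertices + offsets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.standard_normal((100, 3))           # toy face mesh
    displacements = rng.standard_normal((5, 100, 3))  # 5 toy FAP influence maps
    faps = np.array([0.2, 0.0, -0.1, 0.5, 0.0])       # tracked for one frame
    print(animate_face(neutral, faps, displacements).shape)  # (100, 3)
```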