Video visualization is a computational process that extracts meaningful information from original video data sets and conveys the extracted information to users in appropriate visual representations. This paper presents a broad treatment of the subject, following a typical research pipeline involving concept formulation, system development, a path-finding user study, and a field trial with real application data. In particular, we have conducted a fundamental study on the visualization of motion events in videos. We have, for the first time, deployed flow visualization techniques in video visualization. We have compared the effectiveness of different abstract visual representations of videos. We have conducted a user study to examine whether users are able to learn to recognize visual signatures of motions, and to assist in the evaluation of different visualization techniques. We have applied our understanding and the developed techniques to a set of application video clips. Our study has demonstrated that video visualization is both technically feasible and cost-effective. It has provided the first set of evidence confirming that ordinary users can become accustomed to the visual features depicted in video visualizations, and can learn to recognize visual signatures of a variety of motion events.
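To make the notion of a visual signature concrete, the sketch below collapses a video volume into a single image in which frequently changing pixels appear bright. This is only a minimal illustration of motion-event abstraction; the function name, data layout, and accumulation scheme are assumptions for illustration, not the flow-based techniques evaluated in the paper.

```python
import numpy as np

def motion_signature(frames: np.ndarray) -> np.ndarray:
    """Collapse a grayscale video volume of shape (T, H, W) into a
    single 2D image summarizing where motion occurred.

    Minimal sketch: per-pixel absolute temporal differences are
    accumulated over time, so regions with frequent change appear
    bright. Real video visualization systems use far richer
    abstractions (e.g. optical-flow-based signatures)."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    signature = diffs.sum(axis=0)                               # accumulate change
    peak = signature.max()
    return signature / peak if peak > 0 else signature          # normalize to [0, 1]

# Usage: 100 synthetic 64x64 frames containing a bright block
# that drifts horizontally; the signature shows a bright band
# along the block's trajectory.
frames = np.zeros((100, 64, 64), dtype=np.float32)
for t in range(100):
    frames[t, 20:30, t % 54 : t % 54 + 10] = 1.0
sig = motion_signature(frames)
```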
One challenge in video processing is to detect actions and events, known or unknown, in video streams dynamically. This paper proposes a visualization solution, where a video stream is depicted as a series of snapshots at a relatively sparse interval, and detected actions are highlighted with continuous abstract illustrations. The combined imagery and illustrative visualization conveys multi-field information in a manner similar to electrocardiograms (ECGs) and seismographs. We thus name this type of video visualization VideoPerpetuoGram (VPG). In this paper, we describe a system that handles the raw and processed information of the video stream in a multi-field visualization pipeline. As examples, we consider the needs for highlighting several types of processed information, including detected actions in video streams and estimated relationships between recognized objects. We examine effective means for depicting multi-field information in VPG, and support our choice of visual mappings through a survey. Our GPU implementation facilitates the VPG-specific viewing specification through a sheared object space, as well as volume bricking and combinational rendering of volume data and glyphs.
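The sheared object space can be pictured as a shear of the video volume's time axis, so that snapshots taken at sparse intervals fan out diagonally, ECG-like, rather than occluding one another. The sketch below applies such a shear to object-space sample points; the matrix parameterization is a hypothetical illustration, not the paper's GPU ray-casting implementation.

```python
import numpy as np

def shear_time_axis(points: np.ndarray, shear: float = 0.5) -> np.ndarray:
    """Offset each (x, y) sample along x in proportion to its time
    coordinate t, so snapshots at sparse intervals are laid out
    diagonally instead of stacked on top of each other.

    points: (N, 3) array of (x, y, t) object-space samples.
    Hypothetical parameterization; the VPG system realizes the
    sheared object space inside a GPU ray-casting pipeline."""
    S = np.array([[1.0, 0.0, shear],   # x' = x + shear * t
                  [0.0, 1.0, 0.0],     # y' = y
                  [0.0, 0.0, 1.0]])    # t' = t
    return points @ S.T

# Usage: shear three 2x2 snapshot planes taken at t = 0, 10, 20;
# each later snapshot is shifted further along x.
snapshots = np.array([[x, y, t] for t in (0, 10, 20)
                      for y in range(2) for x in range(2)], dtype=float)
laid_out = shear_time_axis(snapshots)
```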
Real-time performance for rendering multiple intersecting volumetric objects requires the speed and flexibility of modern GPUs. This requirement has restricted programming of the necessary shaders to GPU experts only. A visualization system that dynamically generates GPU shaders for multi-volume ray casting from a user-definable abstract render graph overcomes this limitation.
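As a rough illustration of the idea, the sketch below walks a tiny render graph, with volume-sampling leaves and blend nodes, and assembles GLSL fragment-shader source as a string. The node types, names, and the emitted shader are assumptions for illustration only; the actual system supports a much richer render-graph vocabulary and generates full multi-volume ray-casting shaders.

```python
from dataclasses import dataclass

# Hypothetical render-graph nodes: leaves sample one volume,
# inner nodes blend two subgraphs with a constant mix factor.

@dataclass
class Volume:
    sampler: str            # name of a GLSL sampler3D uniform

@dataclass
class Blend:
    left: object
    right: object
    weight: float           # constant mix factor for this sketch

def emit(node) -> str:
    """Recursively translate a render-graph node into a GLSL expression."""
    if isinstance(node, Volume):
        return f"texture({node.sampler}, pos)"
    if isinstance(node, Blend):
        return (f"mix({emit(node.left)}, {emit(node.right)}, "
                f"{node.weight:.3f})")
    raise TypeError(f"unknown node: {node!r}")

def generate_fragment_shader(graph) -> str:
    """Wrap the generated expression in a minimal fragment shader."""
    return (
        "#version 330 core\n"
        "uniform sampler3D volA, volB;\n"
        "in vec3 pos;\n"
        "out vec4 color;\n"
        "void main() {\n"
        f"    color = {emit(graph)};\n"
        "}\n"
    )

# Usage: blend two intersecting volumes 60/40 and print the
# generated shader source.
graph = Blend(Volume("volA"), Volume("volB"), 0.4)
print(generate_fragment_shader(graph))
```

Generating the shader from the graph means a change in how the volumes are combined requires only rebuilding a string and recompiling, not hand-editing GPU code, which is the limitation the abstract describes overcoming.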