In a photograph, motion blur can serve as an artistic device to convey motion and to direct attention. In panning or tracking shots, the camera follows a moving object of interest during a relatively long exposure. The goal is a blurred background while the object stays sharp. Unfortunately, following the object precisely is difficult, sometimes impossible, and often requires many attempts or specialized physical setups.
This paper presents a novel approach to creating such images. For capturing, the user only needs to record a casual hand-held video that roughly follows the object. Our algorithm then produces a single image that simulates a stabilized long exposure. This is achieved by first warping all frames so that the object of interest is aligned to a reference frame. Optical-flow-based frame interpolation is then used to reduce ghosting artifacts caused by temporal undersampling. Finally, the frames are averaged to create the result.
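To make the pipeline concrete, the following is a minimal Python/OpenCV sketch of the align-then-average idea. It is not the paper's implementation: ORB feature matching with a RANSAC homography stands in for the alignment step, the flow-based interpolation is omitted, and `video_path`, `ref_index`, and `object_mask` are illustrative parameters rather than names from the paper.

```python
# Minimal sketch of a stabilized long exposure, under the assumptions above.
import cv2
import numpy as np

def stabilized_long_exposure(video_path, ref_index=0, object_mask=None):
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()

    ref = frames[ref_index]
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # The paper aligns the object of interest; restricting features to an
    # object mask approximates that. Without a mask, whatever dominates the
    # matches (often the background) gets stabilized instead.
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    ref_kp, ref_des = orb.detectAndCompute(ref_gray, object_mask)

    acc = np.zeros(ref.shape, dtype=np.float64)
    n = 0
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if des is None:
            continue
        matches = matcher.match(des, ref_des)
        if len(matches) < 8:
            continue  # too few correspondences for a reliable homography
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([ref_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H is None:
            continue
        # Warp the frame so the tracked content lands on the reference frame,
        # then accumulate; the average simulates the long exposure.
        warped = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
        acc += warped.astype(np.float64)
        n += 1
    return (acc / max(n, 1)).astype(np.uint8)
```

Because the averaging is a plain mean, any frame that cannot be aligned is skipped rather than blended in, which in this sketch replaces the paper's interpolation-based handling of temporal undersampling.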
As our method avoids segmentation and requires little to no user interaction, even challenging sequences can be processed successfully. In addition, artistic control is available in several ways, and the effect can also be applied to create videos with exaggerated motion blur. Results are compared with previous methods and with ground-truth simulations. The effectiveness of our method is demonstrated on hundreds of datasets; the most interesting results are shown in the paper and in the supplemental material.
Existing visual surveillance systems typically require human operators to observe video streams from many cameras, which becomes infeasible as the number of cameras grows. In this paper, we present a new surveillance system that combines automatic video analysis (i.e., single-person tracking and crowd analysis) with interactive visualization. Our novel visualization exploits a high-resolution display and given 3D information to focus the operator's attention on interesting or critical areas of the observed scene. This is realized by embedding the results of automatic scene-analysis techniques into the visualization. Several visualization modes are provided, and the user can easily switch between them to select the one that conveys the most information. The system is demonstrated on a real setup on a university campus.
Figure 1: Light paintings created with our approach. Production took only minutes; design and modifications are performed in real time.
Light painting is an art form in which a light source is moved during a long-exposure shot, creating trails that resemble strokes on a canvas. It is very difficult to perform because the light source must be moved at the intended speed and along a precise trajectory. Additionally, the person moving the light can appear in and corrupt the image. We propose computational light painting, which avoids such artifacts and is easy to use. Taking a video of the moving light as input, a virtual exposure allows us to draw the intended light positions in a post-process. We support animation, as well as 3D light sculpting, with high-quality results.
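As a rough illustration, a virtual exposure can be approximated with a per-pixel maximum ("lighten") composite over a user-chosen frame range, a common trick for accumulating light trails. This sketch is not the paper's method: it omits the artifact removal, animation, and 3D sculpting, and `input.mp4` along with the `start`/`end` range are placeholder parameters.

```python
# Minimal "virtual exposure" sketch using max compositing, as assumed above.
import cv2
import numpy as np

def virtual_exposure(video_path, start=0, end=None):
    cap = cv2.VideoCapture(video_path)
    canvas = None
    index = 0
    ok, frame = cap.read()
    while ok:
        if index >= start and (end is None or index < end):
            if canvas is None:
                canvas = frame.copy()
            else:
                # Keep the brightest value seen at each pixel, so the moving
                # light leaves a trail while the static scene stays unchanged.
                canvas = np.maximum(canvas, frame)
        index += 1
        ok, frame = cap.read()
    cap.release()
    return canvas

# Re-running with a different (start, end) range redraws the stroke without
# recapturing, which is what makes post-process design and edits possible.
trail = virtual_exposure("input.mp4", start=30, end=240)
cv2.imwrite("light_painting.png", trail)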