We present an automatic and robust technique for creating non-photorealistic rendering (NPR) and animation from a source video; the result depicts the shape details of the underlying objects and follows their motion. We generate the NPR of the initial frame using a greedy algorithm for stroke placement and modeling, in combination with a saliency map and a flow-guided difference-of-Gaussians filter. Our stroke model uses a set of triangles whose vertices are particles and whose edges are springs. Within a physics-based framework, the generated and rendered strokes are translated, rotated, and deformed by forces derived from the subsequent frames. The external forces acting on strokes are computed from temporally and spatially smoothed per-pixel optical flow vectors. After simulating each frame, we delete strokes for disappearing objects and add strokes for appearing ones, but only when necessary, to avoid popping and scintillation. Our framework automatically generates a coherent animation of rendered strokes that preserves appearance details and moves with the underlying objects, which had been difficult to achieve with previous user-guided methods or automatic but limited transformation methods.
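The stroke dynamics described above can be illustrated with a short sketch. The following is a minimal illustration, not the authors' implementation: a Stroke holds particle positions and spring rest lengths, an external force is sampled from a precomputed smoothed optical-flow field, and a semi-implicit Euler step updates the particles. The flow array, constants, and function names are all assumptions.

```python
import numpy as np

K_SPRING = 40.0        # spring stiffness (assumed value)
DAMPING  = 0.9         # velocity damping per step (assumed value)
DT       = 1.0 / 30.0  # one video frame at 30 fps

class Stroke:
    """A stroke triangle: vertices are particles, edges are springs."""
    def __init__(self, vertices):
        self.pos = np.asarray(vertices, dtype=float)  # (n, 2) particle positions
        self.vel = np.zeros_like(self.pos)            # particle velocities
        n = len(self.pos)
        self.springs = [(i, (i + 1) % n) for i in range(n)]
        # rest lengths taken from the initial (undeformed) shape
        self.rest = [np.linalg.norm(self.pos[i] - self.pos[j])
                     for i, j in self.springs]

def sample_flow(flow, p):
    """Nearest-neighbour lookup of the smoothed per-pixel flow at point p."""
    h, w = flow.shape[:2]
    x = int(np.clip(p[0], 0, w - 1))
    y = int(np.clip(p[1], 0, h - 1))
    return flow[y, x]

def advect_stroke(stroke, flow):
    """One simulation step: external optical-flow force plus spring forces."""
    force = np.zeros_like(stroke.pos)
    for i, p in enumerate(stroke.pos):          # external force per particle
        force[i] += sample_flow(flow, p) / DT
    for (i, j), r in zip(stroke.springs, stroke.rest):
        d = stroke.pos[j] - stroke.pos[i]
        length = np.linalg.norm(d) + 1e-8
        f = K_SPRING * (length - r) * (d / length)  # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    stroke.vel = DAMPING * (stroke.vel + DT * force)  # semi-implicit Euler
    stroke.pos += DT * stroke.vel

# Usage: a uniform 2 px/frame rightward flow translates the whole stroke.
flow = np.zeros((480, 640, 2)); flow[..., 0] = 2.0
s = Stroke([(100.0, 100.0), (110.0, 100.0), (105.0, 92.0)])
advect_stroke(s, flow)
```

The springs resist stretching, so a stroke follows the flow field as a coherent unit rather than having its vertices drift apart independently.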
The main purpose of virtual reality (VR) is to enhance realism and the player experience. To this end, we focus on VR interaction design, analyze existing interaction solutions, including both accurate and rough interaction methods, and propose a new method for creating stable and realistic player interactions in a first-person shooter (FPS) game prototype. In this research, we design and modify existing mapping methods between the physical and virtual worlds, and create interfaces in which physical devices correspond to shooting tools in virtual reality. Moreover, we propose and design prototypes of universal interactions that can be implemented in a simple and straightforward way. The proposed interactions allow the player to perform actions similar to real shooting using both hands, such as firing, reloading, and attaching and grabbing objects. In addition, we develop a gun template with haptic feedback and a visual collision guide that can optionally be enabled. We then evaluate and compare our methods with existing solutions, use them in a VR FPS game prototype, and conduct a user study; the results indicate that the proposed method is more stable, player-friendly, and realistic.

INDEX TERMS: Virtual reality, player interfaces, human computer interaction, interaction design, first-person shooting game.

KYOUNGJU PARK received the B.E. degree in computer engineering from Ewha Womans University in 1997, and the M.S. and Ph.D. degrees in computer and information science from the University of Pennsylvania in 2000 and 2005, respectively. After receiving her Ph.D., she was a Research Professor with Rutgers University and a Senior Engineer with Samsung Electronics. In 2007, she joined Chung-Ang University, Seoul, South Korea, as a faculty member. Her research interests include virtual reality, computer graphics, and interaction.
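As a rough illustration of how such a physical-to-virtual mapping layer can be organized, the sketch below routes generic controller events through a per-hand mapping table into virtual shooting actions, with a gun template that tracks ammunition and triggers haptic feedback. All class, event, and action names are hypothetical; the paper's actual prototype is a VR engine implementation, not this standalone code.

```python
from enum import Enum, auto

class Action(Enum):
    FIRE = auto()
    RELOAD = auto()
    GRAB = auto()
    ATTACH = auto()

# One mapping table for both hands: (hand, physical event) -> virtual action.
MAPPING = {
    ("right", "trigger_press"): Action.FIRE,
    ("right", "button_press"):  Action.RELOAD,
    ("left",  "grip_press"):    Action.GRAB,
    ("left",  "trigger_press"): Action.ATTACH,  # e.g. attach a magazine
}

class GunTemplate:
    """Tracks ammunition and emits a haptic pulse for each virtual action."""
    def __init__(self, magazine_size=12):
        self.magazine_size = magazine_size
        self.ammo = magazine_size

    def haptic(self, hand, strength):
        # Stand-in for a controller vibration call in the VR runtime.
        print(f"haptic pulse on {hand} hand, strength {strength:.1f}")

    def handle(self, hand, event):
        action = MAPPING.get((hand, event))
        if action is Action.FIRE and self.ammo > 0:
            self.ammo -= 1
            self.haptic(hand, 1.0)   # strong kick when firing
        elif action is Action.RELOAD:
            self.ammo = self.magazine_size
            self.haptic(hand, 0.3)   # soft click on reload
        return action

gun = GunTemplate()
gun.handle("right", "trigger_press")  # FIRE: ammo 12 -> 11
gun.handle("right", "button_press")   # RELOAD: ammo back to 12
```

Centralizing the mapping in one table is one way to keep the interactions "universal": the same handler serves any controller whose events can be named in the table.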
With the advent of deep learning methods, portrait video stylization has become increasingly popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divide the input frame into three regions: the facial-feature region, the inner-face region bounded by 36 face contour landmarks, and the outer-face region. While keeping the facial-feature region as it is, we use two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation, we combine deformable strokes with optical flow estimated between adjacent frames to follow the underlying motion coherently. The experimental results demonstrate that our method not only effectively preserves the small and distinct facial features, but also follows the underlying motion coherently.
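A minimal sketch of this pipeline is given below, assuming an upstream Mask R-CNN-style model has already produced a facial-feature mask and the 36 contour landmarks. OpenCV's Farneback optical flow stands in for whichever estimator the method actually uses, and the function names are illustrative.

```python
import cv2
import numpy as np

def split_regions(frame_shape, feature_mask, contour_landmarks):
    """Divide a frame into facial-feature / inner-face / outer-face masks."""
    h, w = frame_shape[:2]
    contour = np.zeros((h, w), np.uint8)
    # Inner face: the polygon bounded by the 36 contour landmarks.
    cv2.fillPoly(contour, [contour_landmarks.astype(np.int32)], 255)
    inner = cv2.bitwise_and(contour, cv2.bitwise_not(feature_mask))
    outer = cv2.bitwise_not(contour)  # everything outside the face contour
    return feature_mask, inner, outer

def propagate_strokes(prev_gray, gray, stroke_points):
    """Advect stroke anchor points with dense optical flow between frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    pts = np.clip(stroke_points.astype(int),
                  0, [gray.shape[1] - 1, gray.shape[0] - 1])
    # Move each stroke anchor by the flow vector at its (rounded) position.
    return stroke_points + flow[pts[:, 1], pts[:, 0]]
```

Keeping the facial-feature mask untouched while rendering the inner and outer regions with separate stroke models mirrors the three-way split described in the abstract.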