Tourists and amateur photographers are often hindered in capturing their cherished images or videos by a fence or other occlusion that limits access to the scene of interest. Growing security concerns at public places have exacerbated the situation, and a need exists for a tool that can post-process such "fenced" videos to produce a "de-fenced" image. We identify three challenges in this problem: (i) robust detection of the fence/occlusions, (ii) estimation of the pixel motion of the background scene, and (iii) filling in of the fence/occlusions using information from multiple frames of the input video. We use a video captured by a camera panning the scene containing a fence and obtain a "de-fenced" image. Our method effectively removes fences from images, as demonstrated on several synthetic and real-world cases.

Index Terms: Image de-fencing, inpainting, belief propagation, Markov random field.

BACKGROUND

In recent times, security concerns have led to extra precautions, such as fences and barricades, at popular public places and monuments. For the tourist who wishes to capture his memories in an image or video at his favourite landmark, these pose a hindrance that degrades the captured data. It would be very useful if a post-processing tool existed that could efficiently rid the input video of such occlusion artifacts. It is common for the user to pan the camera while capturing a video of the scene in order to cover the entire landscape. A sample frame from a captured video is shown in Fig. 1 (a), wherein the fence occludes parts of the face and body. We observe that the motion cue in the video can be exploited to "de-fence" the degraded frames and obtain an image from which the fence has been removed. In Fig. 1 (c), we show a sample output of the proposed algorithm, which has successfully removed the occlusions due to fence pixels. There has been considerable progress in the area of image inpainting [3, 4, 5, 6, 7], in which most works assume that the …

Fig. 1. Image de-fencing: (a) A frame from the video captured by panning the person occluded by a fence. (b) Estimating the global relative motion of background pixels by matching corresponding points using the affine SIFT descriptor [1]. (c) De-fenced image obtained by the proposed algorithm. (d) A result from [2]. (e) Corresponding output of our technique.
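As a rough illustration of the motion-estimation step referred to in Fig. 1 (b), the sketch below matches keypoints between a reference frame and another frame of the panned video and fits a global affine model with RANSAC. This is a minimal sketch, not the authors' implementation: it uses OpenCV's plain SIFT in place of the affine-SIFT descriptor [1], and the function name and parameters are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): estimate the global
# relative motion of background pixels between a reference frame and another
# frame by matching SIFT keypoints and fitting an affine model with RANSAC.
# The paper matches corresponding points with the affine-SIFT descriptor [1];
# plain SIFT is used here only because it is readily available in OpenCV.
import cv2
import numpy as np

def estimate_background_motion(ref_gray, other_gray, ratio=0.75):
    """Return a 2x3 affine matrix mapping pixels of `other_gray` onto `ref_gray`."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_oth, des_oth = sift.detectAndCompute(other_gray, None)

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_oth, des_ref, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    src = np.float32([kp_oth[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects correspondences on the fence, which moves differently
    # from the background under camera panning.
    M, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                       ransacReprojThreshold=3.0)
    return M
```

The estimated matrix can then be used, for example, with `cv2.warpAffine` to bring the other frame into the reference frame's coordinates before fusing pixels.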
The advent of inexpensive smartphones, tablets, and phablets equipped with cameras has resulted in the average person capturing cherished moments as images or videos and sharing them on the internet. However, at several locations, an amateur photographer may be frustrated with the captured images; for example, the object of interest might be occluded or fenced. Currently available image de-fencing methods in the literature are limited by non-robust fence detection and can handle only static occluded scenes captured under constrained camera motion. In this work, we propose an algorithm to obtain a de-fenced image using a few frames from a video of the occluded static or dynamic scene. We also present a new fenced image database captured under challenging scenarios such as clutter, poor lighting, and viewpoint distortion. Initially, we propose a supervised learning-based approach to detect fence pixels and validate its performance with both qualitative and quantitative results. We rely on the idea that freehand panning of the fenced scene is likely to render pixels hidden in the reference frame visible in other frames of the captured video. Our approach necessitates the solution of three problems: (i) detection of the spatial locations of fences/occlusions in the frames of the video, (ii) estimation of the relative motion between the observations, and (iii) data fusion to fill in occluded pixels in the reference image. We model the de-fenced image as a Markov random field and obtain its maximum a posteriori estimate by solving the corresponding inverse problem. Several experiments on synthetic and real-world data demonstrate the effectiveness of the proposed approach.
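The following sketch illustrates a greatly simplified version of step (iii), data fusion. It assumes the fence masks from step (i) and the 2x3 affine motions from step (ii) (e.g., as estimated in the sketch above) are already available, and it simply copies each occluded reference pixel from the first frame in which it is visible. This is an assumed, simplified stand-in for the paper's actual formulation, which obtains a MAP estimate of the de-fenced Markov random field by solving the corresponding inverse problem; all names and parameters below are illustrative.

```python
# Simplified data-fusion sketch (assumption): fill fence pixels of the
# reference frame from other frames of the panned video. `masks` are binary
# fence masks (1 = fence) and `motions` are 2x3 affine matrices mapping each
# frame onto the reference. This is NOT the paper's MAP-MRF estimate; it is a
# naive pixel-wise fill used only to illustrate the idea.
import cv2
import numpy as np

def fuse_frames(ref, ref_mask, frames, masks, motions):
    defenced = ref.copy()
    missing = ref_mask.astype(bool)            # reference pixels still occluded
    h, w = ref_mask.shape
    ones = np.ones((h, w), np.uint8)
    for frame, mask, M in zip(frames, masks, motions):
        warped = cv2.warpAffine(frame, M, (w, h))
        warped_mask = cv2.warpAffine(mask.astype(np.uint8), M, (w, h),
                                     flags=cv2.INTER_NEAREST)
        in_frame = cv2.warpAffine(ones, M, (w, h)) > 0   # valid warped region
        # Pixels occluded in the reference but visible in this warped frame.
        visible = missing & (warped_mask == 0) & in_frame
        defenced[visible] = warped[visible]
        missing &= ~visible
    return defenced
```

In the paper, this naive copy is replaced by an energy-minimization over the MRF, so that the filled-in pixels are also spatially consistent with their neighbours rather than taken independently from the first available observation.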