Detecting and tracking moving objects in imaging through the atmosphere is challenging because atmospheric turbulence causes time-varying image shifts and blur. These phenomena significantly increase miss and false-detection rates in long-range horizontal imaging. We developed an efficient method, based on novel criteria for objects' spatio-temporal properties, that discriminates true from false detections following an adaptive thresholding procedure for foreground detection and activity-based masking of likely false alarms. The method is demonstrated on significantly distorted videos and compared with state-of-the-art methods, showing better false-alarm and miss-detection rates.
Surveillance in long-distance, turbulence-degraded video is a difficult challenge because atmospheric turbulence blurs and randomly shifts the image, and these degradations grow more severe as the imaging distance increases. This paper presents a method for surveillance in long-distance turbulence-degraded videos, based on new criteria for discriminating true from false object detections. We employ an adaptive thresholding procedure for background subtraction and introduce criteria, based on the temporal consistency of both shape and motion properties, for distinguishing true from false moving objects. Results show successful detection and tracking of moving objects in challenging video sequences significantly distorted by atmospheric turbulence, although false alarms may increase at greater imaging distances. The method presented here is relatively efficient and has low complexity.
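The pipeline described in these two abstracts could be sketched as follows. This is an illustrative toy, not the authors' implementation: the adaptive threshold here is a simple robust noise estimate (median absolute deviation), and the `temporally_consistent` persistence check stands in for the papers' richer spatio-temporal shape-and-motion criteria.

```python
import numpy as np

def detect_foreground(frame, background, k=2.5):
    """Flag pixels whose deviation from the background model exceeds
    an adaptive, noise-scaled threshold (illustrative sketch)."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    # Threshold adapts to the scene's noise level; a global robust
    # estimate (median absolute deviation) stands in for the paper's
    # adaptive thresholding procedure.
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    return diff > k * max(sigma, 1e-6)

def temporally_consistent(masks, min_hits=3):
    """Keep only detections that persist across several frames --
    a crude stand-in for the temporal-consistency criteria that
    suppress turbulence-induced false alarms."""
    return np.sum(np.stack(masks), axis=0) >= min_hits

# Usage: a persistent object survives; a one-frame flicker
# (e.g., a turbulence artifact) is rejected.
bg = np.zeros((10, 10))
frames = []
for t in range(4):
    f = bg.copy()
    f[4:6, 4:6] = 50          # persistent moving object
    if t == 0:
        f[0, 0] = 50          # single-frame turbulence flicker
    frames.append(f)

masks = [detect_foreground(f, bg) for f in frames]
final = temporally_consistent(masks, min_hits=3)
```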
We present a novel approach for inspecting variable data prints (VDP) with an ultra-low false alarm rate (0.005%) and potential applicability to other real-world problems. The system compares two images: a reference image and an image captured by a low-cost scanner. The comparison task is challenging because low-cost imaging systems create artifacts that may erroneously be classified as true (genuine) defects. To address this challenge we introduce two new fusion methods for change-detection applications, both fast and efficient. The first is an early fusion method that combines the two input images into a single pseudo-color image. The second, called Change-Detection Single Shot Detector (CD-SSD), builds on the SSD architecture by fusing features in the middle of the network. We demonstrate the effectiveness of the proposed deep learning-based approach on a large dataset from real-world printing scenarios. Finally, we evaluate our models on a different domain, aerial imagery change detection (AICD), where our best method clearly outperforms the state-of-the-art baseline.
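The early fusion idea can be illustrated with a minimal sketch. The channel assignment below (reference in red, scan in green, their mean in blue) is an assumption for illustration only; the abstract does not specify how the pseudo-color image is formed. The point is that unchanged regions come out gray while defects produce a color cast the downstream detector can learn from.

```python
import numpy as np

def early_fusion_pseudocolor(reference, scanned):
    """Fuse a grayscale reference and scanned image into one 3-channel
    pseudo-color image for a change detector (illustrative sketch;
    the paper's exact channel assignment is an assumption here).
    Where the images agree all channels match (gray); where they
    differ, the pixel takes on a color cast."""
    ref = reference.astype(np.float32)
    scan = scanned.astype(np.float32)
    # Hypothetical assignment: reference -> R, scan -> G, mean -> B.
    return np.stack([ref, scan, 0.5 * (ref + scan)], axis=-1)

# Usage: a single defective pixel in the scan shows up as a
# red/green imbalance in the fused image.
ref = np.zeros((4, 4), dtype=np.uint8)
scan = ref.copy()
scan[1, 1] = 200                      # a printing defect
fused = early_fusion_pseudocolor(ref, scan)
```

A single fused input lets an off-the-shelf single-stream detector be trained on change detection without architectural changes, which is the appeal of early fusion over mid-network feature fusion.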