We present an algorithm for fusing data from a constellation of RF sensors detecting cellular emanations with the output of a multi-spectral video tracker to localize and track a target with a specific cell phone. The RF sensors measure the Doppler shift caused by the moving cellular emanation, and Doppler differentials between all sensor pairs are then calculated. The multi-spectral video tracker uses a Gaussian mixture model to detect foreground targets and SIFT features to track targets through the video sequence. The data are fused by associating the Doppler differential from the RF sensors with the theoretical Doppler differential computed from the multi-spectral tracker output. The absolute difference and the root-mean-square difference are computed to associate the Doppler differentials from the two sensor systems. Performance of the algorithm was evaluated using synthetically generated datasets of an urban scene with multiple moving vehicles. The presented fusion algorithm correctly associates the cellular emanation with the corresponding video target for low measurement uncertainty and in the presence of favorable motion patterns. For nearly all objects, the fusion algorithm associates the emanation with the correct multi-spectral target, rather than the most probable background target, with high confidence.
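The association step described above can be sketched in a few lines. The following Python is a minimal illustration, not the paper's implementation: the carrier frequency, sensor geometry, unit frame interval, and all function names are assumptions. It computes theoretical Doppler differentials for each candidate video track and selects the track minimizing the root-mean-square difference against the measured RF differentials.

```python
import numpy as np

C = 3.0e8        # speed of light, m/s
F_C = 1.9e9      # assumed carrier frequency of the cellular emanation, Hz

def doppler_shift(track, sensor):
    """Theoretical Doppler shift (Hz) at one sensor for a target trajectory.

    track: (T, 2) target positions over time; sensor: (2,) sensor position.
    Radial velocity is approximated by finite differences, assuming a
    unit frame interval.
    """
    rel = track - sensor                   # (T, 2) line-of-sight vectors
    rng = np.linalg.norm(rel, axis=1)      # range to sensor at each frame
    v_r = np.gradient(rng)                 # range rate, m/s
    return -F_C / C * v_r                  # receding target -> negative shift

def associate(measured_diff, tracks, sensors):
    """Pick the video track whose theoretical Doppler differentials best
    match the measured RF differentials under an RMS criterion.

    measured_diff: dict {(i, j): (T,) array} of per-sensor-pair differentials.
    Returns (index of best track, list of RMS scores).
    """
    scores = []
    for track in tracks:
        f = [doppler_shift(track, s) for s in sensors]
        rms = np.mean([
            np.sqrt(np.mean((f[i] - f[j] - measured_diff[i, j]) ** 2))
            for (i, j) in measured_diff
        ])
        scores.append(rms)
    return int(np.argmin(scores)), scores
```

In this sketch the score for the true track is zero by construction when the measurements are noise-free; in practice the measured differentials carry noise, and the margin between the best and second-best score serves as the association confidence.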
Characterization of turbulence in the atmosphere and mitigation of its effects in optical systems are important capabilities in both commercial and military applications. We present an image processing approach that jointly characterizes the magnitude of turbulence in the atmosphere and mitigates the adverse effects imposed on optical imaging systems. The magnitude of turbulence is measured indirectly through a series of image frames in terms of the atmospheric coherence length. Results on both simulated and experimental data demonstrate the utility of the approach.
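One common way to estimate the atmospheric coherence length (the Fried parameter r0) from a frame sequence, sketched below purely as an illustration since the abstract does not state the paper's method, is to measure frame-to-frame image shifts and invert a tilt-variance relation. The constant 0.182 is the standard one-axis G-tilt coefficient; the aperture, wavelength, and pixel scale are caller-supplied assumptions.

```python
import numpy as np

def frame_shift(ref, frame):
    """Integer-pixel shift of `frame` relative to `ref` via phase correlation."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame))
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    # wrap peak coordinates into a signed shift range
    return np.array([p if p <= s // 2 else p - s
                     for p, s in zip(peak, xc.shape)], dtype=float)

def fried_parameter(frames, wavelength, aperture, ifov):
    """Estimate r0 from tilt (global image-shift) variance over a sequence.

    Uses the one-axis G-tilt angle-of-arrival variance relation
        sigma^2 ~= 0.182 (D / r0)^(5/3) (lambda / D)^2,
    solved for r0. `ifov` is the pixel scale in radians per pixel;
    `aperture` is the pupil diameter D in meters.
    """
    ref = frames[0]
    shifts = np.array([frame_shift(ref, f) for f in frames[1:]]) * ifov
    sigma2 = shifts.var()   # one-axis angle-of-arrival variance, rad^2
    return aperture * (0.182 * (wavelength / aperture) ** 2 / sigma2) ** (3 / 5)
```

This tilt-based estimate is only one of several indirect measures; it assumes the dominant frame-to-frame motion is turbulence-induced tilt rather than platform jitter.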
A forward-looking and -moving ground-penetrating radar (GPR) acquires data that can be used for buried target detection. As the platform moves forward, the sensor can acquire and form a sequence of images for a common spatial region. Due to the near-field nature of relevant collection scenarios, the point-spread function (PSF) varies significantly as a function of spatial position, both within the scene and relative to the sensor platform. This variability of the PSF causes computational difficulties for matched-filter and related processing of the full video sequence. One approach to circumventing this difficulty is to coherently or incoherently integrate the video frames, and then perform detection processing on the integrated image. Here, averaging over the space- and motion-variant nature of the PSFs for each frame causes the PSF for the integrated image to appear less space-variant. Another alternative, and the one we investigate in this paper, is to transform each image from the conventional (range, cross-range) coordinate system to a (range, sine-angle) coordinate system in which the PSF is approximated as spatially invariant. The advantage of the (range, sine-angle) coordinate space is that methods that require space-invariance can be directly applied. Here we develop a multi-apodization approach, which results in a significantly improved image. To evaluate the relative advantages of this procedure, we empirically measure the integrated side-lobe ratio, which quantifies the reduction in the side-lobes before and after applying the algorithm.
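The coordinate transform at the heart of this approach can be sketched as an image resampling. The following Python is a simplified illustration under assumptions the abstract does not specify: the sensor sits at the origin looking down-range, the geometry uses the approximation sin(theta) ~ cross-range / range, and nearest-neighbor interpolation keeps the sketch dependency-free (a fielded system would use a higher-order interpolator on the true slant-range geometry).

```python
import numpy as np

def to_range_sine_angle(img, ranges, cross_ranges, n_sin=None):
    """Resample an image from (range, cross-range) to (range, sine-angle).

    img: (R, X) image; ranges: (R,) down-range bin centers, meters;
    cross_ranges: (X,) increasing cross-range bin centers, meters.
    Returns the resampled image and the sine-angle axis.
    """
    n_sin = n_sin or img.shape[1]
    # sine-angle extent covering the widest cone angle in the scene
    s_max = np.abs(cross_ranges).max() / ranges.min()
    sines = np.linspace(-s_max, s_max, n_sin)
    out = np.zeros((len(ranges), n_sin))
    idx = np.arange(len(cross_ranges), dtype=float)
    for i, r in enumerate(ranges):
        x = sines * r   # cross-range sampled by each output column at range r
        cols = np.rint(np.interp(x, cross_ranges, idx)).astype(int)
        valid = (x >= cross_ranges[0]) & (x <= cross_ranges[-1])
        out[i, valid] = img[i, cols[valid]]
    return out, sines
```

In the output grid, a fixed column corresponds to a fixed cone angle from the sensor rather than a fixed cross-range offset, which is what lets an angle-dependent PSF be treated as approximately shift-invariant.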