New photobathymetry and water quality software is described here that uses subpixel analysis (the Subpixel Classifier) with an autonomous image calibration procedure and an analytic retrieval algorithm to simultaneously retrieve and report bottom depth and the concentrations of suspended chlorophyll, suspended sediments, and colored dissolved organic carbon on a per-pixel basis from four-band multispectral image data. From the derived composition, the QSC2 (Quantitative Shoreline Characterization, Version 2.0) software also computes and reports water column visibility parameters (vertical and horizontal subsurface sighting ranges and turbidity, each at four wavelength band passes, plus Secchi depth as a scalar) as well as depth and turbidity confidence. QSC2 compensates for the effects of the atmosphere, sun and sky reflections from the water surface, subpixel contributions from exposed land, and variations in bottom material properties. All information is derived automatically from the pixel data alone. The performance of the QSC2 software was demonstrated using a four-band Ikonos image of Plymouth, Massachusetts. Accuracies of the image-derived compositions, water clarity, and depths were assessed using field and laboratory measurements for eight representative lakes in the scene. The means of the differences between the field-measured and image-derived suspended chlorophyll and colored dissolved organic carbon concentrations for the eight lakes were 1.82 µg/l and 4.34 mgC/l, respectively. The image-derived concentrations of suspended sediments were all below the threshold of detection for the field samples (5 mg/l), in agreement with the field data. The mean of the differences between field-measured and image-derived Secchi depths was 0.76 m. The mean depth difference was 0.57 m.
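The abstract does not give QSC2's retrieval equations, but the clarity-reporting step can be illustrated with a toy model. The sketch below assumes a linear bio-optical mixing model for the band-wise diffuse attenuation coefficient K_d and a fixed Secchi coupling constant; every coefficient value, constant, and function name here is an illustrative assumption, not QSC2's published algorithm.

```python
import numpy as np

# Illustrative band-specific coefficients (blue, green, red, NIR) -- NOT QSC2 values.
K_WATER = np.array([0.02, 0.05, 0.35, 2.50])   # pure-water diffuse attenuation, 1/m
A_CHL   = np.array([0.06, 0.03, 0.02, 0.01])   # specific attenuation, m^2/mg chlorophyll
A_SED   = np.array([0.10, 0.08, 0.07, 0.05])   # specific attenuation, m^2/g sediment
A_CDOM  = np.array([0.12, 0.04, 0.01, 0.005])  # specific attenuation, m^2/gC CDOM

def clarity_from_composition(chl_ug_l, sed_mg_l, cdom_mgC_l):
    """Toy clarity model: K_d as a linear mix of constituent attenuations.

    chl_ug_l in ug/l (= mg/m^3), sed_mg_l in mg/l (= g/m^3), and
    cdom_mgC_l in mgC/l (= gC/m^3), so each product reduces to 1/m per band.
    Returns band-wise K_d, band-wise vertical sighting range, and a scalar
    Secchi-depth estimate.
    """
    kd = K_WATER + A_CHL * chl_ug_l + A_SED * sed_mg_l + A_CDOM * cdom_mgC_l
    sighting_range = 1.45 / kd            # assumed Secchi-style coupling constant
    secchi_m = sighting_range[:3].max()   # clearest visible band governs Secchi depth
    return kd, sighting_range, secchi_m

# Example: a moderately clear lake pixel.
kd, vrange, secchi = clarity_from_composition(chl_ug_l=2.0, sed_mg_l=1.0, cdom_mgC_l=4.0)
print(f"Secchi depth estimate: {secchi:.1f} m")
```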
The theory of opponent-sensor image fusion is based on neural circuit models of adaptive contrast enhancement and opponent-color interaction, as developed and previously presented by Waxman, Fay, et al. This approach can directly fuse two to five imaging sensors, e.g., VNIR, SWIR, MWIR, and LWIR for fused night vision. The opponent-sensor images also provide input to a point-and-click fast-learning approach for target fingerprinting (pattern learning and salient feature discovery) and subsequent target search. We have recently developed a real-time implementation of multi-sensor image fusion and target learning and search on a single-board attached processor for a laptop computer. In this paper we will review our approach to image fusion and target learning, and demonstrate fusion and target detection using an array of VNIR, SWIR, and LWIR imagers. We will also show results from night data collections in the field. This opens the way to digital fused night-vision goggles, weapon sights, and turrets that fuse multiple sensors and learn to find targets designated by the operator.
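As a rough illustration of the adaptive contrast enhancement and opponent interaction the abstract describes, the sketch below applies a Grossberg-style steady-state shunting center-surround operator to each sensor image and forms a single-opponent combination of two bands. The operator form is standard in this family of models, but the parameters, normalization, and false-color mapping here are simplified assumptions, not the authors' published real-time implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(img, sigma_c=1.0, sigma_s=4.0, a=0.5):
    """Steady-state shunting on-center/off-surround operator.

    For a nonnegative image scaled to [0, 1], returns a contrast-enhanced,
    ratio-normalized result in roughly [-1, 1]. sigma_c, sigma_s, and the
    decay term a are illustrative, not the published parameters.
    """
    img = img.astype(float)
    center = gaussian_filter(img, sigma_c)
    surround = gaussian_filter(img, sigma_s)
    return (center - surround) / (a + center + surround)

def single_opponent(enhanced_a, enhanced_b, a=0.5):
    """Opponent combination of two contrast-enhanced sensor images,
    e.g. LWIR-vs-VNIR; positive where sensor A dominates, negative where B does."""
    return (enhanced_a - enhanced_b) / (a + np.abs(enhanced_a) + np.abs(enhanced_b))

# Example fusion of two registered sensor frames into a false-color composite.
rng = np.random.default_rng(0)
vnir = rng.random((256, 256))   # stand-ins for registered, normalized frames
lwir = rng.random((256, 256))
e_vnir = shunting_center_surround(vnir)
e_lwir = shunting_center_surround(lwir)
opp = single_opponent(e_lwir, e_vnir)
# Map the enhanced and opponent channels into RGB for display (one of many choices).
rgb = np.dstack([
    np.clip(0.5 + 0.5 * opp, 0, 1),     # red-green axis from the opponent signal
    np.clip(0.5 + 0.5 * e_vnir, 0, 1),
    np.clip(0.5 + 0.5 * e_lwir, 0, 1),
])
```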