In daylight viewing conditions, image contrast is often significantly degraded by atmospheric aerosols such as haze and fog. This paper introduces a method for reducing this degradation in situations in which the scene geometry is known. Contrast is lost because light is scattered toward the sensor by the aerosol particles and because the light reflected by the terrain is attenuated by the aerosol. This degradation is approximately characterized by a simple, physically based model with three parameters. The method involves two steps: first, an inverse problem is solved to recover the three model parameters; then, for each pixel, the relative contributions of scattered and reflected flux are estimated. The estimated scatter contribution is subtracted from the pixel value, and the remainder is scaled to compensate for aerosol attenuation. This paper describes the image processing algorithm and presents an analysis of the signal-to-noise ratio (SNR) in the resulting enhanced image. The analysis shows that the SNR decreases exponentially with range, and a temporal filter structure is proposed to address this problem. Results are presented for two image sequences taken from an airborne camera in hazy conditions and one sequence in clear conditions. Satisfactory agreement between the model and the experimental data is shown for the haze conditions. A significant improvement in image quality is demonstrated when the contrast enhancement algorithm is used in conjunction with the temporal filter.
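The two-step correction can be illustrated with the standard single-scattering haze model, in which the observed intensity is I = J·t + A·(1 − t) with transmission t = exp(−β·d) for range d. The sketch below is a minimal Python illustration under that assumption; the function and parameter names (`dehaze_known_range`, `beta`, `airlight`) are hypothetical, and the paper's exact three-parameter model may differ.

```python
import numpy as np

def dehaze_known_range(image, rng_m, beta, airlight):
    """Sketch of the per-pixel correction, assuming the standard
    single-scattering model I = J*t + A*(1 - t) with transmission
    t = exp(-beta * range). Names and constants are illustrative;
    image is assumed normalized to [0, 1], rng_m is per-pixel range."""
    t = np.exp(-beta * rng_m)                   # aerosol transmission
    scatter = airlight * (1.0 - t)              # additive airlight term
    reflected = image - scatter                 # subtract scattered flux
    restored = reflected / np.maximum(t, 1e-3)  # compensate attenuation
    return np.clip(restored, 0.0, 1.0)

def temporal_filter(prev, current, alpha=0.8):
    """First-order recursive frame averaging; alpha is an assumed
    smoothing factor, not a value given in the paper."""
    return alpha * prev + (1.0 - alpha) * current
```

Because t falls off exponentially with range, dividing by it amplifies sensor noise exponentially; averaging successive frames with a recursive filter, as sketched in `temporal_filter`, trades temporal resolution for SNR.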
A field project that includes surface observations, remote sensing, and forecast models provides a better understanding of fog-induced low visibility and improves the parameterization of fog microphysics.
Degradation of images by the atmosphere is a familiar problem. For example, when terrain is imaged from a forward-looking airborne camera, atmospheric degradation causes a loss of both contrast and color information. Enhancement of such images is difficult because both the luminance and the chrominance must be restored while maintaining good color fidelity. One particular complication is that the level of contrast loss depends strongly on wavelength. A novel method is presented for the enhancement of color images. The method is based on the underlying physics of the degradation process, and the parameters required for enhancement are estimated from the image itself.
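Since the contrast loss is strongly wavelength-dependent, a physics-based colour enhancement can be sketched as the single-channel correction above applied independently per channel with channel-specific parameters. This is a minimal sketch assuming the same single-scattering model and a known per-pixel range; the names `betas` and `airlights` are illustrative stand-ins for the parameters the paper estimates from the image itself.

```python
import numpy as np

def enhance_color(image, rng_m, betas, airlights):
    """Per-channel physical correction. betas/airlights hold one
    (assumed) extinction coefficient and airlight level per channel;
    image is assumed float RGB in [0, 1], rng_m is per-pixel range."""
    out = np.empty_like(image, dtype=np.float64)
    for ch in range(image.shape[-1]):
        t = np.exp(-betas[ch] * rng_m)  # wavelength-dependent transmission
        out[..., ch] = (image[..., ch] - airlights[ch] * (1.0 - t)) \
                       / np.maximum(t, 1e-3)
    return np.clip(out, 0.0, 1.0)
```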
This paper is concerned with the mitigation of simple contrast loss due to added lightness in an image. This added lightness is referred to as "airlight" in the literature, since it is often caused by optical scattering due to fog or mist. A statistical model for scene content is formulated that provides a way of detecting the presence of airlight in an arbitrary image. An algorithm is described for estimating the level of this airlight under the assumption that it is constant throughout the image. The algorithm is based on finding the minimum of a global cost function and is applicable to both monochrome and color images. The method is robust and insensitive to scaling. Once an estimate of the airlight is obtained, image correction is straightforward. The performance of the algorithm is explored using Monte Carlo simulation with synthetic images under different statistical assumptions. Several before-and-after color examples are given. Results with real video data obtained in poor visibility conditions indicate frame-to-frame consistency better than 1% of the maximum level.
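The estimation step can be sketched as a one-dimensional bounded minimisation over candidate constant airlight levels. The cost function below is an illustrative stand-in, not the paper's statistical cost: it exploits the fact that once the additive airlight is removed, scene content is approximately multiplicative, so the local contrast ratio std/mean should be consistent across image blocks.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def block_cv_cost(img, c, block=16):
    """Illustrative global cost (NOT the paper's statistical model):
    variance, across blocks, of the local contrast std/mean of the
    airlight-corrected image. Small when the residual is multiplicative."""
    cvs = []
    h, w = img.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block] - c
            m = b.mean()
            if m > 1e-6:
                cvs.append(b.std() / m)
    return np.var(cvs) if cvs else np.inf

def estimate_airlight(img):
    """1-D bounded minimisation over the constant airlight level c;
    an additive airlight cannot exceed the darkest pixel."""
    upper = max(float(img.min()), 1e-6)
    res = minimize_scalar(lambda c: block_cv_cost(img, c),
                          bounds=(0.0, upper), method="bounded")
    return res.x
```

With the estimate in hand, correction reduces to subtracting c and rescaling the remainder, consistent with the abstract's remark that correction is then straightforward.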
There is no standard method for classifying eye fixations. Thresholds for speed, acceleration, duration, and stability of the point of gaze have each been employed to demarcate data, but they have no commonly accepted values. Here, some general distributional properties of eye movements were used to construct a simple method for classifying fixations without parametric assumptions or expert judgment. The method was primarily speed-based, but the required optimum speed threshold was derived automatically from the individual data for each observer and stimulus with the aid of Tibshirani, Walther, and Hastie's 'gap statistic'. An optimum duration threshold, also derived automatically from the individual data, was used to eliminate the effects of instrumental noise. The method was tested on data recorded from a video eye-tracker sampling at 250 frames a second while experimental observers viewed static natural scenes in over 30,000 one-second trials. The resulting classifications were compared with those of three independent expert visual classifiers, with 88-94% agreement, and also against two existing parametric methods. Robustness to instrumental noise and sampling rate was verified in separate simulations. The method was applied to the recorded data to illustrate the variation of mean fixation duration and saccade amplitude across observers and scenes.
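A minimal sketch of the speed-plus-duration classification is given below. Where the paper derives the optimum speed threshold per observer and stimulus with Tibshirani, Walther, and Hastie's gap statistic, the sketch substitutes a simple two-means split of the log-speed distribution, and `min_dur` is an illustrative duration threshold rather than the automatically derived one.

```python
import numpy as np

def classify_fixations(gaze, fs=250.0, min_dur=0.05):
    """Speed-based fixation classification sketch.
    gaze: (N, 2) point-of-gaze positions in degrees; fs: sampling rate
    in Hz; min_dur: assumed minimum fixation duration in seconds."""
    v = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * fs  # deg/s
    logv = np.log(v + 1e-9)

    # Two-means split of log speed stands in for the gap-statistic step:
    # the slow cluster is taken as fixation samples.
    lo, hi = logv.min(), logv.max()
    for _ in range(50):
        thr = 0.5 * (lo + hi)
        slow, fast = logv[logv <= thr], logv[logv > thr]
        if slow.size == 0 or fast.size == 0:
            break
        lo, hi = slow.mean(), fast.mean()
    is_fix = np.concatenate([[False], v <= np.exp(thr)])

    # Merge consecutive fixation samples into runs and drop runs
    # shorter than the duration threshold (noise suppression).
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if (i - start) / fs >= min_dur:
                fixations.append((start, i))
            start = None
    if start is not None and (len(is_fix) - start) / fs >= min_dur:
        fixations.append((start, len(is_fix)))
    return fixations  # list of (start_sample, end_sample) pairs
```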