A preliminary image quality measure that attempts to account for the sensitivities of the human visual system (HVS) is described. The main sensitivities considered are the background illumination-level sensitivity and the spatial frequency sensitivity. Given a digitized image, the algorithm produces, among several other figures of merit, a plot of information content (IC) versus resolution. The IC for a given resolution is defined here as the sum of the weighted spectral components at that resolution. The HVS normalization is performed by first intensity-remapping the image with a monotone increasing function representing the background illumination-level sensitivity, followed by spectral filtering with an HVS-derived weighting function representing the spatial frequency sensitivity. The resulting quality measure is conveniently parameterized and interactive. It allows experimentation with the numerous parameters of the HVS model to determine the optimum set for which the highest correlation with subjective evaluations can be achieved.
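The two-stage normalization described above can be summarized in a short sketch. This is a minimal illustration only, assuming a 2-D grayscale image: the remapping function (`log1p`), the contrast-sensitivity weighting, and the frequency-band partition are placeholder assumptions standing in for the paper's actual HVS model, which the abstract does not specify.

```python
import numpy as np

def information_content_curve(image, remap=np.log1p, csf=None):
    """Sketch of an HVS-weighted information-content (IC) measure.

    Steps follow the abstract: (1) monotone intensity remapping standing in
    for background illumination-level sensitivity, (2) weighting of the image
    spectrum by a spatial-frequency sensitivity function, (3) summing the
    weighted spectral magnitudes per frequency band to give IC vs. resolution.
    """
    # 1. Monotone intensity remapping (log1p used here as a placeholder).
    remapped = remap(image.astype(float))

    # 2. Fourier spectrum and radial spatial frequency of each coefficient.
    spectrum = np.fft.fftshift(np.fft.fft2(remapped))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

    # Placeholder band-pass contrast-sensitivity weighting; the paper's
    # actual HVS-derived weighting function is not given in the abstract.
    if csf is None:
        csf = lambda f: (0.05 + f) * np.exp(-8.0 * f)
    weighted = np.abs(spectrum) * csf(radius)

    # 3. IC per resolution band: sum of weighted components inside each
    # annulus of normalized spatial frequency.
    n_bands = 32
    edges = np.linspace(0.0, radius.max(), n_bands + 1)
    ic = [weighted[(radius >= lo) & (radius < hi)].sum()
          for lo, hi in zip(edges[:-1], edges[1:])]
    return edges[1:], np.array(ic)
```

Plotting the returned IC values against the band edges gives the information-content-versus-resolution curve the abstract refers to; the remapping and weighting functions are the natural places to expose the interactive HVS parameters.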
A concept for processing hyperspectral data is described that would make data from an operational system routinely available to customers. Customers would not be required to be experts in spectral science. They would be offered data in a form readily usable by traditional image processing and Geographical Information Systems, with flexibility for application to their particular interests. The concept consists of an automated processing environment and a rigorous chain of algorithms that generate a variety of products orderable by end users. The proposed processing chain can support many users, generate products within a few hours, provide repeatable information content, and enable users to focus their expertise on their area of interest rather than on spectral analysis. Implementation of this concept would lead to a national standard for spectral data and products.
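As an illustration of how such an automated chain might be orchestrated, the sketch below threads a data cube through a fixed sequence of stages and retains each intermediate output as an orderable product. The stage names, stage bodies, and data representation are hypothetical; the abstract does not specify the actual algorithms in the chain.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Stage:
    # Hypothetical processing stage: maps one intermediate product to the next.
    name: str
    run: Callable[[dict], dict]

def build_chain(stages: List[Stage]) -> Callable[[dict], Dict[str, dict]]:
    """Run the stages in a fixed order, keeping every intermediate result so
    that different end-user products come from one repeatable processing run."""
    def process(raw_cube: dict) -> Dict[str, dict]:
        products, data = {}, raw_cube
        for stage in stages:
            data = stage.run(data)
            products[stage.name] = data  # each stage output is orderable
        return products
    return process

# Example wiring; the lambdas are stubs standing in for real algorithms.
chain = build_chain([
    Stage("radiometric_calibration", lambda d: {**d, "calibrated": True}),
    Stage("atmospheric_correction",  lambda d: {**d, "reflectance": True}),
    Stage("material_map",            lambda d: {**d, "classified": True}),
])
```

Fixing the stage order and parameters in one place is what would give the repeatable information content the concept calls for, since every customer order is produced by the same audited chain.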
The term image quality can, unfortunately, apply to anything from a public relations firm's discussion to a comparison between corner drugstores' film processing. If we narrow the discussion to optical systems, we clarify the problem somewhat, but only slightly. We are still faced with a multitude of image quality measures, all different and all couched in different terminology. Optical designers speak of MTF values, digital processors talk about summations of before-and-after image differences, pattern recognition engineers allude to correlation values, and radar imagers use side-lobe response values measured in decibels. Further complexity is introduced by terms such as information content, bandwidth, Strehl ratios, and, of course, limiting resolution. The problem is to compare these different yardsticks and try to establish some concrete ideas about evaluation of a final image. We need to establish the image attributes that are most important to perception of the image in question and then begin to apply the different system parameters to those attributes. This special issue is an attempt to discuss these topics and bring together viewpoints from different fields, allowing some interaction between researchers using different concepts of image quality.

The first paper, by I. Overington, is a review of much of his work over the past few years, which concerns the development of a visual model. In addition to covering his own work, the article also presents an excellent overview of research into vision and the attributes of imagery important to perception. The next two articles deal with subjective ratings of imagery. H. Snyder, J. Burke, et al. discuss their work on preparation of a database and imagery for subjective studies; they also report on subjective ratings of various image blurrings. The paper by R. Arguello, H. Kessler, and H. Seltner discusses subjective ratings of images produced from synthesized MTF shapes.

Turning to imaging systems and methods, we have H. Edgerton's paper on techniques of shadow imaging, including method, application, and error sources. This paper is followed by H. Pollehn's discussion of current methods of evaluation and specification of image intensifiers. P. Peters presents a paper on his work to model an electro-optical imaging system from end to end in order to simulate the image degradations arising from each element of the total chain. The paper by V. Kumar, D. Casasent, and H. Murakami represents an entirely different quality problem: pattern recognition work depends on the strength of the recognition, and improvement in interpretation results in improved image quality. Perhaps other disciplines will make similar quality improvements simply through data processing rather than system modification. The last article concerns an especially interesting area, imaging with nonvisible wavelengths. R. Mitchel and S. Marder review the field of synthetic aperture radar imaging, outlining the important considerations, including system parameters and image degradations. This sp...