Experiments were designed to investigate the effects of target and distractor heterogeneity on the threshold for detecting a color target in a search task. In the first two experiments, stimuli were chosen so that the target and distractor stimuli varied along one cardinal axis in color space, while the target differed from the distractors along another cardinal axis. The cardinal axis signaling the relevant target-distractor difference was consistent from trial to trial within an experiment. When observers searched for a color target among homogeneous distractors but the color of the target and distractors changed from trial to trial, there was a small increase in threshold. When the distractors within a display were heterogeneous and the target color varied from trial to trial, there was a larger and more consistent increase in threshold. Thus, varying stimuli along a cardinal axis other than the one that differentiates target and distractors can impair performance in visual search tasks. Further experiments showed that the presence of heterogeneous distractors had little or no effect on thresholds when location or color cues indicated that these stimuli were irrelevant to the task. The results suggest that the effect of heterogeneity in these experiments is attentional rather than sensory in nature.
Report number: AFRL-RH-WP-TR-2009-0015 (HPW/RHCV). Approved for public release; distribution unlimited. Cleared by 88 ABW/PA, 02/17/09; 88ABW-09-0522.

ABSTRACT: It is believed that the fusion of multiple different images into a single image should be of great benefit to warfighters engaged in a search task. Consequently, much research has focused on improving algorithms designed for image fusion. Many different fusion algorithms have already been developed; however, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research is to apply a visual performance-based assessment methodology to four algorithms specifically designed for fusion of multispectral digital images: a Principal Component Analysis based algorithm, a shift-invariant wavelet transform algorithm, a contrast-based algorithm, and pixel averaging. The methodology was developed to acquire objective human visual performance data as a means of evaluating the image fusion algorithms. Standard objective performance metrics (response time and error rate) were used to compare the fused images against two baseline conditions comprising each individual image used in the fused test images. Observers searched images for a military target hidden among foliage and then indicated in which quadrant of the screen the target was located using a spatial forced-choice paradigm. Response time and percent correct were measured for each observer.
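Of the four fusion approaches above, pixel averaging is the simplest baseline: each output pixel is the mean of the corresponding pixels across the co-registered source images. A minimal sketch (illustrative only; not the implementation assessed in the report):

```python
import numpy as np

def fuse_average(images):
    """Fuse co-registered single-band images by per-pixel averaging.

    The simplest fusion baseline: each output pixel is the mean of the
    corresponding input pixels across the source sensors/bands.
    """
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    return stack.mean(axis=0)

# Two toy 2x2 "sensor" images (hypothetical visible and IR intensities)
visible = np.array([[0.0, 1.0], [0.5, 0.25]])
infrared = np.array([[1.0, 1.0], [0.0, 0.75]])
fused = fuse_average([visible, infrared])
print(fused)  # [[0.5  1.  ] [0.25 0.5 ]]
```

Averaging preserves common structure but tends to wash out features present in only one band, which is one reason the report compares it against PCA-, wavelet-, and contrast-based schemes.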
Approved for public release; distribution is unlimited. Supplementary notes: published in the Proceedings of the SPIE Defense and Security Symposium; clearance number AFRL/WS-06-0712, cleared 15 March 2006.

ABSTRACT: While vast numbers of image-enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to six algorithms specifically designed to enhance the contrast of digital images. The image-enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato, and the multiscale Retinex algorithm described in Rahman, Jobson, and Woodell.
The methodology used in the assessment was developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. The basic approach is to use standard objective performance metrics, such as response time and error rate, to compare algorithm-enhanced images against two baseline conditions: original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm.
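Global histogram equalization, the first family of algorithms named in the abstract above, remaps gray levels through the image's normalized cumulative histogram so that output intensities spread across the full range. A minimal sketch for 8-bit grayscale images (illustrative; the study's three equalization variants are not specified here):

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Builds a lookup table from the normalized cumulative histogram so
    that output gray levels use the full [0, 255] range more evenly.
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()          # first nonzero CDF value
    # Standard equalization mapping, rescaled to 0..255; clip guards the
    # unused entries below the first occupied gray level
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast toy image: four gray levels crowded around 100
low_contrast = np.array([[100, 101], [102, 103]], dtype=np.uint8)
print(equalize_histogram(low_contrast))  # [[  0  85] [170 255]]
```

The toy input occupies only 4 of 256 gray levels; after equalization the same four pixels span the whole output range, which is exactly the contrast stretch the assessed algorithms aim to provide.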
An earlier experiment using a yes-no procedure with a search accuracy task [A.L. Nagy, G. Thomas, Distractor heterogeneity, attention, and color in visual search tasks, Vision Research, 43 (2003) 1541-1552] showed that observers could combine information in different cardinal color mechanisms to facilitate search performance. In the experiments reported here we attempted to replicate these results with a forced-choice procedure and tested three different models of the manner in which information in different feature coding mechanisms is combined. One model was a linear summing model in which signals in different mechanisms are linearly summed in a mechanism under the control of attention. The summed signals are used to guide attention to likely targets. The second model was a nonlinear selection model in which signals in one mechanism are used to select stimuli for attention. A decision is then based on signals generated by the selected stimuli in a mechanism other than the one that is used for selection. The third model was the linear separability model, which suggests that the chromaticity of the target stimulus must be separated from the chromaticities of the distractor stimuli by a straight line in a chromaticity diagram for efficient search. Results favored the nonlinear selection model over the linear summing model and the linear separability model.
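The linear separability constraint in the third model can be stated computationally: a target chromaticity is linearly separable from the distractor chromaticities exactly when it lies strictly outside their convex hull in the chromaticity diagram. A minimal 2-D sketch of that test (illustrative; not code from the study):

```python
def convex_hull(points):
    """Monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def linearly_separable(target, distractors):
    """True if a straight line separates the target chromaticity from all
    distractor chromaticities, i.e. the target lies strictly outside the
    distractors' convex hull."""
    hull = convex_hull(distractors)
    if len(hull) < 3:          # degenerate (collinear) distractors: treat as separable
        return True
    for i in range(len(hull)):
        a, b = hull[i], hull[(i + 1) % len(hull)]
        cr = (b[0]-a[0])*(target[1]-a[1]) - (b[1]-a[1])*(target[0]-a[0])
        if cr < 0:             # target is outside this hull edge
            return True
    return False

# Hypothetical chromaticity coordinates: three distractors, two candidate targets
distractors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(linearly_separable((1.0, 1.0), distractors))  # True: outside the hull
print(linearly_separable((0.2, 0.2), distractors))  # False: inside the hull
```

Under the linear separability model, only the first target should support efficient search; the nonlinear selection model favored by the results makes no such geometric requirement.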
While vast numbers of image-enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to develop a visual performance-based assessment methodology and apply it to assess three Retinex algorithms. The image-enhancing algorithms used in this study are the two algorithms described in Funt, Ciurea, and McCann [1] as McCann99 Retinex and Frankle-McCann Retinex, and the multiscale Retinex with color restoration (MSRCR) [2] algorithm. This paper discusses the methodology developed to acquire objective human visual performance data as a means of evaluating various image enhancement algorithms. The basic approach is to determine whether standard objective performance metrics, such as response time and error rate, improve when viewing the enhanced images versus the baseline, non-enhanced images. Four observers completed a visual search task using a spatial forced-choice paradigm. Observers had to search images for a target (a military vehicle) hidden among foliage and then indicate in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Future directions and the viability of this technique are also discussed.
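The two performance metrics used throughout these assessments, percent correct and response time, reduce to simple per-condition aggregates over trials. A hypothetical sketch of how such search-task data might be scored (field names are illustrative, not from the study):

```python
from collections import defaultdict

def score_by_condition(trials):
    """Aggregate search-task trials into percent correct and mean
    response time (correct trials only, a common convention).

    Each trial is a dict with illustrative keys:
      'condition' (e.g. 'enhanced' or 'baseline'),
      'rt' (response time in seconds), 'correct' (bool).
    """
    groups = defaultdict(list)
    for t in trials:
        groups[t['condition']].append(t)
    results = {}
    for cond, ts in groups.items():
        correct = [t for t in ts if t['correct']]
        pct = 100.0 * len(correct) / len(ts)
        mean_rt = (sum(t['rt'] for t in correct) / len(correct)
                   if correct else float('nan'))
        results[cond] = {'percent_correct': pct, 'mean_rt': mean_rt}
    return results

# Four hypothetical trials across two viewing conditions
trials = [
    {'condition': 'enhanced', 'rt': 0.9, 'correct': True},
    {'condition': 'enhanced', 'rt': 1.1, 'correct': True},
    {'condition': 'baseline', 'rt': 1.4, 'correct': True},
    {'condition': 'baseline', 'rt': 2.0, 'correct': False},
]
scores = score_by_condition(trials)
```

An enhancement algorithm "wins" under this methodology when its condition shows faster correct responses and/or higher percent correct than the baseline images.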