Twelve adults experienced in using cellular telephones participated in an investigation of driving while performing a communication task. They navigated a closed serpentine driving course that required constant driving activity. The communication task was responding to a verbal cognitive test battery, administered either by a passenger or via cellular telephone, consisting of sentence remembering with read-back and verbal puzzle solving. Baseline treatments were navigating the course without communication and responding to the test battery while parked. Throughout the tasks, subjects were prompted to report their current workload using the Subjective Workload Assessment Technique (SWAT). Driving speeds were significantly lower when subjects used the phone than when they conversed with the passenger, but the analysis revealed no difference in perceived workload between these conditions. Workload ratings were lower in the drive-only condition than when the driver used the phone.
The relationship of human target acquisition times and detection probabilities to electronically measured visual clutter was investigated. Ninety computer-generated scenes simulating infrared imagery, containing different levels of clutter and zero, one, two, or three targets, were produced. Targets were embedded in these scenes, counterbalanced for range and position. Global and local clutter were measured with both statistical-variance and probability-of-edge metrics. Thirty-three aviators, tankers, and infantry soldiers were shown still-video images of the 90 scenes and were instructed to search for targets. Analyses indicate differences between the aviators and tankers in search times and types of errors. Results of multiple regression analyses of global clutter, local clutter, range, target dimension, target complexity, number of targets, and experience on search times are given and discussed in terms of search strategies.
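The abstract names statistical-variance and probability-of-edge clutter metrics but does not define them. As background, a minimal sketch of how such metrics are commonly computed on a grayscale image (the block size and gradient threshold are illustrative assumptions, not values from the study):

```python
import numpy as np

def variance_clutter(image, block=8):
    """Statistical-variance clutter: RMS of per-block gray-level standard
    deviations over non-overlapping blocks (block size is an assumption)."""
    h, w = image.shape
    stds = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            stds.append(image[r:r + block, c:c + block].std())
    return float(np.sqrt(np.mean(np.square(stds))))

def probability_of_edge(image, threshold=10.0):
    """Probability-of-edge (POE) clutter: fraction of pixels whose gradient
    magnitude exceeds a threshold (threshold value is an assumption)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return float(np.mean(mag > threshold))
```

A uniform scene scores zero on both metrics, while scenes with more intensity structure score higher; "global" versus "local" clutter would correspond to applying such measures to the whole scene or to a neighborhood around each candidate target.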
ARL-TR-3632, December 2005. Distribution/Availability Statement: Approved for public release; distribution is unlimited.
Abstract: Mission demands have made the robotics collaboration operator control unit (OCU) into a relatively dynamic, demanding, cognitively complex system in which Soldiers must perform multiple tasks, such as controlling multiple robots and processing large amounts of information, in environments that sometimes contain high levels of noise. Research and modeling data indicate that audio display technologies would be very useful in OCU applications such as guiding visual display search. The purpose of this study was to examine the effectiveness of integrating auditory display technologies into visual search tasks such as those that occur in robotic OCUs. Independent variables were audio signal mapping scheme, type of verbal positional cue, and visual target azimuth.
Dependent variables were visual target search time and the National Aeronautics and Space Administration Task Load Index (NASA-TLX) workload rating of the target search task. Participants were 36 students (15 males and 21 females) from Harford Community College. The results indicated that the use of auditory signal mapping and verbal positional cues significantly reduced visual display search time and workload, and that positional cues mixed with specific audio mappings were the most efficient means of reducing search time. Specific design recommendations are made regarding the use of auditory signals in environments with narrow field-of-view visual displays.
Subject terms: auditory-assisted visual search; auditory displays.
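The abstract uses the NASA Task Load Index as its workload measure without describing its scoring. As background, the standard weighted TLX score is the mean of six 0-100 subscale ratings weighted by tallies from 15 pairwise comparisons; a minimal sketch (the example weights are illustrative, not from the report):

```python
SCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def tlx_workload(ratings, weights):
    """Weighted NASA-TLX score.

    ratings: dict of 0-100 ratings on the six subscales.
    weights: dict of tallies from the 15 pairwise comparisons (must sum to 15).
    Returns the weighted mean workload on a 0-100 scale.
    """
    if sum(weights[s] for s in SCALES) != 15:
        raise ValueError("pairwise-comparison weights must sum to 15")
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0
```

If every subscale is rated 60, the weighted score is 60 regardless of the weights, since the weights sum to the number of comparisons.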