Hundreds of articles and presentations have addressed problems and concerns with flight deck automation. Many have raised general concerns about the approaches taken to automation use philosophies and automation design. Others have addressed specific problems or concerns identified with particular designs or implementations of automation. The first phase of our research identified 114 human factors problems and concerns with flight deck automation (Funk, Lyall, & Riley, 1995). In the second phase of our project we used a wide variety of sources to locate and record evidence related to these problems and concerns. Because an issue is "[a] point of discussion, debate, or dispute ..." (Morris, 1969), we decided to change our terminology and refer to these problems and concerns as flight deck automation issues; this terminology is used for the remainder of this paper.

The sources we reviewed for evidence included accident reports, documents describing incident report studies, and documents describing scientific experiments, surveys, and other studies. We also conducted a survey of individuals with broad expertise related to human factors and flight deck automation (human factors scientists, aviation safety professionals, pilots, and others), and evaluated a related set of ASRS incident reports. We reviewed these sources for data and other objective information related to the issues. For each instance of evidence we qualitatively assessed the extent to which it supported one side of the issue or the other, and assigned a numeric strength rating between -5 and +5. We assigned a positive strength rating to evidence supporting the side of the issue suggested by its issue statement (supportive evidence) and a negative strength rating to evidence supporting the other side (contradictory evidence).

For example, consider the statement of issue 065: pilots may lose psychomotor and cognitive skills required for flying manually, or for flying non-automated aircraft, due to extensive use of automation. If we found evidence in a source indicating that pilots lose manual flying skills due to extensive use of automation (at least under some circumstances), we recorded the related excerpt from the source document and assigned this supportive evidence a positive rating, perhaps as great as +5. If we found evidence in a source indicating that pilots can and do maintain manual proficiency even with extensive use of automation (at least under some circumstances), we recorded the related excerpt and assigned this contradictory evidence a negative rating, perhaps as great as -5.

We developed detailed strength assignment guidelines for evidence from each type of information source. For example, in pilot surveys of automation issues, if at least 90 per cent of the respondents agreed with a statement consistent with an issue statement, we assigned a strength rating of +5. If at least 90 per cent were reported as disagreeing with a statement consistent with an issue statement, we assigned a strength rating of -5. During the process...
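To make the survey-evidence guideline concrete, the sketch below encodes the rule described above as a small function. Only the ±5 endpoints (at least 90 per cent agreement or disagreement) come from the text; the function name and the graded treatment of intermediate percentages are illustrative assumptions, since the full guideline table is not reproduced here.

```python
def survey_evidence_strength(pct_agree: float, pct_disagree: float) -> int:
    """Illustrative strength rating for pilot-survey evidence on an issue.

    Only the +/-5 endpoints (>= 90% agreement or disagreement) reflect the
    guideline stated in the text; the intermediate grading below is a
    hypothetical placeholder for the unpublished part of the guidelines.
    """
    if pct_agree >= 90:
        return 5    # strong supportive evidence
    if pct_disagree >= 90:
        return -5   # strong contradictory evidence
    # Hypothetical graded rating for intermediate survey results.
    if pct_agree >= pct_disagree:
        return min(4, round((pct_agree - pct_disagree) / 20))
    return max(-4, -round((pct_disagree - pct_agree) / 20))


# Example: 72% of respondents agreed and 18% disagreed with a statement
# consistent with the issue statement.
print(survey_evidence_strength(72, 18))  # -> 3 (illustrative value only)
```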
Research on automation bias and other heuristics and biases suggests that something about the nature or display of information in the automated environment may encourage non-rational cognitive processing and hinder the maintenance of coherence (rationality and consistency in diagnostic and judgment processes), making operators susceptible to coherence errors. The purpose of this study was to track pilot diagnosis and decision-making strategies for different types of problems as a function of three operational variables: a) the source (automated or other) of the initial indication of a problem; b) congruence versus inconsistency of the available information; and c) time pressure. Pilots responded to a series of scenarios on an interactive website by accessing relevant information until they could make a diagnosis and decide what to do. Pilots under time pressure took less time to reach a diagnosis, checked fewer pieces of information, and performed fewer double-checks of information. Diagnoses were more accurate when pilots experienced no time pressure, and diagnosis accuracy was significantly higher when information was congruent than when it was conflicting. Implications for training and automation use are discussed.