An experiment on adaptive automation is described. Reliability of automated fault diagnosis, mode of fault management (manual vs. automated), and fault dynamics affected variables including root mean square error, avoidance of accidents and false shutdowns, subjective trust in the system, and operator self-confidence. Results are discussed in relation to levels of automation, models of trust and self-confidence, and theories of human-machine function allocation. Trust in automation, but not self-confidence, was strongly affected by automation reliability. Operators had difficulty controlling the continuous process only while performing fault management, but they could prevent unnecessary shutdowns. The results suggest that in time-critical situations final authority for decisions and action must be allocated to automation.
Function allocation is the design decision that determines which functions are to be performed by humans and which are to be performed by machines to achieve the required system goals, and it is closely related to the issue of automation. Some traditional strategies of function allocation include (a) assigning each function to the most capable agent (either human or machine), (b) allocating to the machine every function that can be automated, and (c) finding an allocation scheme that ensures economic efficiency. However, such "who does what" decisions are not always appropriate from a human factors viewpoint. This chapter clarifies why "who does what and when" considerations are necessary, and it explains the concept of adaptive automation, in which the control of functions shifts between humans and machines dynamically, depending on environmental factors, operator workload, and performance. Who decides when the control of a function must be shifted? That is one of the most crucial issues in adaptive automation. Letting the computer hold authority may conflict with the principle of human-centered automation, which claims that the human must be maintained as the final authority over the automation. Qualitative discussions cannot solve the authority problem. This chapter demonstrates the need for quantitative investigations with mathematical models, simulations, and experiments for a better understanding of the authority issue.

Starting with the concept of function allocation, this chapter describes how the concept of adaptive automation was invented. The concept of levels of automation is used to explain interactions between humans and machines. Sharing and trading are distinguished to clarify the types of human-automation collaboration. Algorithms for implementing adaptive automation are categorized into three groups, and comparisons are made among them.
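The dynamic shifting of control that the chapter describes can be sketched as a simple measurement-based trading rule. The function name, the workload measure, and the thresholds below are hypothetical illustrations, not taken from the chapter; real adaptive automation would draw on richer environmental and performance inputs.

```python
def allocate(workload: float, high: float = 0.8, low: float = 0.4) -> str:
    """Decide which agent should control a function, given a normalized
    operator-workload estimate in [0, 1].

    A hysteresis band between `low` and `high` prevents rapid
    back-and-forth trading of control; inside the band, whoever
    currently holds the function keeps it.
    """
    if workload >= high:
        return "automation"   # trade control to the machine when overloaded
    if workload <= low:
        return "human"        # return control when workload is manageable
    return "unchanged"        # hysteresis band: keep the current allocation

# Example: as workload escalates, control trades to the automation.
print([allocate(w) for w in (0.2, 0.6, 0.9)])
```

Even this toy rule raises the authority question the chapter poses: the computer, not the human, decides when the trade happens.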
Benefits and costs of adaptive automation, in relation to decision authority, trust-related issues, and human-interface design, are discussed with some examples.
The problem of complacency is analysed, and it is shown that previous research that claims to show its existence is defective, because the existence of complacency cannot be proved unless optimal behaviour is specified as a benchmark. Using gedanken experiments, it is further shown that, in general, not even optimal monitoring can detect all signals. Complacency is concerned with attention (monitoring, sampling), not with detection, and there is little evidence for complacent behaviour. To claim that behaviour is complacent is to blame the operator for failure to detect signals. This is undesirable, since so-called complacent behaviour may rather be the fault of poor system design.
Driver drowsiness is a common cause of fatal traffic accidents. In this study, a driver assistance system with a dual control scheme is developed; it attempts to perform simultaneously the safety control of the vehicle and identification of the driver's state. The assistance system implements partial control in the event of lane departure and gives the driver the chance to take the needed action voluntarily. If the driver fails to implement the needed steering action within a limited time, the assistance system judges that "the driver's understanding of the given situation is incorrect" and executes the remaining control. We used a driving simulator equipped with the assistance system to investigate its effectiveness in identifying driver drowsiness and preventing lane departure accidents. Twenty students participated in three trials on a straight expressway, and they were required to implement only lateral control. We hypothesized that a participant cannot implement the action needed to maintain safety when he/she falls asleep, and that in such a case the assistance system will implement the safety control repeatedly. The assistance system assisted the participants almost only when it was really needed, such as when their eyelids were closed. The results validated the hypothesis, showing that the assistance system implemented the safety control repeatedly when a participant fell asleep. In addition, the algorithms used by the assistance system to determine whether the driver can continue driving were evaluated through a leave-one-out cross-validation, and they proved effective for identifying driver drowsiness.
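The leave-one-out cross-validation used to evaluate the drowsiness-identification algorithms can be sketched as follows. The data and the threshold classifier here are synthetic illustrations, not the study's actual features or decision rule; the point is only the evaluation procedure, in which each participant's data is held out in turn while the remainder trains the classifier.

```python
def fit_threshold(samples):
    """Toy classifier: place a threshold midway between the class means."""
    awake = [x for x, y in samples if y == 0]
    drowsy = [x for x, y in samples if y == 1]
    return (sum(awake) / len(awake) + sum(drowsy) / len(drowsy)) / 2

def loocv_accuracy(samples):
    """Leave-one-out cross-validation: for each sample, train the
    classifier on the remaining n-1 samples and test on the held-out one."""
    hits = 0
    for i, (x, y) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        threshold = fit_threshold(train)
        predicted = 1 if x > threshold else 0
        hits += (predicted == y)
    return hits / len(samples)

# Synthetic eyelid-closure ratios: low while awake (label 0), high while drowsy (1).
data = [(0.10, 0), (0.15, 0), (0.20, 0), (0.70, 1), (0.80, 1), (0.90, 1)]
print(loocv_accuracy(data))  # separable toy data -> 1.0
```

Because every test sample comes from a fold that never saw it during training, LOOCV gives an unbiased estimate of how the algorithm would perform on a new driver, which is why it suits the small participant pool described above.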