At work and in our personal lives we often need to remember to perform intended actions at some point in the future, an ability referred to as prospective memory. Individuals sometimes forget to perform intentions in safety-critical work contexts, and holding intentions can also interfere with ongoing tasks. We applied theories and methods from the experimental literature to test the effectiveness of external aids in reducing prospective memory error and costs to ongoing tasks in an air traffic control simulation. Participants were trained to accept and hand off aircraft and to detect aircraft conflicts. For the prospective memory task, participants were required to substitute alternative actions for routine actions when accepting target aircraft. Across two experiments, external display aids were provided that presented the details of target aircraft and the associated intended actions. We predicted that aids would be effective only if they provided information diagnostic of target occurrence; in this study we examined the utility of aids that directly cued participants when to allocate attention to the prospective memory task. When aids were set to flash at the moment the prospective memory target aircraft needed to be accepted, prospective memory error and costs to the ongoing tasks of aircraft acceptance and conflict detection were reduced. In contrast, aids that did not alert participants specifically when target aircraft were present provided no advantage over having no aids at all. These findings have practical implications for the relative utility of automated external aids in occupations where individuals monitor multi-item dynamic displays.
Objective: Examine the effects of decision risk and automation transparency on the accuracy and timeliness of operator decisions, automation verification rates, and subjective workload.
Background: Decision aids typically benefit performance but can provide incorrect advice due to contextual factors, creating the potential for automation disuse or misuse. Decision aids can reduce an operator’s manual problem evaluation, and it can also be strategic for operators to minimize verification of automated advice in order to manage workload.
Method: Participants assigned the optimal unmanned vehicle to complete missions. A decision aid provided advice but was not always reliable. Two levels of decision aid transparency were manipulated between participants. The risk associated with each decision was manipulated using a financial incentive scheme. Participants could use a calculator to verify automated advice; however, doing so incurred a financial penalty.
Results: For high- compared with low-risk decisions, participants were more likely to reject incorrect automated advice, were more likely to verify automation, and reported higher workload. Increased transparency did not lead to more accurate decisions and did not affect workload, but it decreased automation verification and eliminated the increased decision time associated with high decision risk.
Conclusion: Increased automation transparency was beneficial in that it decreased both automation verification and decision time. The increased workload and automation verification for high-risk missions are not necessarily problematic given the improved rate of correctly rejecting incorrect automated advice.
Application: The findings have potential application to the design of interfaces that improve human–automation teaming and to anticipating the impact of decision risk on operator behavior.