We studied the transparency of automated tools used during emergency operations in commercial aviation. Transparency (operationalized as increasing levels of explanation accompanying an automated tool’s recommendation) was manipulated to evaluate how transparent interfaces influence pilot trust in an emergency landing planning aid. We conducted a low-fidelity study in which commercial pilots interacted with simulated recommendations from NASA’s Emergency Landing Planner (ELP) that varied in their associated levels of transparency. Results indicated that pilot trust in the ELP was influenced by the level of transparency provided in its human–machine interface. Design recommendations for automated systems are discussed.
This case study analyzes the factors that influence trust and acceptance among users (in this case, test pilots) of the Air Force’s Automatic Ground Collision Avoidance System. Our analyses revealed that test pilots’ trust depended on a number of factors, including the development of a nuisance-free algorithm, the design of fly-up evasive maneuvers consistent with a pilot’s preferred behavior, and the use of training to assess, demonstrate, and verify the system’s reliability. These factors are consistent with the literature on trust in automation and could inform best practices for automation design, testing, and acceptance.