Objective: A series of experiments examined human operators' strategies for interacting with highly (93%) reliable automated decision aids in a binary signal detection task.

Background: Operators often interact with automated decision aids in a suboptimal way, achieving performance levels lower than predicted by a statistically ideal model of information integration. To better understand operators' inefficient use of decision aids, the current study compared participants' automation-aided performance levels to the predictions of seven statistical models of collaborative decision making.

Method: Participants performed a binary signal detection task that asked them to classify random dot images as either blue- or orange-dominant. They made their judgments either unaided or with assistance from a 93%-reliable automated decision aid that provided either graded (Experiments 1 and 3) or binary (Experiment 2) cues. Analysis compared automation-aided performance to the predictions of seven statistical models of collaborative decision making, including a statistically optimal model (Sorkin & Dai, 1994) and Robinson and Sorkin's (1985) contingent criterion model.
Results and conclusion: Automation-aided sensitivity hewed closest to the predictions of the two least efficient collaborative models, well short of statistically ideal levels. Performance was similar whether the aid provided graded or binary judgments. Model comparisons identified potential strategies by which participants integrated their judgments with the aid's.
Application: Results lend insight into participants' automation-aided decision strategies and provide benchmarks for predicting automation-aided performance levels.

Keywords: human-automation interaction, signal detection theory, decision-making strategies, contingent criterion model

Benchmarking Aided Decision Making in a Signal Detection Task

Human operators in everyday and professional contexts work with the assistance of automated decision aids. The assisted tasks often take the form of binary signal detection judgments, which ask a decision maker to classify potentially ambiguous states of the world into either of two discrete categories (Green & Swets, 1966; Macmillan & Creelman, 2005). A credibility assessment aid, for instance, might help organizational decision makers distinguish deceptive from honest responses when questioning interviewees in negotiations or investigations (Jensen, Lowry, & Jenkins, 2011). Analogously, a combat identification system might help soldiers distinguish friends from foes on the battlefield (Wang, Jamieson, & Hollands, 2009). Ideally, assistance from an automated aid will help the human operator to achieve higher levels of sensitivity, the ability to distinguish between states of the world. But like the human operator, an automated decision aid performing a signal detection task is typically required to render judgments based on incomplete or uncertain data. The aid's sensitivity will therefore be imperfect, just as the human operato...