Objective: The authors evaluated the validity of trust in automation and information technology (IT) suspicion by examining their factor structure and relationship with decision confidence. Background: Research on trust has burgeoned, yet the dimensionality of trust remains elusive. Some researchers suggest that trust is a unidimensional construct, whereas others believe it is multidimensional. Additionally, novel constructs, such as IT suspicion, have yet to be distinguished from trust in automation. Research is needed to examine the overlap between these constructs and to determine the dimensionality of trust in automation. Method: Participants (N = 72) engaged in a computer-based convoy scenario involving an automated decision aid. The aid fused real-time sensor data and provided route recommendations to participants, who selected a route based on (a) a map with historical enemy information, (b) sensor inputs, and (c) automation suggestions. Measures of trust in automation and IT suspicion were administered after individuals interacted with the automation. Results: Results indicated three orthogonal factors: trust, distrust, and IT suspicion. Each variable was explored as a predictor of decision confidence. Distrust and trust evidenced unique influences on decision confidence, albeit at different times: higher distrust related to less confidence, whereas higher trust related to greater confidence. Conclusion: The current study found that trust in automation was best characterized by two orthogonal dimensions (trust and distrust). Both trust and distrust were independent of IT suspicion, and both uniquely predicted decision confidence. Application: Researchers may consider using separate measures for trust and distrust in future studies.
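For readers who want to explore a comparable analysis, the following is a minimal, purely illustrative sketch (not the authors' procedure or code) of how a three-factor orthogonal structure and its relationship with decision confidence could be examined in Python. The file names, column names, and the choice of the factor_analyzer and statsmodels packages are assumptions for illustration only.

```python
# Illustrative sketch only -- not the authors' analysis. Assumes item-level
# responses for the trust, distrust, and IT-suspicion scales in one CSV and a
# decision-confidence outcome in another (hypothetical files and columns).
import pandas as pd
import statsmodels.api as sm
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("scale_items.csv")                     # hypothetical item responses
confidence = pd.read_csv("confidence.csv")["confidence"]   # hypothetical outcome

# Extract three factors with a varimax rotation, which keeps factors orthogonal.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)
print(fa.loadings_)  # inspect which items load on which factor

# Use factor scores as predictors of decision confidence.
scores = pd.DataFrame(fa.transform(items),
                      columns=["trust", "distrust", "it_suspicion"])
model = sm.OLS(confidence, sm.add_constant(scores)).fit()
print(model.summary())  # unique contributions of trust and distrust
```

A varimax rotation is used in this sketch because it constrains the extracted factors to remain uncorrelated, mirroring the orthogonal trust, distrust, and IT suspicion factors described above.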
Objective: The current study examined human-human reliance during a computer-based scenario in which participants interacted with a human aid and an automated tool simultaneously. Background: Reliance on others is complex, and few studies have examined human-human reliance in the context of automation. Past research found that humans are biased in their perceived utility of automated tools, viewing them as more accurate than humans. Prior reviews have postulated differences in human-human versus human-machine reliance, yet few studies have examined such reliance when individuals are presented with divergent information from different sources. Method: Participants (N = 40) engaged in the Convoy Leader experiment. They selected a convoy route based on explicit guidance from a human aid and information from an automated map. Subjective and behavioral indices of human-human reliance were assessed. Perceptions of risk were manipulated by creating three scenarios (low, moderate, and high) that varied in the amount of vulnerability (i.e., potential for attack) associated with the convoy routes. Results: Results indicated that participants reduced their behavioral reliance on the human aid when faced with higher-risk decisions (suggesting increased reliance on the automation); however, there were no reported differences in intentions to rely on the human aid relative to the automation. Conclusion: The current study demonstrated that when individuals are provided information from both a human aid and automation, their reliance on the human aid decreased during high-risk decisions. Application: This study adds to a growing understanding of the biases and preferences that exist during complex human-human and human-machine interactions.
The present study examined the effects of mood on trust in automation over time. Participants (N = 72) were induced into either a positive or negative mood and then completed a computer-based task that involved the assistance of an automated aid. Results indicated that mood had a significant impact on initial trust formation, but this impact diminished as time and interaction with the automated aid increased. Implications regarding trust propensity and trustworthiness are discussed, as well as the dynamic effects of trust over time.
No abstract
Purpose - The purpose of this paper is to present an empirical examination of the convergent validity of the two foremost measurement methods used to assess adaptive performance: subjective ratings and objective task scores. Predictors of adaptive performance have been extensively examined, but limited research attention has been directed at adaptability itself as a validated construct within the job performance domain. Due to this neglect, it is unclear if researchers can generalize findings across criterion measurement methods. Design/methodology/approach - Teams of five (275 individuals) performed a computer-based task that involved a series of disruptions requiring an adaptive response. In addition to post-disruption task scores, subjective self- and peer-ratings of adaptive performance were collected. Findings - Results did not indicate strong support for the convergent validity of subjective and objective measures. Although the measures were significantly related (r = 0.47, p < 0.001) and shared a relatively similar correlation pattern in the multitrait-multimethod matrix, 78 percent of the variance between measures was unexplained. Research limitations/implications - Given the goal of understanding "job" performance, results should be confirmed for actual jobs where adaptive performance is imperative (e.g. emergency response, multicultural teams). Practical implications - These findings should serve as a warning that the construct validity of adaptive performance has yet to be fully established, and previous research results should be interpreted cautiously as generalizations about adaptive performance may be limited by the particular measures used to assess the construct. Originality/value - This study was unique in its examination of both subjective and objective measures of adaptive performance. The findings of the present study highlight the need for sound theory to support the adaptive performance construct.
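As a quick arithmetic check on the figure reported above, the shared variance implied by the correlation is r² = 0.47² ≈ 0.22, so roughly 1 − r² ≈ 0.78, or 78 percent, of the variance between the two measures is left unexplained.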