As technological advancements and lowered costs make self-driving cars available to more people, it becomes important to understand the dynamics of human-automation interaction for safety and efficacy. We used a dynamical approach to examine data from a previous study on simulated driving with an automated driving assistant. To maximize effect size in this preliminary study, we focused the current analysis on the two lowest- and two highest-performing participants. Our visual comparisons examined utilization of the automated system and the impact of perturbations. Low-performing participants toggled between automation and manual control, maintaining reliance on either the automation or themselves for longer periods of time. High-performing participants, by contrast, used the automation briefly and consistently throughout the driving task. Participants who displayed an early understanding of automation capabilities opted for tactical use. Further exploration of individual differences and automation usage styles will help clarify the optimal human-automation-team dynamic and increase safety and efficacy.
Risk has been a key factor influencing trust in human-automation interactions, though there is no unified tool for studying its dynamics. We provide a framework for defining and assessing the relative risk of automation usage through performance dynamics and apply this framework to a dataset from a previous study. Our approach allows us to explore how operators' ability and different automation conditions impact performance and relative risk dynamics. Our results on performance dynamics show that, on average, operators perform better (1) using automation that is more reliable and (2) using partial automation (more workload) than full automation (less workload). Our analysis of relative risk dynamics indicates that automation with higher reliability is associated with higher relative risk, suggesting that operators are willing to take more risk with more reliable automation. Conversely, when the reliability of automation is lower, operators adapt their behavior, resulting in lower risk.
The decision process of engaging or disengaging automation has been termed reliance on automation, and it has been widely analyzed as a summary measure of automation usage rather than as a dynamic measure. We provide a framework for defining temporal reliance dynamics and apply it to a dataset from a previous study. Our findings show that (1) the higher the reliability of an automated system, the greater the reliance over time; and (2) the additional workload created by the automation type does not significantly affect operators' reliance dynamics in high-reliability systems, but it does produce greater reliance in low-reliability systems. Furthermore, on average, operators with low performance make fewer decision changes and prefer to stick with their decision to use automation even when it is not performing well. Operators with high performance, on average, change decisions more frequently, and therefore their automation usage periods are shorter.
Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at comprehending and examining it. Although researchers have been developing models of the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insight into the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches and their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel, dynamic approach to modeling trust in automation, emphasizing the importance of incorporating different timescales into measurable components. Given the complex nature of trust in automation, we also suggest combining machine learning and dynamic modeling approaches, as well as incorporating physiological data.