Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study ($n=80$) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
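To make the estimation approach concrete, the sketch below shows a minimal Kalman filter over a scalar latent trust state driven by the three sensed behaviors the abstract names. All matrices, noise covariances, and the `TrustEstimator` class are illustrative assumptions, not the parameters identified from the paper's user study.

```python
import numpy as np

# Minimal sketch of a Kalman filter trust estimator, assuming a scalar
# latent trust state and a 3-dimensional observation vector built from
# the behaviors named in the abstract: eye-gaze behavior, automation
# usage time, and non-driving-related-task (NDRT) performance.
# All numeric values below are illustrative placeholders.

class TrustEstimator:
    def __init__(self):
        self.x = np.array([[0.5]])            # trust estimate, nominally in [0, 1]
        self.P = np.array([[1.0]])            # estimate covariance
        self.A = np.array([[0.98]])           # trust dynamics (slow decay toward 0)
        self.Q = np.array([[1e-3]])           # process noise covariance
        # Observation model: each behavior is a noisy linear read-out of trust.
        self.H = np.array([[0.8],             # eye-gaze signal
                           [1.0],             # automation usage fraction
                           [0.9]])            # NDRT performance score
        self.R = np.diag([0.05, 0.02, 0.08])  # measurement noise covariance

    def step(self, z):
        """One predict/update cycle; z is the 3x1 sensed-behavior vector."""
        # Predict step: propagate trust through the assumed dynamics.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # Update step: correct the prediction with the sensed behaviors.
        y = z - self.H @ x_pred                    # innovation
        S = self.H @ P_pred @ self.H.T + self.R    # innovation covariance
        K = P_pred @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = x_pred + K @ y
        self.P = (np.eye(1) - K @ self.H) @ P_pred
        return float(self.x[0, 0])

estimator = TrustEstimator()
trust = estimator.step(np.array([[0.6], [0.7], [0.55]]))
print(f"estimated trust: {trust:.3f}")
```

Each call to `step` fuses one batch of behavioral measurements into the running trust estimate, which is what allows estimation to track trust over successive driver-system interactions.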
Trust in automated driving systems is crucial for effective interaction between drivers and (semi)autonomous vehicles. Drivers who do not trust the system appropriately are unable to leverage its benefits. This study presents a mixed-design user experiment in which participants performed a non-driving task while traveling in a simulated semiautonomous vehicle with forward collision alarm and emergency braking functions. Occasionally, the system missed obstacles or provided false alarms. We varied these system error types as well as road shapes and measured the effects of these variations on trust development. Results reveal that misses are more harmful to trust development than false alarms, and that these effects are strengthened by operation on risky roads. Our findings provide additional insight into the development of trust in automated driving systems and are useful for the design of such technologies.
Automated vehicles (AVs) that intelligently interact with drivers must build a trustworthy relationship with them. A calibrated level of trust is fundamental for the AV and the driver to collaborate as a team. Techniques that allow AVs to perceive drivers' trust from drivers' behaviors and react accordingly are, therefore, needed for context-aware systems designed to avoid trust miscalibrations. This letter proposes a framework for the management of drivers' trust in AVs. The framework is based on the identification of trust miscalibrations (when drivers undertrust or overtrust the AV) and on the activation of different communication styles to encourage or warn the driver when deemed necessary. Our results show that the management framework is effective, increasing (decreasing) the trust of undertrusting (overtrusting) drivers and reducing average trust miscalibration time periods by approximately 40%. The framework is applicable to the design of SAE Level 3 automated driving systems and has the potential to improve the performance and safety of driver-AV teams.
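As a rough illustration of the management logic this letter describes, the sketch below compares an estimated trust level against a calibrated band around the AV's current capability and selects a communication style accordingly. The thresholds, style names, and the `select_style` function are hypothetical assumptions, not the letter's implementation.

```python
from enum import Enum

# Hedged sketch of miscalibration detection: trust is considered
# calibrated when it lies within a band around the AV's capability;
# outside that band, a corrective communication style is chosen.

class Style(Enum):
    NEUTRAL = "neutral"       # trust is calibrated; no intervention
    ENCOURAGE = "encourage"   # undertrust: promote appropriate reliance
    WARN = "warn"             # overtrust: caution the driver

def select_style(trust_estimate: float, capability: float, band: float = 0.1) -> Style:
    """Map an estimated trust level to a communication style."""
    if trust_estimate < capability - band:
        return Style.ENCOURAGE   # driver trusts less than the AV warrants
    if trust_estimate > capability + band:
        return Style.WARN        # driver trusts more than the AV warrants
    return Style.NEUTRAL

# Example: an AV judged 70% capable, with driver trust estimated at 0.45,
# would trigger the encouraging communication style.
print(select_style(0.45, 0.70))  # Style.ENCOURAGE
```

Running this selection continuously against a live trust estimate is one plausible way to bound the time a driver spends in an undertrusting or overtrusting state, consistent with the miscalibration-period reduction the letter reports.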