Modelling humans' trust in robots is critical during human-robot interaction (HRI) to avoid under- or over-reliance on robots. Currently, it is challenging to calibrate trust in real time, and consequently there is limited work on calibrating humans' trust in robots in HRI. In this paper, we describe a mathematical model that emulates the three-layered (initial, situational, learned) framework of trust, with the aim of estimating humans' trust in robots in real time. We evaluated the trust model in an experimental setup in which participants played a trust game across four sessions. We validated the model using linear regression analysis, which showed that the trust perception score (TPS) and the interaction session predicted the trust modelled score (TMS) computed by the trust model. We also show that TPS and TMS did not change significantly from the second to the fourth session; however, TPS and TMS captured in the last session increased significantly over the first session. The described work is an initial effort to model the three layers of humans' trust in robots in a repeated HRI setup, and it requires further testing and extension to improve its robustness across settings.
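A minimal sketch of the validation analysis described above, for illustration only: it fits an ordinary least squares regression of TMS on TPS and session number. All variable names, scales, and data here are hypothetical assumptions, not taken from the paper, and the paper's actual analysis may account for repeated measures differently.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: TPS and TMS are assumed to lie on a 0-1 scale and
# sessions are indexed 1-4; none of these values come from the paper.
rng = np.random.default_rng(0)
n_participants = 20
session = np.tile(np.arange(1, 5), n_participants)        # 4 sessions per participant
tps = rng.uniform(0.4, 0.9, size=session.size)            # self-reported trust score
tms = 0.5 * tps + 0.05 * session + rng.normal(0, 0.05, session.size)

# OLS regression TMS ~ TPS + session, mirroring the reported predictors.
X = sm.add_constant(np.column_stack([tps, session]))
fit = sm.OLS(tms, X).fit()
print(fit.summary())
```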
CCS CONCEPTS: • Human-centered computing → Human robot interaction; User studies; • Computer systems organization → Robotics.