In supervisory control, operators are expected to monitor automation and to intervene when opportunities arise to improve system productivity or when faults develop that the automation cannot manage. Central to how humans interact with automation is the degree to which they trust the system to perform well and to handle unforeseen events. This paper summarises recent laboratory experiments and theoretical models, both quantitative and qualitative, of the dynamics of trust between humans and machines, and discusses the calibration of trust and the problem of allocating responsibility for control between human and machine.