People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents; that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc and have not been formally related to each other or to formal trust models. This paper presents a survey of algorithmic assurances, i.e., programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent's core functionality, with seven notable classes ranging from integral assurances (which impact an agent's core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.