Designing trustworthy algorithmic decision-making systems is a central goal of system design. It is equally crucial that external parties can adequately assess the trustworthiness of such systems. Ultimately, this should lead to calibrated trust: trustors trust and distrust the system in proportion to its actual trustworthiness. Yet the process through which trustors assess a system's actual trustworthiness and arrive at their perceived trustworthiness remains underexplored. Drawing on psychological theory about the interpersonal assessment of human characteristics, we outline a two-level "trustworthiness assessment" model. On the micro level, trustors assess system trustworthiness using cues. On the macro level, trustworthiness assessments propagate between trustors: one stakeholder's assessment of a system affects other stakeholders' assessments of the same system. This paper contributes a theoretical model that advances understanding of how the trustworthiness of algorithmic systems is assessed. It can be used to inform system design, stakeholder training, and regulation.