Can we trust robots? Responding to the literature on trust and e-trust, this paper asks whether the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, it is argued that although robots are neither human nor mere tools, we retain sufficient functional, agency-based, appearance-based, social-relational, and existential criteria for evaluating trust in robots. It is also argued that such evaluations must be sensitive to cultural differences, which shape how we interpret these criteria and how we think about trust in robots. Finally, it is suggested that when it comes to creating conditions under which humans can trust robots, fine-tuning human expectations and robotic appearances is advisable.