Healthcare robots enable practices that seemed far-fetched in the past. Robots might alleviate the loneliness that the elderly often experience; they may help wheelchair users walk again, or help blind people navigate. European Institutions, however, acknowledge that human contact is an essential aspect of personal care and that the introduction of robots could dehumanize caring practices. Such instances of human-robot interaction raise the question of the extent to which the use and development of robots for healthcare applications can challenge the dignity of users. In this article, therefore, we explore how different robot applications in the healthcare domain support individuals in achieving 'dignity' or put it under pressure. We argue that because healthcare robot applications are novel, their associated risks and impacts may be unprecedented and unknown, triggering the need for a conceptual instrument that is binding yet flexible. In this respect, since safety rules and data protection are often criticized for lacking flexibility, and technology ethics for lacking enforceability, we suggest human dignity, the inviolable value upon which all fundamental rights are grounded, as the overarching governance instrument for robotics.