Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often grounding their arguments in decades of conceptual analysis by scholars who have studied trust. In this paper, I describe the historical-philosophical roots of their objection and its premise that trust entails a human quality that technologies lack. I then review existing criticisms of 'Trustworthy AI' and the consequence of ignoring them: if the concept of 'Trustworthy AI' continues to be used, we risk attributing responsibilities to agents that cannot be held responsible and, consequently, eroding the social structures that underpin accountability and liability. Nevertheless, despite suggestions to shift the paradigm from 'Trustworthy AI' to 'Reliable AI', I argue that, realistically, the former concept will continue to be used. I end by arguing that, ultimately, AI ethics is also about power, social justice, and scholarly activism. I therefore propose that community-driven and social justice-oriented ethicists of AI and trust scholars (a) further focus on democratic aspects of trust formation; and (b) draw attention to critical social aspects highlighted by phenomena of distrust. In this way, it will be possible to further reveal shifts in power relations, challenge unfair status quos, and suggest meaningful ways to safeguard citizens' interests.