Recent years have seen an increase in artificial intelligence (AI) capabilities and incidents. Correspondingly, there has been an influx of government strategies, panels, dialogues and policy papers, including efforts to regulate and standardize AI systems [12,20,37,52]. A first step in most of these efforts is to delineate the scope of the resulting document, typically by either outlining a range of standard technical definitions of AI [76,85] or referencing existing scholarly work [73]. After defining their scope, many policy documents published by governments delve deeper into the 'type' of AI they wish to solicit from industry players and deploy nationally or globally. This largely serves to ensure that the strategies, policy discussions and AI-related milestones sketched within these documents are guided by a 'north star', or overarching goal. The north star should be comprehensible to all who read and implement the document: describing it allows a non-technical audience to follow and partake in the relevant policy discussions, though it does not replace technical definitions. Although more could be said about why this is being done and whether it is sensible, such discussion is outside the scope of this paper. Instead, I focus on and contextualize some of these 'north star' definitions themselves. In particular, I explore one of the most prominent recent descriptions, the EU's concept of "trustworthy AI", and explain its background, its international effects and its drawbacks in more depth. What is in a name? What is in "trustworthy AI"?