Transparency in artificial intelligence (AI) can mean many things, yet it is currently a central focus of both scientific and regulatory attention. We seek to critically unpack this conceptual vagueness, a task made particularly pressing by the recent emphasis on transparency in much of AI policy. To this end, we structure our analysis of AI transparency around four facets. First, (1) explainability (XAI) has become an expanding field within AI, which we argue needs to be complemented by a more explicit focus on (2) the mediation of AI systems' functionality as a communicated artefact. Furthermore, the policy discourse on AI underscores the importance of (3) literacy; drawing on the rich literacy literature, we show both promising and troubling consequences of this emphasis. Lastly, we unpack transparency as a form of governance within a (4) legal framework that encompasses a structure of trade-offs. Through these four facets, we aim to bring greater clarity to the multifaceted concept of transparency in AI.