With the rise of generative AI, there has been a recent push to disclose whether content is produced by AI. However, it is not clear which terms such labels should use. Even what counts as AI-generated is unclear, since there is a continuum of content types with varying levels of algorithmic intervention. Furthermore, it is unclear how members of the general public understand the various terms used in conjunction with generative AI. In this paper, we therefore investigate how the public understands the mapping between ten potential labeling terms and fifteen types of content that vary in the extent to which they are AI-generated and the extent to which they are potentially misleading. To do so, we conduct a study with N=2038 Americans (quota-sampled to the national distribution on age, gender, ethnicity, and geographic region), as well as replications with samples from Mexico and Brazil (with translated terms; N=825 and N=839, respectively; quota-matched to the national distributions on age and gender). Participants are randomly assigned one of the ten terms and then indicate which pieces of content they consider that term to apply to. We ask which terms achieve the highest classification accuracy (i.e., are applied to content that should be labeled and not applied to content that should not be labeled). We find that different terms satisfy different labeling goals. Across all three countries, ``AI Generated'' is one of the terms most consistently associated by participants with content that is generated using AI, regardless of whether that content is misleading; it is thus not consistently associated with misleading content. Conversely, ``Manipulated'' and ``Not real'' are two of the terms most consistently ascribed to content that is misleading, regardless of whether that content was generated using AI; they are thus not consistently associated with AI-generated content. Interestingly, ``Artificial'' performs fairly well at both classification tasks in the US and Brazil (although less so in Mexico). Finally, while labels using any of the ten terms evoke at least some negative feeling toward the content and the poster among US participants, numerous terms (including ``AI Generated'' and ``Artificial'') have positive associations on average for participants in Brazil and Mexico. These results have important implications for how and where generative AI disclosure is implemented, and they suggest that platforms and civil society must decide carefully what their objective is for such disclosures.