Research on the potential use of ChatGPT for anonymizing texts in government organizations is scarce. This study examines the possibilities, risks, and ethical implications of employing ChatGPT to anonymize personal data in text documents within government organizations. It adopts a case study research approach, combining informal conversations, formal interviews, a literature review, document analysis, and experiments. The experiments, conducted on three types of texts, demonstrate ChatGPT's proficiency in anonymizing diverse textual content. Furthermore, the study provides an overview of significant risks and ethical considerations pertinent to ChatGPT's use for text anonymization within government organizations, relating to themes such as privacy, responsibility, transparency, bias, human intervention, and sustainability. In its current form, ChatGPT stores inputs and forwards them to OpenAI and potentially other parties, posing an unacceptable risk when anonymizing texts containing personal data. We discuss several potential solutions to address these risks and ethical issues. This study contributes to the limited scientific literature on the potential value of employing ChatGPT for text anonymization in government settings. It also offers practical insights for civil servants coping with the challenges of personal data anonymization, emphasizing the need for careful consideration of risks and ethical implications when integrating AI technologies.