This chapter focuses on the implications of improving generative-AI 'chatbot' technologies and the inevitable unreliability of the attendant AI-text detection technologies. The goal of generative-AI programmers is to design AIs that produce text indistinguishable from typical human-written text: an eventuality that will render AI-text detectors redundant. The authors outline the mathematics underpinning AI-generated and human-written text as the basis of AI-text detection, and show how this leads to inherent inaccuracies and uncertainties in AI-text detection. The chapter then outlines how institutions will have to work with both the growing use of AI and the unreliability of AI-text detection: institutions cannot avoid AI and cannot rely on 'tech' to police it. Students need to be taught how to use AIs ethically, with integrity and insight, and sanctioned when they do not do so. At the same time, institutions need to resource people to investigate students suspected of false authorship, whether by commissioning a human ghost-writer or by using an AI inappropriately.