Kern's (2024, this issue) text serves as a timely contribution that puts recent developments in technology-mediated language teaching and learning into perspective. It also provides a relevant framework to reflect on both teachers' and learners' digital literacy, which can be defined as "a range of abilities of a person, such as searching, evaluating, creating, and sharing digital content through digital technology, and represents the person's ability to apply technology critically [emphasis added]" (Pathiranage & Karunaratne, 2023, p. 2).

Kern's (2024, this issue) reflection comes in the aftermath of the emergency remote teaching imposed by the COVID-19 pandemic, when even the most techno-reluctant additional language (Lx) teachers had to explore the affordances of digital tools and improvise a bricolage of online solutions in order to provide continuing learning opportunities. However, is this episode of forced professional growth just a parenthesis that some teachers are eager to close so that they can return to what they consider tried-and-tested teaching, or has it transformed language pedagogy for good? Pathiranage and Karunaratne's (2023) literature review of teachers' agency in technology for education reveals that the COVID crisis acted as an "eye-opener" for institutional stakeholders and also made teachers acknowledge "their limited ability to effectively manage technology integration (. . .) beyond the commonly used basic tools" (p. 9).

Following the COVID crisis, new digital tools, especially generative artificial intelligence (AI) applications based on large language models and machine translation (MT), are becoming commonplace among students thanks to their accessibility, immediacy, and usability. At first sight, the conjunction of these three characteristics empowers language learners in an unprecedented manner, since using these tools seems to require nothing more than straightforward copy-and-paste operations that effortlessly speed up writing tasks. What teacher has not recently annotated some segments of a student assignment with this question: "Is that you writing or ChatGPT?" In most cases, generative AI applications produce texts that are seemingly so balanced, polished, and devoid of any enunciative anchoring that the machine is easily identifiable. Yet, what happens when the bulk of the text is produced by ChatGPT but has been attentively reworked and edited by students, thus blurring the line between human performance and machine output? For the time being, tools like ChatGPT or Google Translate are generally used by learners without guidance. Indeed, according to the French students surveyed by Bourdais and Guichon (2020), only a tiny minority of language teachers advise or even train students in the use of MT, while a considerable proportion forbid its use. This triggers a cat-and-mouse game