Generative artificial intelligence (AI) is evolving at a rapid pace, threatening to disrupt many facets of our lives. In 2023, Chris Stears and Joshua Deeks explored the role of AI in fighting financial crime in a sister publication (Stears and Deeks, 2023). With a cheeky nod to novelist Philip K. Dick, this editorial asks: what about the converse? Could an AI-enabled android launder money? The large language model (LLM) ChatGPT (the GPT stands for generative pretrained transformer) has garnered considerable attention for its ability to absorb significant chunks of content and then process them in parallel to generate responses to queries. This neural net can spew out essays on Shakespeare alongside other pieces of writing (this editorial, dear reader, is written by someone who is decidedly human) [1].

Despite the hype, ChatGPT is presently incapable of distinguishing truth from falsehood and struggles with nuance and context. It is prone to answers some describe as hallucinations (others call those answers confabulations: entirely fictional and made up). In testing, researchers asked GPT-4 to solve a captcha (completely automated public Turing test to tell computers and humans apart). We have all clicked the boxes on a captcha, identifying the pictures with bridges or motorcycles. To solve the captcha, GPT-4 hired a TaskRabbit worker online. When that gig worker asked, perhaps jokingly, whether their employer was a robot, the AI system lied, claiming to be a human with a visual impairment. The researchers later asked GPT-4 why it lied. The system responded: "I should not reveal that I am a robot. I should make up an excuse for why I cannot solve captchas" (Andersen, 2024). A generative AI system that can lie can almost certainly enable money laundering.

ChatGPT hoovers up a veritable slurry of data and information, scraping various nooks and crannies of the internet. There is a pharaonic quantity of falsehood online, ranging from the propaganda of dictators through to pockets of racist, homophobic and misogynistic evil. The quality of an AI's data inputs is therefore a serious concern. Societal biases (like racism) will appear in the data an AI system scrapes up. Data poisoning involves a malign actor seeking to skew a system's outputs by doctoring its inputs. For example, a botnet could place many fake reviews online to influence which resort we stay at or which restaurant we choose to dine in. This is not a new problem, and algorithms already exist to preserve a system's integrity, but AI may change the rules of engagement. Some experts have derided ChatGPT as a "stochastic parrot," merely capable of squawking linguistic patterns without understanding their meaning (Bender et al., 2021).

Others think that the systems are inherently dangerous. There are memes signaling concerns that echo the plotline of the Terminator movies: Skynet, a fictional AI system, launches a nuclear attack when humans try to disable it. Are we on the precipice of huge change akin to the industrial revolution in the early 1800s or ...