Large language models have recently gained widespread public attention with the release of ChatGPT. One use of these artificial intelligence (AI) systems today is to power virtual companions that can pose as friends, mentors, therapists, or romantic partners. While they present some potential benefits, these new relationships can also produce significant harms, such as hurting users emotionally, affecting their relationships with others, giving them dangerous advice, or perpetuating biases and problematic dynamics such as sexism or racism. This case study uses the example of harms caused by virtual companions to give an overview of AI law within the European Union. It surveys EU law on AI safety (the AI Act), data privacy (the General Data Protection Regulation), liability (the Product Liability Directive), and consumer protection (the Unfair Commercial Practices Directive). The reader is invited to reflect on concepts such as vulnerability, rationality, and individual freedom.