Purpose: This paper aimed to determine liability for criminal activities committed by AI-enabled machines and to explore defences that could negate their criminal liability. It also analysed the actus reus element to identify which actors are involved in the criminal act.
Materials and Methods: A systematic review of existing research on AI liability in crime was conducted, drawing on 30 articles relevant to the study.
Findings: The study found that, where certain conditions are met, any individual, company, or legal organisation can be held legally liable for illegal activities. As AI technology advances, adequate legal remedies are needed to protect society from the hazards it poses. Existing criminal law offers various approaches to dealing with AI liability, but the liability concerns generated by AI systems extend beyond traditional criminal law. Recognising robots as legal persons has been criticised as an overly complex solution.
Implications to Theory, Practice and Policy: The study emphasises that responsibility for monitoring and managing AI and its operations begins from the moment it is employed or deployed. Criminal law and the criminalisation of behaviour address the question of responsibility only to a limited extent, and the duty to monitor should be viewed as a legal obligation.