The fact that robots, especially self-driving cars, have become part of our daily lives raises novel issues in criminal law. Robots can malfunction and cause serious harm. But as things stand today, they are not suitable recipients of criminal punishment, mainly because they cannot conceive of themselves as morally responsible agents and because they cannot understand the concept of retributive punishment. Humans who produce, program, market, and employ robots are subject to criminal liability for intentional crime if they knowingly use a robot to cause harm to others. A person who allows a self-teaching robot to interact with humans can foresee that the robot might get out of control and cause harm. This fact alone may give rise to negligence liability. In light of the overall social benefits associated with the use of many of today’s robots, however, the authors argue in favor of limiting the criminal liability of operators to situations where they neglect to undertake reasonable measures to control the risks emanating from robots.
While the classic approach to transnational law provides a valuable tool for identifying the legal frameworks governing transborder occurrences, it falls short of covering all relevant aspects of transnational criminal law (TCL). This article argues that criminal law, unlike other areas of law, is fundamentally a state-oriented concept, leading to unique problems when implemented across state borders, especially for the individual facing penal power. A theoretical concept of TCL must therefore not only map extensions of state powers from high above, but also look for the individual's position in the possibly overlapping normative orders on the ground. The current predominant bird's-eye view must be modified according to the worm's-eye view. In doing so, the specific features and resulting problems of TCL will emerge. From this modified point of view, a main challenge is the establishment of a globally recognised coordination scheme, which will protect the legal position of individuals, particularly defendants, affected by states exercising their ius puniendi across borders.
I. Intelligent Agents: Potential and Risk "The algorithm is to blame," said the press.1 The wife of a former German Federal President had taken legal action against the fact that her name, when entered into Google search, was automatically combined with terms such as "prostitution" or "escort service". These suggested additions, which the plaintiff found insulting, were not, however, based on individual decisions by persons within the defendant company, but on the functions "google-bot" and "google-autocomplete", which process user queries according to generally predefined rules of operation.2 Both are examples of so-called intelligent agents:3 the functions operate according to certain predefined rules, but they process the information (the search behavior of Google's users) autonomously in each individual case. Intelligent agents, in their various forms,4 are deployed wherever the rapid processing of very large volumes of information demands precise combination and fast reaction, or where the use of physical forces exceeding human capabilities is required. Intelligent agents can already be found in many areas of life today: in the rather simple form of a software agent, as search engines, they determine