Extra-contractual Liability in Robotics

In 2017, the European Parliament adopted a resolution with recommendations to the European Commission (the "Parliament's Resolution"), inviting the Commission to present a proposal for a directive on civil law rules for robots and thereby opening the door to a specific "Law of the Horse Robot." In its second paragraph, the resolution boldly states that, because humankind stands on the threshold of an era when ever more sophisticated robots, bots, and other manifestations of artificial intelligence ("AI") seem poised to unleash a new industrial revolution which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider its legal and ethical implications and effects without stifling innovation. In particular, the resolution asserts that existing liability laws are insufficient in certain instances, such as where a robot can learn and make autonomous judgments, or where machines directly conclude and implement contracts.
Defining Artificial Intelligence and Robot. Artificial Intelligence (AI) can be broadly split into three types: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). Today's deployed AI falls under the category of ANI, which can be defined as the ability of machines to resemble human capabilities in narrow domains. AGI, in contrast, is the holy grail of current AI research: an AGI would have human capabilities across a number of domains and would therefore be a true human-equivalent AI. ASI is a different category altogether: an intelligence that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. This is the level of AI that worries people. Once AGI is achieved, and especially with ASI, the question of whether the AI has reached consciousness, or self-awareness, will arise, prompting both ancient and modern debates about what it means to be "human," to have free choice, and to be entitled to full constitutional personhood. Artificial intelligence will eventually transform into artificial life. However, these specific difficulties, as well as ASI's unpredictable impact on society, are outside the purview of this study.
In some ways, robots are the physical embodiment of AI. Being purely computational, AI can generate information, but it cannot physically affect its environment; it is a phantom without a shell. For the purposes of legal analysis, the sense-think-act definition of robots is the typical one. The Parliament's Resolution, too, recommends establishing a common definition and classification of "smart robots" based on the sense-think-act characteristics of (i) the capacity to acquire autonomy through sensors and/or by exchanging data with the environment (inter-connectivity) and the analysis of those data; (ii) the capacity to learn through experience and interaction; (iii) the form of the robot's physical support; and (iv) the capacity to adapt its behavior and actions to the environment. The Parliament's definition also captures the essence of what "think" actually means: the ability to learn, to the point that a robot's behavior cannot be predicted in advance because the outcome is determined by the robot's "thinking" process. A robot following the rule "turn left when forward motion is blocked" may detect the impediment and act on it, but it does not consider what it has detected; its response has been programmed in advance. Thinking requires the robot to analyze what it has sensed in order to determine how to respond.
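The distinction can be made concrete in code. The sketch below is purely illustrative and assumes no real robotics API; the names (rule_based_decide, LearningRobot, turn_preference) are hypothetical. The rule-based function always returns the pre-programmed response, whereas the learning robot's response depends on a parameter adjusted through experience, so its behavior cannot be fully predicted from the source code alone.

```python
import random

def rule_based_decide(sensed: dict) -> str:
    """Hard-coded rule: the outcome is fully determined in advance."""
    if sensed["forward_blocked"]:
        return "turn_left"
    return "move_forward"

class LearningRobot:
    """The action depends on a parameter adjusted through experience,
    so behavior cannot be predicted from the code alone."""

    def __init__(self):
        self.turn_preference = 0.5  # learned parameter, updated over time

    def decide(self, sensed: dict) -> str:
        if not sensed["forward_blocked"]:
            return "move_forward"
        # analyze what was sensed instead of applying a fixed response
        return "turn_left" if random.random() < self.turn_preference else "turn_right"

    def learn(self, action: str, succeeded: bool) -> None:
        # adapt behavior to the environment (criterion iv above)
        delta = 0.1 if succeeded else -0.1
        if action == "turn_left":
            self.turn_preference = min(1.0, max(0.0, self.turn_preference + delta))
        elif action == "turn_right":
            self.turn_preference = min(1.0, max(0.0, self.turn_preference - delta))

if __name__ == "__main__":
    sensed = {"forward_blocked": True}
    print(rule_based_decide(sensed))   # always "turn_left"
    robot = LearningRobot()
    action = robot.decide(sensed)      # depends on learned state
    robot.learn(action, succeeded=True)
    print(action, robot.turn_preference)
```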
Some features of robots, such as embodiment, emergence, and social valence, are exceptional for the current legal system. Embodiment is the consequence of a physical body being able to act on information and directly affect the world at large. Embodiment represents a challenge for the law because the intangible (i.e., information) generally does not give rise to actions under product liability. This is illustrated by the U.S. case of Winter v. G.P. Putnam's Sons, in which the court held that two mushroom pickers who were poisoned as a result of relying on wrong information in an encyclopedia of mushrooms could not claim damages from the publisher based on product liability. Product liability requires a defect in a tangible product; the physical product, i.e., the book, was not defective, but its intangible content was. Embodiment muddies the border between informing and acting. The book in Winter merely informed; it did not act. It was the humans who picked and prepared the mushrooms. Embodied information can, however, in some cases give rise to a product liability claim. As the court in Winter notes, aeronautical charts have been held to be products for the purposes of product liability law. Ultimately, however, the court differentiates between aeronautical charts and books such as the encyclopedia by drawing an analogy between charts and a compass, explaining that both are tools that may be used to guide an individual where knowledge of natural features is required. A book such as the mushroom encyclopedia, in contrast, is rather like "a book on how to use a compass or an aeronautical chart" and as such represents "pure thought and expression," i.e., the intangible. We can imagine a mushroom-picking robot with ANI capabilities: it identifies and picks edible mushrooms. Alas, it mistakenly picks a poisonous mushroom for the family it serves, who, after partaking of the tasty but ill-fated dinner, suffer severe discomfort. If a court likens an aeronautical chart to a tool for product liability purposes, it is not a stretch to believe that the same might occur for a robot that faultily picks a poisonous mushroom, at least as regards the embodiment of information.
Emergence is often used nearly interchangeably with autonomy; here, however, emergence denotes the intelligent, higher-level behavior displayed by robots. The Parliament's Resolution defines a robot's autonomy as "the ability to make decisions and implement them in the outside world, independently of external control or influence; whereas this autonomy is of a purely technical nature and its degree depends on how sophisticated a robot's interaction with its environment has been designed to be." The extraordinary bridges built by army ant colonies are perfect examples of emergent, higher-level behavior. An individual ant will (i) slow down when it reaches a gap it cannot cross and (ii) freeze in place as soon as a faster-moving ant from behind walks over it. The individual ant's behavior is very simple, but the resulting bridges of interlinked ant bodies are extraordinarily complex. Swarms of robots have been programmed to exhibit similar behavior. The army ant example reveals the difference between autonomy and emergence: the individual ant acts autonomously, while the colony displays emergent, higher-level qualities. The question is then: when does a robot pass the threshold from tool to autonomous actor? We distinguish between tools used by humans and tools (machines) deployed by humans. Deployed tools can operate in circumstances unforeseen by the machine's creators precisely because they are autonomous. As will be discussed below, autonomy makes it difficult to determine to whom liability shall fall for harm caused by an autonomously acting robot. Autonomy also leads us back to the sense-think-act paradigm mentioned as part of the definition of "smart robot." Consequently, we believe that the autonomy of robots is more important from a legal perspective than emergence is. Emergence still plays a role, but in our opinion rather with regard to AI and robots being generative technologies, with multiple layers of hardware, software, and protocols embedded within a social layer.
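That simple local rules can yield complex collective behavior can be illustrated with a toy simulation. The sketch below is our own simplification, not drawn from any swarm-robotics library; the path, cell, and tick mechanics are hypothetical. Each agent follows only the two local rules described above, yet a gap-spanning bridge emerges and an agent eventually crosses on the backs of its frozen predecessors, without any agent "knowing" about bridges.

```python
def simulate(path_length: int, gap_start: int, gap_width: int, n_ants: int) -> list:
    """Ants walk along a 1D path; cells in the gap have no ground."""
    gap = set(range(gap_start, gap_start + gap_width))   # cells with no ground
    bridged = set()                                       # cells filled by frozen ants
    positions = list(range(0, -n_ants, -1))               # ants queued before the path
    events = []

    for tick in range(200):
        for i, pos in enumerate(positions):
            if pos is None:                               # already frozen into the bridge
                continue
            ahead = pos + 1
            if ahead in gap and ahead not in bridged:
                # local rule: stop at the unbridged edge and freeze in place
                # as followers walk over you, extending the bridge
                bridged.add(ahead)
                positions[i] = None
                events.append(f"tick {tick}: ant {i} freezes at cell {ahead}")
            else:
                positions[i] = ahead                      # walk on ground or over frozen ants
                if ahead >= path_length:
                    events.append(f"tick {tick}: ant {i} crossed the gap")
                    return events
    return events

if __name__ == "__main__":
    for line in simulate(path_length=12, gap_start=5, gap_width=3, n_ants=6):
        print(line)
```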
Social valence refers to the fact that robots elicit emotional responses from people in a social context; put differently, we anthropomorphize robots. Anthropomorphizing robots indicates that people are willing to substitute robots for animals or human beings in certain contexts and for certain purposes. The substitution of a robot for a living thing may be innocent, emotional, almost instinctual. The patient who blames a surgical robot for a botched procedure projects partial humanity, and hence responsibility, onto the technology. The soldier who mourns the loss of his bomb-disarming robot projects onto the robot human qualities of comradeship, courage, and commitment to fellow soldiers. When a companion robot that operates in our home sends personal data about us to a corporation, we feel betrayed, whereas we would never think that a camera and a microphone could betray us. The consequence is that a layperson might "feel" that the law should hold the robot liable. And, as the rights of minorities show, the law frequently develops in response to societal demands.
The extra-contractual liability of the robot
The Parliament's Resolution foresees that a robot's autonomy will cause difficulty in attributing liability for the robot's actions: "whereas the more autonomous robots are, the less they can be considered to be simple tools in the hands of other actors (such as the manufacturer, the operator, the owner, the user, etc.); whereas this, in turn, questions whether the ordinary rules on liability are sufficient or whether it calls for new principles and rules to provide clarity on the legal liability of various actors concerning responsibility for the acts and omissions of robots where the cause cannot be traced back to a specific human actor and whether the acts or omissions of robots which have caused harm could have been avoided."
Solving this problem is seen as critical by the Parliament, which suggests investigating the following solutions: a compulsory insurance scheme (such as currently exists for cars); a compensation fund for damages not covered by any insurance; limited liability for the actors involved in the "making" of the robot as a benefit for contributing to a compensation fund; a robot registry with all relevant details on liability (similar to a companies registry); and even a specific legal status or personhood for certain categories of robots, which would allow the robot to be held directly liable. In order to understand whether the Parliament's suggestions are necessary, we will look at how the law handles extra-contractual liability where one autonomous being acts on behalf of or under the direction of another, namely with regard to agents, minors and others under supervision, slaves, and animals.

First, some general principles of tort law. As tort law differs considerably from jurisdiction to jurisdiction, we will use the Principles of European Tort Law (PETL) as a basis against which to assess the impact robots may have on tort law, with occasional "peeks" at German law in order to anchor the principles to a concrete jurisdiction or where the PETL do not address an issue. The PETL were drafted by the European Group on Tort Law (EGTL), a network of academics that used hypotheticals as well as country and comparative reports to identify principles common to both common law and civil law European jurisdictions. Liability requires the elements of damage to a legally protected interest; causation of the damage by an activity, which can be either an act or an omission; and a recognized base of liability. Causation and the base of liability are the relevant elements for the purposes of this section; damage will generally not be an issue. Causation is given where the activity is a conditio sine qua non for the damage, i.e., where the damage would not have occurred in the absence of the activity. As this element is too encompassing, it is limited by requiring a base of liability. The bases are fault; an abnormally dangerous activity; and an auxiliary acting on behalf of the person deemed liable or, more generally, being in charge of another. "Fault" is the intentional or negligent breach of the required standard of conduct; this can also encompass product liability resulting from any deviation from standards that can reasonably be expected. With regard to product liability, the burden of proof is reversed: the manufacturer must prove that the required standard of conduct was met in order not to be liable. An abnormally dangerous activity gives rise to strict liability; an activity is deemed abnormally dangerous where there is a foreseeable and highly significant risk of damage even when all due care is exercised, and where the activity is not a common one.
To whom liability shall fall. Public policy touches upon a vastly wider realm of issues than just law. Regulation can guide societal developments, enabling and encouraging some while disabling others. This broader approach also applies to liability policy. From a societal perspective, liability generally has two goals: a corrective function of remedying harms and an incentive function of preventing harm. The corrective function can fail by misdirecting liability, i.e., attributing it to the wrong actor. Harm may be over-corrected insofar as strict liability is pushed onto the manufacturer of the robot regardless of whether the harm is one that could be expected and whether the manufacturer met their duty of care. It may also be under-corrected in that harm may end up uncompensated, for example because it is impossible to determine specifically what led to the harm or under whose charge the robot was acting. On the incentive side, an excessive emphasis on preventing harm beyond the reasonable could chill innovation.

Our investigation above of general tort principles revealed several actors to whom liability could fall: the robot itself, the manufacturer, or the owner/supervisor. In addition, liability could also fall to no one, in which case the damage would be fully borne by the harmed party. Which of these possibilities (and there are many more) is the most suitable must be assessed against how well it achieves liability's two goals. Let us start with the last-mentioned possibility of not holding anyone liable. This approach would fail both to correct harms and to incentivize their reduction; as such, it should be rejected. Liability falling to the robot itself would require that it be granted legal personhood. There is no hindrance to such a step. However, in order to be able to correct any harm, a robot would need a source of capital with which to pay damages; a personal fund tied to the robot could be one solution. Depending on its purpose, the robot could also serve the harmed party instead of paying monetary damages. Holding the robot itself liable could also meet the incentive function, provided the robot learns from the situations in which it is held liable and adjusts its behavior accordingly. Liability may also be attributed to the owner/supervisor of the robot. This approach would mirror more closely how the law currently responds to harms caused by autonomous entities, such as animals and agents. However, a user will rarely have the insight into the robot's workings that the manufacturer has, which is an argument against this approach. Moreover, while the goal of remedying harms would be met, the incentive function would only be met where the market leads to such an outcome, either through competition among manufacturers or because robot owners would have some form of recourse against the manufacturers.

Turning to holding the manufacturer liable, we need to differentiate between the manufacturer of the robot and, where distinct, the manufacturer of the AI as a component of the robot. Recognizing AI as a component is key to this approach. It is the robot manufacturer that combines and integrates the disparate parts into something that is more than their sum. It is also the robot manufacturer that is best situated to fully understand the complexities of the robot. The AI manufacturer will not necessarily control how the AI will be integrated into the robot.
Should the robot manufacturer be able to determine that the AI component was the cause of the harm, it could still seek contractual recourse. In the EU, current product liability laws would make the manufacturers of the robot and of the AI jointly and severally liable. Despite the complexities of robotic systems and the difficulties other parties face in determining the cause of harm, it seems that existing liability patterns (e.g., strict liability or product liability) might be able to achieve the goals of harm correction and reduction that traditionally define the teleological horizon of civil liability regimes.

References:
1. Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513 (2015), https://digitalcommons.law.uw.edu/cgi/viewcontent.cgi?article=1022&context=faculty-articles
2. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
3. Winter v. G.P. Putnam's Sons, No. 89-16308, United States Court of Appeals, Ninth Circuit (1991), https://h2o.law.harvard.edu/cases/5449
4. European Group on Tort Law, Principles of European Tort Law: Text and Commentary (2005), https://drive.google.com/file/d/11gaEReh1kg4SA04kWIBWVa3iFbR1B1Jd/view?usp=sharing
