Criminal and Civil Liability in Artificial Intelligence (Author: Gamad Hasanov)

Introduction

“Computers can only issue mandatory instructions – they are not programmed to exercise discretion.” [1]

This statement was made by a New York court more than forty years ago, and it is no longer strictly true. What was once the stuff of science fiction is now science fact: computers capable of solving problems by replicating human thought processes [2]. As a result of the use of artificial intelligence (hereinafter AI) [3], programs that provide tax advice [4], diagnose medical conditions [5], and configure computers [6] are now available. With continued research, the scope of AI programs will broaden [7], and as AI penetrates markets for critical services and processes, the risk of disaster will likely increase [8]. Conventional software programs have already resulted in commercial jet near-misses, nuclear power plant shutdowns, and a missile alert triggered by a false indication of a world war [9]; additional issues are easily foreseen if a program mimicking human thought is used. Furthermore, as systems become more complex, failure becomes unavoidable [10]. Who is responsible if such a failure occurs? The answer is ambiguous.

The impact of an AI program error has not yet been considered by the courts, and no case law exists to provide guidance to the software developer, vendor, or user regarding their potential liability. Furthermore, there are no laws addressing this issue. This comment proposes a liability standard that eliminates this uncertainty.

This comment first discusses the differences between conventional and AI software, including the subset of AI known as expert systems, and then presents various liability theories and how they relate to AI. Finally, a proposal for applying a strict liability standard to AI will be presented.

2. Criminal liability in artificial intelligence

The following references are primarily to US law; however, many other jurisdictions have similar legislation in the relevant areas.

Gabriel Hallevy discusses whether and how artificially intelligent entities could be held criminally liable in [11]. Criminal laws typically require both an actus reus (an action) and a mens rea (a mental intent), and Hallevy categorizes laws as follows:

1. Those in which the actus reus is an action, and those in which the actus reus is a failure to act;

2. Those in which the mens rea requires knowledge or being informed; those in which the mens rea only requires negligence (“a reasonable person would have known”); and strict liability offences in which no mens rea is required.
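
Purely as an illustration (not part of Hallevy's analysis), this classification can be captured in a small data model; the enum values and the example offence below are hypothetical placeholders, not legal definitions.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ActusReus(Enum):
    ACTION = auto()            # offence committed by performing an act
    OMISSION = auto()          # offence committed by failing to act when under a duty

class MensRea(Enum):
    KNOWLEDGE = auto()         # offender must know / be informed
    NEGLIGENCE = auto()        # "a reasonable person would have known"
    STRICT_LIABILITY = auto()  # no mental element required

@dataclass
class Offence:
    name: str
    actus_reus: ActusReus
    mens_rea: MensRea

# Hypothetical example: speeding is typically a strict liability offence.
speeding = Offence("speeding", ActusReus.ACTION, MensRea.STRICT_LIABILITY)
print(speeding)
```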

Hallevy then proposes three legal models for considering AI-related offences:

  1. Perpetrator through another. If a mentally deficient person, a child, or an animal commits an offence, the perpetrator is considered an innocent agent because they lack the mental capacity to form a mens rea (this is true even for strict liability offences). However, if the innocent agent was directed to attack someone by another person (for example, if the owner of a dog directed it to attack someone), the instructor is held criminally liable (see [12] for US case law).

According to this model, AI programs could be considered an innocent agent, with the software programmer or the user being considered the perpetrator-via-another.

  2. Natural-probable-consequence. In this model, a portion of an AI program designed for good is activated inappropriately and commits a criminal act. Hallevy provides an example (quoted from [13]) of a Japanese employee of a motorcycle factory being killed by an artificially intelligent robot working nearby. The robot incorrectly identified the employee as a threat to its mission and calculated that pushing him into an adjacent operating machine would be the most efficient way to eliminate this threat. The robot smashed the surprised worker into the machine with its very powerful hydraulic arm, killing him instantly, and then resumed its duties.

The usual legal application of “natural or probable consequence” liability is to prosecute co-conspirators in a crime. If no conspiracy can be established, it is still possible (under US law) to hold an accomplice legally liable if the perpetrator’s criminal acts were a natural or probable consequence (a phrase coined in [14]) of a scheme that the accomplice encouraged or aided [15], as long as the accomplice was aware that some criminal scheme was in progress.

As a result, users or (more likely) programmers may be held legally liable if they were aware that a criminal offence was a natural, probable result of their programs or of their use of an application. However, the application of this principle must distinguish between AI programs that ‘know’ that a criminal scheme is underway (i.e., they were programmed to perform a criminal scheme) and those that do not (they were programmed for another purpose). It is possible that crimes requiring knowledge as the mens rea cannot be prosecuted for the latter group of programs (but those requiring a ‘reasonable person’ mens rea, or strict liability offences, can).

3. Direct liability. This model assigns an AI system both actus reus and mens rea.

Attributing an actus reus to an AI system is a simple process. The actus reus of an offence occurs when a system takes an action that results in a criminal act or fails to take an action when there is a duty to act.

Assigning a mens rea is much more difficult, and it is here that the three levels of mens rea come into play. It may be possible to hold AI programs criminally liable for strict liability offences, where no intent to commit an offence is required. Taking self-driving cars as an example, speeding is a strict liability offence; thus, according to Hallevy, if a self-driving car is found to be exceeding the speed limit for the road it is on, the law may well assign criminal liability to the AI program that was driving the car at the time.
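
To make the speeding example concrete, here is a minimal, hypothetical sketch of how an autonomous-driving stack might log an event relevant to a strict liability offence; the function name, thresholds, and log fields are invented for illustration and do not come from the sources cited.

```python
from datetime import datetime, timezone
from typing import Optional

def check_speed_compliance(current_speed_kmh: float,
                           posted_limit_kmh: float,
                           vehicle_id: str) -> Optional[dict]:
    """Return an audit record if the vehicle is exceeding the posted limit.

    For a strict liability offence such as speeding, the objective fact of
    exceeding the limit is what matters; no intent needs to be shown, so the
    record only captures the measurable facts of the event.
    """
    if current_speed_kmh <= posted_limit_kmh:
        return None
    return {
        "vehicle_id": vehicle_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "speed_kmh": current_speed_kmh,
        "limit_kmh": posted_limit_kmh,
        "excess_kmh": current_speed_kmh - posted_limit_kmh,
    }

# Hypothetical usage: a planner might run this check continuously while driving.
event = check_speed_compliance(62.0, 50.0, "AV-0001")
if event:
    print("Strict-liability-relevant event recorded:", event)
```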

This possibility raises a number of other issues that Hallevy addresses, such as defences (could a malfunctioning program claim a defence similar to the human defence of insanity? Or, if it is infected with an electronic virus, could it invoke defences akin to coercion or intoxication?) and punishment (who or what would be punished for an offence for which an AI system was directly liable?).

3. Civil liability in AI

  • Negligence

When software fails or a party is injured as a result of using software, the resulting legal proceedings usually allege negligence rather than criminal liability [16]. Gerstner [17] discusses the three elements that must typically be demonstrated in order for a negligence claim to succeed:

1. The defendant owed the plaintiff a duty of care; 2. The defendant breached that duty; and 3. The breach caused the plaintiff harm.

Regarding point 1, Gerstner suggests that there is little doubt that a software vendor owes a duty of care to the customer, but determining what standard of care is owed is difficult.

On point 2, Gerstner suggests several ways in which an AI system could violate the duty of care: errors in the program’s function that could have been detected by the developer; an incorrect or inadequate knowledge base; incorrect or inadequate documentation or warnings; failing to keep the knowledge up to date; the user supplying faulty input; the user relying unduly on the output; or using the program for an incorrect purpose.

In terms of point 3, the question of whether an AI system can be considered to have caused an injury is also debatable. Perhaps the most important question is whether the AI system recommends action in a given situation (as many expert systems do) or takes action (as self-driving and safety-equipped cars do). Causation is difficult to prove in the former case because there must be at least one other agent involved; in the latter case, it is much easier.
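
As an illustration of the three-element structure Gerstner describes (duty, breach, causation), the following hypothetical sketch encodes the test as a simple check; the field names are invented and this is not a statement of any jurisdiction's law.

```python
from dataclasses import dataclass

@dataclass
class NegligenceClaim:
    duty_of_care_owed: bool    # element 1: defendant owed plaintiff a duty of care
    duty_breached: bool        # element 2: defendant breached that duty
    breach_caused_harm: bool   # element 3: the breach caused the plaintiff harm

    def succeeds(self) -> bool:
        # All three elements must be demonstrated for the claim to succeed.
        return self.duty_of_care_owed and self.duty_breached and self.breach_caused_harm

# Hypothetical example: an expert system only *recommended* an action, so
# causation may be hard to establish (see the discussion of point 3 above).
claim = NegligenceClaim(duty_of_care_owed=True, duty_breached=True, breach_caused_harm=False)
print(claim.succeeds())  # False
```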

Gerstner also discusses a legal exception for “strict liability negligence” in the United States. This applies to products that are defective or unreasonably dangerous and that cause injury (as opposed to mere economic loss) when used in a normal, intended, or reasonably foreseeable manner. She debates whether software is a ‘product’ or merely a ‘service,’ citing a case in which electricity was held to be a product [18], and thus leans toward defining software as a product rather than a service. Assuming that software is a product, it is the responsibility of AI system developers to ensure that their systems are free of design flaws, manufacturing flaws, and inadequate warnings or instructions.

Cole [19] goes into greater detail about whether software is a product or a service. His conclusion is that treating AI systems as products is “partially applicable at best,” and he prefers to think of AI as a service rather than a product; however, he acknowledges that the law in this area is ambiguous.

Cole cites some case law on the “duty of care” that AI systems must uphold:

  1. In [20], a school district filed a negligence claim against a statistical bureau that (allegedly) provided inaccurate calculations of the value of a burned-down school, resulting in an underinsured loss for the school district. The duty under consideration was the obligation to provide information with reasonable care. The court considered factors such as: the existence, if any, of a guarantee of correctness; the defendant’s knowledge that the plaintiff would rely on the information; the limitation of potential liability to a small group; the lack of proof of any correction once discovered; the unfavourability of requiring an innocent party to bear the burden of another’s professional mistakes; and the promotion of cautionary techniques among informational (tool) providers.
  2. Cole discusses the duty to draw reasonable conclusions from unreasonable inputs in light of [21]. Following [22], he suggests that AI developers have an affirmative duty to provide relatively inexpensive, harmless, and simple input error-checking techniques (a sketch of such a check appears after this list), but notes that these rules may not apply where the AI program is performing a function in which errors in input may be directly life-threatening (e.g., administering medicine to a patient); in such cases, he suggests applying the rules relating to “ultra-hazardous activities and instrumentalities.”
  3. According to Cole, AI systems must be aware of their limitations, and this information must be communicated to the buyer. It is well established that vendors have a duty to inform buyers of any known flaws; however, how can unknown weaknesses or flaws be identified and communicated?
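
As a purely illustrative sketch of the “inexpensive, harmless, and simple input error-checking” that Cole describes, consider the check below; the plausibility range, field name, and error class are hypothetical and are not taken from any cited source or real clinical guideline.

```python
class InputValidationError(ValueError):
    """Raised when user-supplied input fails a basic sanity check."""

# Hypothetical plausibility limits for a single input field.
PATIENT_WEIGHT_KG_RANGE = (0.5, 400.0)

def validate_patient_weight(weight_kg: float) -> float:
    """Reject clearly unreasonable inputs before any advice is computed.

    A system whose output may be directly life-threatening would need far
    stronger safeguards; this only shows the basic idea of checking inputs
    rather than drawing conclusions from unreasonable data.
    """
    low, high = PATIENT_WEIGHT_KG_RANGE
    if not (low <= weight_kg <= high):
        raise InputValidationError(
            f"Patient weight {weight_kg} kg is outside the plausible range "
            f"{low}-{high} kg; please re-enter the value."
        )
    return weight_kg

# Hypothetical usage
validate_patient_weight(72.0)        # accepted
# validate_patient_weight(720.0)     # would raise InputValidationError
```
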
  • Breach of Warranty

If an AI system is a product, it must be sold with a warranty; even if there is no express warranty provided by the vendor (or purchased by the user), there is an implied warranty that it is “satisfactory as described & fit for a reasonable time,” to use a phrase from the UK Sale of Goods Act 1979. Some jurisdictions allow contract clauses to void implied warranties; however, when an AI system is purchased built into other goods (such as a car), it appears unlikely that any such contractual exclusions (e.g., between the manufacturer of the car and the supplier of the AI software) could successfully be passed on to the purchaser of the car.

4. Conclusion

It has been established that the legal liability of artificial intelligence systems is determined by at least three factors:

1. Whether artificial intelligence is a product or a service. This is a grey area in the law, with various commentators offering opposing viewpoints.

2. What mens rea is required if a criminal offence is being considered? It appears unlikely that AI programs will violate laws that require knowledge of a criminal act being committed; however, it is very possible that they will violate laws where ‘a reasonable man would have known’ that a particular course of action could result in an offence, and it is almost certain that they would violate strict liability laws.

3. Whether or not the limitations of AI systems are communicated to the buyer. Because AI systems have both general and specific limitations, legal cases concerning such issues may be based on the specific wording of any warnings about such limitations.

There is also the issue of who should be held accountable. It will be determined by which of Hallevy’s three models (perpetrator-by-another, natural-probable-consequence, or direct liability) applies:

• In a perpetrator-by-another case, the person who instructs the AI system – either the user or the programmer – is likely to be held accountable.

• In a natural-or-probable-consequence offence, liability may fall on anyone who could have predicted the product being used in the manner it was: the programmer, the vendor (of a product), or the service provider [23].

  Unless the instructions that came with the product/service spell out the limitations of the system and the possible consequences of misuse in unusual detail, the user is less likely to be blamed.

• AI programs may also be held liable for strict liability offences, in which case the programmer will almost certainly be found to be at fault.

 However, in all cases where the programmer is found to be liable, there may be further disagreements as to whether the fault lies with the programmer, the program designer, the expert who provided the knowledge, or the manager who appointed the incompetent expert, program designer, or programmer.

References

1. Pompeii Estates, Inc. v. Consolidated Edison Co., 397 N.Y.S.2d 577, 580 (N.Y. Civ. Ct. 1977).
2. Cariad Hayes, Artificial Intelligence: The Future’s Getting Closer, AM. LAW., Nov. 1988, at 115. Hayes points out that certain “software programs . . . in their complexity . . . imitate certain processes of the human brain.” Id.
3. Artificial intelligence is a general term used to describe that aspect of computer science “concerned with understanding the nature of intelligent action and constructing computer systems capable of such action.” Allen Newell, Artificial Intelligence, in 2 MCGRAW-HILL ENCYCLOPEDIA OF SCIENCE AND TECHNOLOGY 120 (Sybil P. Parker ed., 7th ed. 1992). “A machine has artificial intelligence when there is no discernible difference between the conversation generated by the machine and that of an intelligent person.” ALAN FREEDMAN, THE COMPUTER GLOSSARY: THE COMPLETE ILLUSTRATED DESK REFERENCE 12 (5th ed. 1991) (quoting Alan Turing).
4. “AskDan,” a program from Legal Knowledge Systems Inc., does tax calculations and identifies tax loopholes. Lance B. Eliot, Mass Market Applications: They’re Here, AI EXPERT, Dec. 1989, at 9.
5. MYCIN, developed at Stanford University, is used by physicians to diagnose bacterial blood infections. Yi-Tzuu Chien & Jay Liebowitz, Artificial Intelligence, in 2 ENCYCLOPEDIA OF PHYSICAL SCIENCE AND TECHNOLOGY 1, 14 (Robert A. Meyers ed., 1987). FamilyCare Software from Lundin Laboratories, Inc. gives pediatric advice. Eliot, supra note 4, at 9.
6. XCON was developed by Carnegie-Mellon University for use in configuring Digital Equipment Corporation’s VAX computers. Chien & Liebowitz, supra note 5, at 15.
7. Laurence H. Reece III, Defective Expert Systems Raise Personal Injury Liability Issues, NAT’L L.J., Oct. 12, 1987, at 24.
8. It has been reported that a patient died from excess radiation when a computer-controlled radiation-therapy machine malfunctioned. In another incident, payment was stopped on checks which, due to a software error, paid bondholders $4 million in excess interest. In another instance, twenty sailors were killed due to a problem with a computer-controlled air-defence missile system that malfunctioned as a result of an error. Bob Davis, Costly Bugs, WALL ST. J., Jan. 28, 1987, at A1; L. Nancy Birnbaum, Strict Products Liability and Computer Software, 8 COMPUTER/L.J. 135, 144 n.64 (1988). See also Bev Littlewood & Lorenzo Strigini, The Risks of Software, SCI. AM., Nov. 1992, at 62, 62-63.
9. Michael C. Gemignani, Product Liability and Software, 8 RUTGERS J. COMPUTERS, TECH. & L. 173 (1981).
10. “No complex computer program has ever been marketed that did not have some defect, somewhere.” Hayes, supra note 2, at 115 (quoting Robert Cesari). But see Littlewood & Strigini, supra note 8, at 62 (stating that, in theory, design faults in a program could be detected and removed).
11. Hallevy G.: The Criminal Liability of Artificial Intelligence Entities. http://ssrn.com/abstract=1564096 (15 February 2010).
12. Morrisey v. State, 620 A.2d 207 (Del. 1993); Conyers v. State, 367 Md. 571, 790 A.2d 15 (2002); State v. Fuller, 346 S.C. 477, 552 S.E.2d 282 (2001); Gallimore v. Commonwealth, 246 Va. 441, 436 S.E.2d 421 (1993).
13. Weng Y-H, Chen C-H and Sun C-T: Towards the Human-Robot Co-Existence Society: On Safety Intelligence for Next-Generation Robots, 1 Int. J. Soc. Robot. 267, 273 (2009).
14. United States v. Powell, 929 F.2d 724 (D.C. Cir. 1991).
15. Francis Bowes Sayre: Criminal Responsibility for the Acts of Another, 43 Harv. L. Rev. 689 (1930).
16. Tuthill G.S.: Legal Liabilities and Expert Systems, AI Expert (Mar. 1991).
17. Gerstner M.E.: Comment, Liability Issues with Artificial Intelligence Software, 33 Santa Clara L. Rev. 239 (1993). http://digitalcommons.law.scu.edu/lawreview/vol33/iss1/7
18. Ransome v. Wisconsin Elec. Power Co., 275 N.W.2d 641, 647-48 (Wis. 1979).
19. Cole G.S.: Tort Liability for Artificial Intelligence and Expert Systems, 10 Computer L.J. 127 (1990).
20. Restatement (Second) of Torts § 552: Information Negligently Supplied for the Guidance of Others (1977).
21. Independent School District No. 454 v. Statistical Tabulating Corp., 359 F. Supp. 1095 (N.D. Ill. 1973).
22. Stanley v. Schiavi Mobile Homes, Inc., 462 A.2d 1144 (Me. 1983).
23. Kingston J.K.C.: “Artificial Intelligence and Legal Liability” (2016).

Legal Risks of Artificial Intelligence (Author: Anar Mahmudov)

Artificial intelligence (AI) is everywhere, and its development, deployment and use are moving forward rapidly and contributing to the global economy. AI has many benefits, such as improvements in creativity, services, safety and lifestyles, and it helps solve problems; yet at the same time it raises many concerns, such as adverse impacts on human autonomy, privacy, and fundamental rights and freedoms (OECD 2019).

The legal discourse on the legal and human rights issues of artificial intelligence (AI) is well established, with many detailed legal analyses of specific individual issues. But this field is a regulatory moving target, and there is a need for an exploratory overview that looks at the breadth of issues, curated in a single place. Also critically missing is a greater discussion and mapping of vulnerability to such issues.

The essay’s main research question is: What are the main legal risks related to AI?

 Discrimination and biased AI systems

Artificial intelligence systems have the potential to reinforce pre-existing human biases. A machine has no predetermined concept of right and wrong, only those which are programmed into it. A system that can learn for itself and act in ways unforeseen by its creators may act contrary to its creators’ original intentions. While the big data on which AI is based is extensive, it is neither complete nor perfect. This imperfect data feeds algorithms and AI, and can ‘bake’ discrimination into algorithms. As a result, human biases may be accentuated rather than resolved.

The truth is that many AI devices are better than human beings at identifying small differences. However, algorithms and machine learning may also develop false correlations between appearance, origin or other human attributes, thereby replicating and extending discriminatory practices. Several recent controversies have illustrated this type of bias in a particularly shocking way. In 2015, Google Photos, a face recognition software, caused an uproar when two young African Americans realized that one of their photos had been tagged as ‘Gorillas’. The algorithm’s and model’s results may be incorrect or discriminatory if the training data renders a biased picture of reality, or if it has no relevance to the area in question. Such use of personal data would be in contravention of the fairness principle.
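
The mechanism by which discrimination gets ‘baked into’ algorithms can be shown with a deliberately simplified, synthetic sketch: a rule learned from historically biased decisions reproduces the bias of those decisions. The data, groups, and thresholds below are invented purely for illustration.

```python
import random

random.seed(0)

# Synthetic, deliberately biased training data: group "B" applicants were
# historically approved less often than group "A" at the same income level.
def historical_label(income: float, group: str) -> int:
    threshold = 50_000 if group == "A" else 70_000   # biased past decisions
    return 1 if income >= threshold else 0

train = [(random.uniform(20_000, 100_000), random.choice("AB")) for _ in range(10_000)]
labels = [historical_label(income, group) for income, group in train]

# A "model" that simply learns the approval rate per group from these labels
# inherits the historical bias, even though income is the only real signal.
approval_rate = {
    g: sum(l for (_, grp), l in zip(train, labels) if grp == g)
       / sum(1 for _, grp in train if grp == g)
    for g in "AB"
}
print(approval_rate)  # group "B" ends up with a markedly lower approval rate
```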

Unfairness, bias and discrimination repeatedly pop up as issues and have been identified as a major challenge related to the use of algorithms and automated decision-making systems, e.g., to make decisions related to health, employment, credit, criminal justice, and insurance. For instance, in August 2020, protests were made and legal challenges are expected over the use of a controversial exam algorithm used to assign grades to GCSE students in England.

A focused paper from the EU Agency for Fundamental Rights outlines the potential for discrimination against individuals via algorithms and states that “the principle of non-discrimination, as enshrined in Article 21 of the Charter of Fundamental Rights of the European Union, needs to be taken into account when applying algorithms to everyday life” (FRA 2018). It cites examples with potential for discrimination: automated selection of candidates for job interviews, and the use of risk scores in creditworthiness assessments or in trials. In its resolution on the fundamental rights implications of big data (privacy, data protection, non-discrimination, security and law enforcement), the European Parliament stressed that “because of the data sets and algorithmic systems used when making assessments and predictions at the different stages of data processing, big data may result not only in infringements of the fundamental rights of individuals but also in differential treatment of and indirect discrimination against groups of people with similar characteristics, particularly with regard to fairness and equality of opportunities for access to education and employment, when recruiting or assessing individuals or when determining the new consumer habits of social media users”.

It is, therefore, clear that AI systems may create inequalities and discrimination, thus posing a legal risk.

   Liability for harms

The deployment and use of AI technologies pose risks that can cause damage to persons and property: for example, a driverless car running over pedestrians, crashes and damage caused by a partially operated drone, or a wrongful medical diagnosis by an AI software programme. As there are many parties involved in an AI system (data provider, designer, manufacturer, programmer, developer, user and the AI system itself), liability is difficult to establish when something goes wrong, and there are many factors to be taken into consideration.

As outlined by the Assessment List for Trustworthy AI (ALTAI), accountability calls for mechanisms to be put in place to ensure responsibility for the development, deployment and use of AI systems – risk management, and identifying and mitigating risks in a transparent way that can be explained to and audited by third parties. Accountability in AI requires both the function of guiding action (by forming beliefs and making decisions) and the function of explanation (by placing decisions in a broader context and classifying them along moral values). The accountability gap is a worse problem than it might first seem, causing problems in three areas: causality, justice, and compensation. According to the Privacy International and Article 19 report, even when potential harm is found, it can be difficult to hold those responsible accountable for violations.

   Affected human rights principles

International human rights treaties lay down obligations that their signatories are bound to respect and fulfil. States must refrain from interfering with rights and take positive actions to fulfil their enjoyment. While none of them currently explicitly apply to or mention ‘artificial intelligence’ or ‘machine learning’, their broad and general scope would cover most of the risks and challenges identified.

AI legal risk – human rights principles that might be affected:

• Cybersecurity vulnerabilities – the right to privacy; freedom of expression and the free flow of information.
• Unfairness, bias and discrimination – elimination of all forms of discrimination against women; equal rights of men and women; enjoyment of children’s rights without discrimination; equality before the law and equal protection of the law without discrimination; enjoyment of prescribed rights without discrimination; non-discrimination and the right to life of migrant workers; right to liberty and security of the person; prohibition of discrimination on the basis of disability; right to a fair trial; right to freedom from discrimination.
• Privacy and data protection issues – migrants’ right to privacy; respect for the privacy of persons with disabilities; right to respect for private and family life; right to privacy and data protection; children’s privacy; protection of the integrity of older persons and their privacy and intimacy.
• Liability issues related to damage caused and lack of accountability for harms – right to life; right to effective remedies.

             Vulnerability

It is not enough to simply outline the legal issues, gaps and challenges and the human rights principles AI implicates. The frame of ‘vulnerability’ helps consolidate the identification of critical areas of concern and guide legal AI risk and impact mitigation efforts to better protect human and societal well-being.

More specifically, in human rights discourse, the term vulnerability is used to indicate a heightened susceptibility of certain individuals or groups to being harmed or wronged by others or by the state. Populations that are particularly prone to being harmed, exploited or discriminated against include, among others, children, women, older people, people with disabilities, and members of ethnic or religious minority groups. This does not mean that these groups are being elevated above others. Characterizing them as ‘vulnerable’ simply reflects the hard reality that these groups are more likely to encounter discrimination or other human rights violations than others – and all of these categories are implicated in some form or manner in the legal issues and human rights principles at stake.

The use and deployment of AI technologies disproportionately affect vulnerable groups. For example, the UNESCO COMEST Preliminary Study on the Ethics of Artificial Intelligence gives the example of the Allegheny Family Screening Tool (AFST), a predictive model used to forecast child neglect and abuse. It states that the tool “exacerbates existing structural discrimination against the poor and has a disproportionately adverse impact on vulnerable communities” by oversampling the poor and using proxies to understand and predict child abuse in a way that inherently disadvantages poor working families. There are also concerns about “increasingly relying on technology to collect personal data of vulnerable people such as migrants and refugees,” which could “create additional bureaucratic processes that could lead to exclusion from protection.” Children, for instance, are particularly vulnerable. As the UK Information Commissioner’s Office (ICO) explains, “they may be less able to understand how their data is being used, anticipate how this might affect them, and protect themselves against any unwanted consequences”. Finally, AI-powered, data-driven and data-intensive economies may be more lucrative or attractive targets for cyber attacks, given their expansive use of, and dependence on, AI and big data.

List of References

  • Accuracy and fairness for juvenile justice risk assessments. Journal of Empirical Legal Studies (2019).
  • Artificial intelligence in cybersecurity is vulnerable. SC Magazine (2019). https://www.scmagazine.com/home/opinion/artifical-intelligence-in-cybersecurity-is-vulnerable/
  • Artificial intelligence: privacy and legal issues. CPO Magazine (2018). https://www.cpomagazine.com/data-privacy/artificial-intelligence-privacy-and-legal-issues/
  • Council of Europe (2018): Discrimination, artificial intelligence, and algorithmic decision-making. Study by Prof. Frederik Zuiderveen Borgesius, Professor of Law, Institute for Computing and Information Sciences (iCIS), Radboud University Nijmegen, and Researcher at the Institute for Information Law, University of Amsterdam (the Netherlands). https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73
  • European Parliament (2017): Resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law-enforcement.
  • Human dignity of the vulnerable in the age of rights. Springer, Cham (2016), pp. 257-272.
  • Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, Volume 4, December 2020, 100005. https://www.sciencedirect.com/science/article/pii/S2666659620300056
  • Liability for Artificial Intelligence and other emerging digital technologies.
  • On the legal responsibility of autonomous machines. Artificial Intelligence and Law, 25 (3) (2017), p. 293.
  • Policing by machine: Predictive policing and the threats to our rights. H. Couchman (2019).
  • Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55 (4) (2018), pp. 1143-1185.

Extra-contractual Liability in Robotics

The European Parliament adopted a resolution with recommendations in 2017 (the Parliament’s Resolution) inviting the European Commission to present a proposal for a directive on civil law rules for robotics, paving the way for a specific “Law of the Horse Robot.” In its second paragraph, the resolution boldly states that because humankind stands on the threshold of an era when ever more sophisticated robots, bots, and other manifestations of artificial intelligence (‘AI’) seem to be poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider its legal and ethical implications and effects, without stifling innovation. In particular, the resolution asserts that existing liability laws are insufficient in certain instances, such as when a robot can learn and make autonomous judgments, and when machines directly conclude and implement contracts.
Defining Artificial Intelligence and Robot

Artificial Intelligence (AI) can be broadly split into three types: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). Today’s AI deployment falls under the category of ANI, which can be defined as the ability of machines to resemble human capabilities in narrow domains. AGI, in contrast, is the holy grail of current AI research. An AGI would have human capabilities across a number of domains. As such, it would be a true human-equivalent AI. ASI is a whole different category altogether. It is an intelligence that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This is the level of AI that worries people. Once AGI is accomplished, and especially with ASI, the question of whether the AI has reached consciousness, or self-awareness, will arise, prompting both ancient and modern debates about what it means to be “human,” to have free choice, and to be entitled to complete, constitutional personhood. Artificial intelligence will eventually transform into artificial life. However, these specific difficulties, as well as ASI’s unpredictable impact on society, are outside the purview of this study.
In some ways, robots are the physical embodiment of AI. AI can generate information, since it is solely computational, but it cannot physically impact its environment. It is just a phantom without a shell. For the purposes of legal investigation, the sense-think-act definition of robots is a typical one. The Parliament’s Resolution, too, recommends establishing a common definition and classification of “smart robots,” based on the sense-think-act characteristics of: (i) the capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data; (ii) the capacity to learn through experience and interaction; (iii) the form of the robot’s physical support; and (iv) the capacity to adapt its behavior and actions to the environment. The Parliament’s definition also captures the essence of what “think” actually means. It is the ability to learn, to the point that a robot’s behavior cannot be predicted in advance, because the outcome will be determined by the robot’s “thinking” process. A robot following the rule “turn left when forward motion is blocked” may detect the impediment and act on it, but it does not consider what it has detected; it has already been programmed. Thinking necessitates the robot’s ability to analyze what it has sensed in order to determine how to respond.
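
A minimal sketch of the sense-think-act paradigm discussed above; the sensor reading, the decision rule, and the actions are all hypothetical placeholders invented for illustration.

```python
import random

def sense() -> float:
    """Hypothetical sensor: distance (in metres) to the nearest obstacle."""
    return random.uniform(0.0, 5.0)

def think(distance_m: float) -> str:
    """Decide how to respond to what was sensed.

    A purely reactive rule ("turn left when forward motion is blocked")
    stops here; a learning robot would instead update this policy from
    experience, which is what makes its behaviour hard to predict.
    """
    return "turn_left" if distance_m < 1.0 else "move_forward"

def act(action: str) -> None:
    print(f"executing: {action}")

# A few iterations of the sense-think-act loop.
for _ in range(3):
    act(think(sense()))
```
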
Some features of robots, such as embodiment, emergence and social valence, are exceptional for the current legal system. Embodiment is the consequence of a physical body being able to act on information and directly affect the world at large. Embodiment represents a challenge for the law because the intangible (i.e., information) generally does not give rise to actions under product liability. Consider the U.S. case of Winter v. G.P. Putnam’s Sons, in which the court held that two mushroom pickers who were poisoned as a result of relying on wrong information in an encyclopedia on mushrooms could not claim damages from the publisher based on product liability. Product liability requires a defect in a tangible product. However, the physical product, i.e., the book, was not defective; its intangible content was. Embodiment muddies the border between informing and acting. The book in Winter merely informed. It did not act; it was the humans that picked and prepared the mushrooms. Embodied information can in some cases give rise to a product liability claim. As the court in Winter states, aeronautical charts have been held to be products for the purpose of product liability law. Ultimately, however, the court differentiates between aeronautical charts and books such as the encyclopedia by drawing an analogy between charts and a compass, explaining that both are tools that may be used to guide an individual where knowledge of natural features is required. In contrast, a book such as the mushroom encyclopedia is rather seen “as a book on how to use a compass or an aeronautical chart” and as such represents “pure thought and expression,” i.e., the intangible. We can imagine a mushroom-picking robot with ANI capabilities. It identifies and picks edible mushrooms. Alas, it mistakenly picks a poisonous mushroom for the family it serves, who, after partaking of the tasty but ill-fated dinner, suffers severe discomfort. If a court likens an aeronautical chart to a tool for product liability purposes, it is not a stretch to believe that the same might occur for a robot that faultily picks a poisonous mushroom – at least regarding the embodiment of information.
Emergence is used here in nearly the same sense as autonomy; however, emergence refers to the intelligent, higher-level behavior displayed by robots. The Parliament’s Resolution defines a robot’s autonomy as “the ability to make decisions and implement them in the outside world, independently of external control or influence; whereas this autonomy is of a purely technical nature and its degree depends on how sophisticated a robot’s interaction with its environment has been designed to be.” The extraordinary bridges built by army ant colonies are perfect examples of emergent higher-level behavior. An individual ant will (i) slow down when it reaches a gap it cannot cross, and (ii) freeze in place as soon as a still faster-moving ant from behind walks over it. The individual ant’s behavior is very simple, but the resultant bridges of interlinked ant bodies are extraordinarily complex. Swarms of robots have been programmed to exhibit similar behavior. The ant army example reveals the difference between autonomy and emergence: the individual ant acts autonomously, while the colony displays emergent, higher-level qualities. The question is then: when does a robot pass the threshold from tool to autonomous action? We distinguish between tools used by humans and tools (machines) deployed by humans. Deployed tools can operate in circumstances unforeseen by the machine’s creators precisely because they are autonomous. As will be discussed below, autonomy makes it difficult to determine to whom liability shall fall for harm caused by an autonomously acting robot. Autonomy also leads us back to the sense-think-act paradigm mentioned as part of the definition of “smart robot.” Consequently, we believe that the autonomy of robots is more important from a legal perspective than emergence. Emergence still plays a role, but in our opinion rather with regard to AI and robots being generative technologies, with multiple layers of hardware, software, and protocols embedded within a social layer.
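
The idea that simple local rules can produce emergent, colony-level structure can be sketched as a drastically simplified toy simulation, in which ants that reach an un-bridged gap freeze in place until the gap is spanned; the grid, counts, and stopping condition are invented, and the "faster ant walks over you" rule is collapsed into a single freeze-at-the-gap rule for brevity.

```python
GAP_START, GAP_LENGTH, PATH_LENGTH = 10, 3, 20

def simulate(num_ants: int = 30, steps: int = 200) -> int:
    """Toy model: ants walk right; an ant that reaches the un-bridged gap
    freezes in place, and frozen ants accumulate until they span the gap.
    Returns the number of ants that crossed to the far side."""
    positions = list(range(0, -num_ants, -1))   # ants queued up behind the start
    frozen = 0                                  # ants currently forming the bridge
    crossed = 0
    for _ in range(steps):
        for i, pos in enumerate(positions):
            if pos is None:                     # ant is frozen or has crossed
                continue
            nxt = pos + 1
            if GAP_START <= nxt < GAP_START + GAP_LENGTH and frozen < GAP_LENGTH:
                # Local rule: freeze at the gap; this ant becomes part of the bridge.
                frozen += 1
                positions[i] = None
            else:
                positions[i] = nxt
                if nxt >= PATH_LENGTH:
                    crossed += 1
                    positions[i] = None
    return crossed

print(simulate())  # most ants cross once a few have frozen into a "bridge"
```
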
Social valence is the fact that robots elicit emotional responses from people in a social context. Put differently, we anthropomorphize robots. Anthropomorphizing robots indicates that people are willing to substitute them for animals or human beings in certain contexts and for certain purposes. The substitution of a robot for a living thing may be innocent, emotional, almost instinctual. The patient who blames a surgical robot for a botched procedure projects partial humanity – and hence responsibility – onto the technology. The soldier who mourns the loss of his bomb-disarming robot projects onto the robot human qualities of comradeship, courage, and commitment to fellow soldiers. When a companion robot that operates in our home sends personal data about us to a corporation, we feel betrayed, even though we would never think that a camera and a microphone could betray us. The consequence is that a layperson might “feel” that the law should hold the robot liable. And, as the history of minority rights shows, the law frequently develops in response to societal demands.
The extra-contractual liability of the robot
The Parliament’s Resolution foresees that a robot’s autonomy will cause difficulty in attributing liability for the robot’s actions because: Whereas the more autonomous robots are, the less they can be considered to be simple tools in the hands of other actors (such as the manufacturer, the operator, the owner, the user, etc.); whereas this, in turn, questions whether the ordinary rules on liability are sufficient or whether it calls for new principles and rules to provide clarity on the legal liability of various actors concerning responsibility for the acts and omissions of robots where the cause cannot be traced back to a specific human actor and whether the acts or omissions of robots which have caused harm could have been avoided.
Solving this problem is seen as critical by the Parliament, which suggests as possible solutions to investigate: a compulsory insurance scheme (such as currently exists for cars); a compensation fund for damages not covered by any insurance; the consideration of limited liability for the actors involved in the “making” of the robot as a benefit for contributing to a compensation fund; a robot registry with all relevant details on liability (similar to a companies registry); and even a specific legal status/personhood for certain categories of robots, which would elevate the robot to being able to be held directly liable. In order to understand whether the Parliament’s suggestions are necessary, we will look at how the law handles extra-contractual liability where one autonomous being acts on behalf of or under the direction of another, namely regarding agents, minors and others under supervision, slaves, and animals.

First, some general principles of tort law are needed. As tort law differs considerably from jurisdiction to jurisdiction, we will use the Principles of European Tort Law (PETL) as a basis against which to assess the impact robots may have on tort law, though with some “peeks” at German law in order to anchor the principles to a concrete jurisdiction or where the PETL do not address the issue. The PETL were drafted by the European Group on Tort Law (EGTL), a network of academics that used hypotheticals and country and comparative reports to identify principles common to both common law and civil law European jurisdictions.

Liability requires the elements of damage to a legally protected interest; causation of the damage by an activity, which could be either an act or an omission; and a recognized base for the liability. Causation and the base of liability are the relevant elements for the purposes of this section; damage will generally not be an issue. Causation is given where an activity is a conditio sine qua non for the damage, i.e., where the damage would not have occurred in the absence of the activity. As this element is too encompassing, it is limited by requiring a base of liability. The bases are fault; an abnormally dangerous activity; and an auxiliary acting on behalf of the person deemed liable or, more generally, being in charge of another. “Fault” is the intentional or negligent breach of the required standard of conduct; this can also encompass product liability resulting from any deviation from standards that are reasonable to be expected. With regard to product liability, the burden of proof is reversed: the manufacturer must prove that the required standard of conduct was met in order not to be liable. An abnormally dangerous activity gives rise to strict liability; an activity is deemed abnormally dangerous where there is a foreseeable and highly significant risk of damage even when all due care is exercised, and where the activity is not a common one.
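
As a hypothetical sketch of the PETL-style structure just described (damage, causation, and a recognized base of liability), the enum values, field names, and example below are illustrative only and do not restate the PETL text.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class BaseOfLiability(Enum):
    FAULT = auto()                          # intentional or negligent breach of the standard of conduct
    ABNORMALLY_DANGEROUS_ACTIVITY = auto()  # strict liability
    LIABILITY_FOR_OTHERS = auto()           # auxiliary acting on one's behalf / being in charge of another

@dataclass
class Claim:
    damage_to_protected_interest: bool
    conditio_sine_qua_non: bool             # damage would not have occurred without the activity
    base: Optional[BaseOfLiability]

    def is_established(self) -> bool:
        # All three elements are needed: damage, causation, and a base of liability.
        return (self.damage_to_protected_interest
                and self.conditio_sine_qua_non
                and self.base is not None)

# Hypothetical example: harm caused by a deployed robot while under its owner's charge.
claim = Claim(True, True, BaseOfLiability.LIABILITY_FOR_OTHERS)
print(claim.is_established())  # True
```
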
To whom liability shall fall

Public policy touches upon a vastly wider realm of issues than just law. Regulation can guide societal developments, enabling and encouraging some while disabling others. This broader approach also applies to liability policy. From a societal perspective, liability generally has two goals: a corrective function of remedying harms and an incentive function of preventing harm. The corrective function can fail by misdirecting liability, i.e., attributing it to the wrong actor. Harm may be over-corrected insofar as strict liability is pushed onto the manufacturer of the robot, regardless of whether the harm is one that could be expected and whether the manufacturer met their duty of care. It may also be under-corrected in that harm may end up uncompensated, for example because it is impossible to determine specifically what led to the harm or under whose charge the robot was acting. On the incentive side, an excessive emphasis on preventing harm beyond the reasonable could chill innovation.

Our investigation above of general tort principles revealed some actors to whom liability could fall: the robot itself, the manufacturer, or the owner/supervisor. In addition, it could also fall to no one; the damage would be fully sustained by the harmed party. Which of these possibilities (and there are many more) is best suited must be assessed against how well it achieves liability’s two goals.

Let us start with the last-mentioned possibility of not holding anyone liable. This approach would fail both to correct harms and to incentivize their reduction. As such, it should be rejected.

Liability falling to the robot itself would require that it be granted legal personhood. There is no hindrance to such a step. However, in order to be able to correct any harm, a robot would need a source of capital in order to pay damages – here, a personal fund tied to the robot could be one solution. Depending on its purpose, it could also serve the harmed party instead of paying monetary damages. Holding the robot itself liable could also meet the incentive function. For this, the robot would need to learn from the situations in which it is held liable and adjust its behavior accordingly.

Liability may also be accorded to the owner/supervisor of the robot. This approach would mirror more closely how the law currently responds to harms caused by autonomous entities, such as animals and agents. However, a user will rarely have the insight into the robot’s workings that the manufacturer has, which is an argument against this approach. Moreover, while the goal of remedying harms would be met, the incentive function would only be met where the market leads to such an outcome, either through competition among manufacturers or because robot owners would have some sort of recourse against the manufacturers.

Turning to holding the manufacturer liable, we need to differentiate between the manufacturer of the robot and, where distinct, the manufacturer of the AI as a component of the robot. Recognizing AI as a component is key for this approach. It is the robot manufacturer that combines and integrates the disparate parts into something that is more than their sum. It is also the robot manufacturer that is best situated to fully understand the complexities of the robot. The AI manufacturer will not necessarily control how the AI will be integrated into the robot.
Should the robot manufacturer be able to determine that it was the AI component that caused the harm, it could still seek contractual recourse. In the EU, current product liability laws would make the manufacturers of the robot and of the AI jointly and severally liable. Despite the complexities of the robot system and the difficulties other parties face in determining the cause of harm, it seems that existing liability patterns (e.g., strict liability or product liability) might be able to achieve the goals of harm correction and reduction that traditionally define the teleological horizon of civil liability regimes.

References:
1. Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513 (2015). https://digitalcommons.law.uw.edu/cgi/viewcontent.cgi?article=1022&context=faculty-articles
2. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
3. Winter v. G.P. Putnam’s Sons, No. 89-16308, United States Court of Appeals, Ninth Circuit (1991). https://h2o.law.harvard.edu/cases/5449
4. European Group on Tort Law, Principles of European Tort Law – Text and Commentary (2005). https://drive.google.com/file/d/11gaEReh1kg4SA04kWIBWVa3iFbR1B1Jd/view?usp=sharing