1. Introduction
“Computers can only issue mandatory instructions - they are not programmed to exercise discretion.” [1]
This statement was made by a New York court fifty-five years ago, and it is no longer strictly true. What was once the stuff of science fiction is now science fact: computers capable of solving problems by replicating human thought processes [2]. As a result of the use of artificial intelligence (hereinafter AI) [3], programs that provide tax advice [4], diagnose medical conditions [5], and configure computers [6] are now available. With continued research, the scope of AI programs will broaden [7], and as AI penetrates markets for critical services and processes, the risk of disaster will likely increase [8]. Conventional software programs have already caused commercial jet near-misses, nuclear power plant shutdowns, and a missile alert triggered by a false indication of a world war [9]; additional problems are easy to foresee when a program that mimics human thought is used, and as systems become more complex, failure becomes unavoidable [10]. Who is responsible if such a failure occurs? The answer is ambiguous.
The impact of an AI program error has not yet been considered by the courts, and no case law exists to provide guidance to the software developer, vendor, or user regarding their potential liability. Furthermore, there are no laws addressing this issue. This comment proposes a liability standard that eliminates this uncertainty.
This comment first discusses the differences between conventional and AI software, including the subset of AI known as expert systems, and then presents various liability theories and how they relate to AI. Finally, it proposes applying a strict liability standard to AI.
2. Criminal liability in artificial intelligence
The following references are primarily to US law; however, many other jurisdictions have similar legislation in the relevant areas.
Gabriel Hallevy [11] discusses whether and how artificially intelligent entities could be held criminally liable. Criminal laws typically require both an actus reus (an action) and a mens rea (a mental intent), and Hallevy categorizes laws as follows (a schematic code sketch follows the list):
1. Those in which the actus reus is an action, and those in which the actus reus is a failure to act;
2. Those in which the mens rea requires knowledge or being informed; those in which the mens rea only requires negligence (“a reasonable person would have known”); and strict liability offences in which no mens rea is required.
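Hallevy’s two-way categorization can be made concrete with a small data model. The sketch below (in Python) is purely illustrative: the type names and the example offences are our own labels, not Hallevy’s terminology, chosen to show how each offence pairs a form of actus reus with a required level of mens rea, with strict liability represented as requiring no mental element at all.

from dataclasses import dataclass
from enum import Enum, auto

class ActusReus(Enum):
    ACTION = auto()            # the offence is committed by doing something
    FAILURE_TO_ACT = auto()    # the offence is committed by omitting a required act

class MensRea(Enum):
    KNOWLEDGE = auto()         # the offender knew, or was informed
    NEGLIGENCE = auto()        # "a reasonable person would have known"
    STRICT_LIABILITY = auto()  # no mental element is required

@dataclass
class Offence:
    name: str
    actus_reus: ActusReus
    mens_rea: MensRea

# Hypothetical examples, used only to illustrate the two dimensions.
offences = [
    Offence("fraud", ActusReus.ACTION, MensRea.KNOWLEDGE),
    Offence("failing to report an accident", ActusReus.FAILURE_TO_ACT, MensRea.KNOWLEDGE),
    Offence("negligently causing injury", ActusReus.ACTION, MensRea.NEGLIGENCE),
    Offence("speeding", ActusReus.ACTION, MensRea.STRICT_LIABILITY),
]

for o in offences:
    print(f"{o.name}: actus reus = {o.actus_reus.name}, mens rea = {o.mens_rea.name}")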
Hallevy then proposes three legal models for considering AI-related offences:
- Perpetrator via another. If a mentally deficient person, a child, or an animal commits an offence, the perpetrator is considered an innocent agent because they lack the mental capacity to form a mens rea (this is true even for strict liability offences). However, if the innocent agent was directed by another person (for example, if the owner of a dog directed the dog to attack someone), the instructor is held criminally liable (see [12] for US case law).
According to this model, an AI program could be considered an innocent agent, with the software programmer or the user being regarded as the perpetrator-via-another.
- Natural-probable-consequence. In this model, a part of the AI program that was designed for good purposes is activated inappropriately and commits a criminal act. Hallevy provides an example (quoted from [13]) of a Japanese employee of a motorcycle factory who was killed by an artificially intelligent robot working nearby. The robot incorrectly identified the employee as a threat to its mission and calculated that pushing him into an adjacent operating machine would be the most efficient way to eliminate this threat; it smashed the surprised worker into the machine with its very powerful hydraulic arm, killing him instantly, and then resumed its duties.
The usual legal application of “natural or probable consequence” liability is to prosecute co-conspirators in a crime. If no conspiracy can be established, it is still possible (under US law) to hold an accomplice legally liable if the perpetrator’s criminal acts were a natural or probable consequence (a phrase coined in [14]) of a scheme that the accomplice encouraged or aided [15], as long as the accomplice was aware that some criminal scheme was in progress.
As a result, users or (more likely) programmers may be held legally liable if a criminal offence committed by an AI program was a natural and probable consequence of the program or of its use. However, applying this principle requires distinguishing between AI programs that ‘know’ that a criminal scheme is under way (i.e., they were programmed to carry out a criminal scheme) and those that do not (they were programmed for another purpose). For the latter group of programs, it may not be possible to prosecute crimes whose mens rea requires knowledge, but crimes requiring only a ‘reasonable person’ mens rea, and strict liability offences, remain possible.
- Direct liability. This model assigns an AI system both actus reus and mens rea.
Attributing an actus reus to an AI system is a simple process. The actus reus of an offence occurs when a system takes an action that results in a criminal act or fails to take an action when there is a duty to act.
Assigning a mens rea is much more difficult, and it is here that the three levels of mens rea come into play. It may be possible to hold AI programs criminally liable for strict liability offences, where no intent to commit an offence is required. Taking self-driving cars as an example: speeding is a strict liability offence, so, according to Hallevy, if a self-driving car is found to be exceeding the speed limit for the road it is on, the law may well assign criminal liability to the AI program that was driving the car at the time (the code sketch at the end of this section illustrates the point).
This possibility raises a number of other issues that Hallevy addresses, such as defences (could a malfunctioning program claim a defence similar to the human defence of insanity? Or, if it is infected with an electronic virus, could it invoke defences akin to coercion or intoxication?) and punishment (who or what would be punished for an offence for which an AI system was directly liable?).
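To make the strict liability point concrete, the following sketch (our own illustration, not Hallevy’s; the road, speeds, and limit are invented) shows that a speeding determination rests entirely on the recorded act, namely the measured speed against the posted limit, with no representation of intent, knowledge, or negligence anywhere in the check.

from dataclasses import dataclass

@dataclass
class DrivingRecord:
    """Telemetry attributed to the AI driving system at a given moment."""
    road_id: str
    measured_speed_kph: float
    posted_limit_kph: float

def speeding_offence_established(record: DrivingRecord) -> bool:
    # Strict liability: the act alone (exceeding the limit) completes the
    # offence; no mens rea (intent, knowledge, negligence) is examined.
    return record.measured_speed_kph > record.posted_limit_kph

record = DrivingRecord(road_id="A23", measured_speed_kph=117.0, posted_limit_kph=110.0)
print("Speeding offence established:", speeding_offence_established(record))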
3. Civil liability in AI
- Negligence
When software fails or a party is injured as a result of using software, the resulting legal proceedings usually allege negligence rather than criminal liability [16]. Gerstner [17] discusses the three elements that must typically be demonstrated in order for a negligence claim to succeed:
1. The defendant owed the plaintiff a duty of care;
2. The defendant breached that duty; and
3. The breach caused the plaintiff harm.
Regarding point 1, Gerstner suggests that there is little doubt that a software vendor owes a duty of care to the customer, but determining what standard of care is owed is difficult.
On point 2, Gerstner suggests several ways in which an AI system could violate the duty of care: errors in the program’s function that could have been detected by the developer; an incorrect or inadequate knowledge base; incorrect or inadequate documentation or warnings; failing to keep the knowledge up to date; the user supplying faulty input; the user relying unduly on the output; or using the program for an incorrect purpose.
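Several of these potential breaches, notably an out-of-date knowledge base and missing or inadequate warnings, are amenable to routine engineering checks. The sketch below is a minimal, hypothetical example (the field names and the six-month review interval are assumptions, not anything Gerstner prescribes) of surfacing such problems to the user before the system is relied upon.

from datetime import date, timedelta

# Hypothetical knowledge-base metadata; the field names are illustrative only.
knowledge_base = {
    "domain": "paediatric dosage guidance",
    "last_reviewed": date(2015, 3, 1),
    "warnings": [],   # e.g. "not validated for neonates"
}

MAX_AGE = timedelta(days=180)   # assumed review interval, not a legal standard

def pre_run_checks(kb: dict, today: date) -> list:
    """Return a list of issues to surface to the user before the system is used."""
    issues = []
    if today - kb["last_reviewed"] > MAX_AGE:
        issues.append(f"Knowledge base last reviewed on {kb['last_reviewed']}; review is overdue.")
    if not kb["warnings"]:
        issues.append("No usage warnings or limitations are recorded for this knowledge base.")
    return issues

for issue in pre_run_checks(knowledge_base, date(2016, 1, 15)):
    print("WARNING:", issue)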
In terms of point 3, the question of whether an AI system can be considered to have caused an injury is also debatable. Perhaps the most important question is whether the AI system recommends an action in a given situation (as many expert systems do) or takes the action itself (as self-driving and safety-equipped cars do). Causation is harder to prove in the former case, because at least one other agent, the human acting on the recommendation, is involved; in the latter case it is much easier.
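This distinction suggests a simple design precaution: record, for every output, whether a human decision stood between the system and the outcome. The sketch below is a hypothetical audit-logging fragment (the record fields and examples are our own, not drawn from the literature discussed here) that tags each output either as a recommendation awaiting human confirmation or as an autonomous action.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class OutputRecord:
    """Audit entry distinguishing advice from autonomous action."""
    description: str
    autonomous: bool                      # True: the system acted; False: it only recommended
    confirmed_by: Optional[str] = None    # the human who accepted a recommendation, if any
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log = []

# An expert system recommending a drug dose: a human remains in the causal chain.
log.append(OutputRecord("Recommend 250 mg amoxicillin", autonomous=False, confirmed_by="Dr. A"))

# A safety-equipped car braking by itself: the system's act is the proximate act.
log.append(OutputRecord("Emergency braking applied", autonomous=True))

for entry in log:
    mode = "autonomous action" if entry.autonomous else "recommendation"
    print(f"[{entry.timestamp:%Y-%m-%d %H:%M}] {mode}: {entry.description}")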
Gerstner also discusses a legal exception in the United States known as “strict liability negligence.” This applies to products that are defective or unreasonably dangerous and that cause injury (as opposed to mere economic loss) when used in a normal, intended, or reasonably foreseeable manner. She debates whether software is a ‘product’ or merely a ‘service’, citing a case in which electricity was held to be a product [18], and thus leans toward defining software as a product rather than a service. Assuming that software is a product, it is the responsibility of AI system developers to ensure that their systems are free of design flaws, manufacturing flaws, and inadequate warnings or instructions.
Cole [19] goes into greater detail about whether software is a product or a service. His conclusion is that treating AI systems as products is “partially applicable at best,” and he prefers to think of AI as a service rather than a product; however, he acknowledges that the law in this area is ambiguous.
Cole cites some case law on the “duty of care” that AI systems must uphold:
- In [20], a school district filed a negligence claim against a statistical bureau that (allegedly) provided inaccurate calculations of the value of a burned-down school, resulting in an underinsured loss for the school district. The duty under consideration was the obligation to provide information with reasonable care. The court considered factors such as: the existence, if any, of a guarantee of correctness; the defendant’s knowledge that the plaintiff would rely on the information; the limitation of potential liability to a small group; the absence of proof of any correction once errors were discovered; the undesirability of requiring an innocent party to bear the burden of another’s professional mistakes; and the promotion of cautionary techniques among informational (tool) providers.
- Cole discusses the duty to draw reasonable conclusions from unreasonable inputs in light of [21]. Following [22], he suggests that AI developers have an affirmative duty to provide relatively inexpensive, harmless, and simple input error-checking techniques (a sketch of such a check appears after this list), but notes that these rules may not apply where the AI program performs a function in which input errors may be directly life-threatening (e.g., administering medicine to a patient); in such cases, he suggests applying the rules relating to “ultra-hazardous activities and instrumentalities.”
- According to Cole, those who supply AI systems must be aware of the systems’ limitations, and this information must be communicated to the buyer. It is well established that vendors have a duty to inform buyers of any known flaws; however, how can unknown weaknesses or flaws be identified and communicated?
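Cole’s point about inexpensive input error-checking translates directly into routine validation code. The sketch below is a hypothetical range check for inputs to a dosage-advice system; the limits and field names are invented for illustration and carry no clinical or legal authority. The design choice it reflects is simply that refusing implausible input is cheap compared with the harm an unchecked value could cause.

# Hypothetical plausibility limits for a dosage-advice system; the numbers are
# illustrative only.
PLAUSIBLE_RANGES = {
    "patient_weight_kg": (0.5, 300.0),
    "patient_age_years": (0.0, 120.0),
}

def validate_inputs(inputs: dict) -> list:
    """Return human-readable errors for missing or implausible inputs."""
    errors = []
    for name, (low, high) in PLAUSIBLE_RANGES.items():
        value = inputs.get(name)
        if value is None:
            errors.append(f"Missing input: {name}")
        elif not (low <= value <= high):
            errors.append(f"Implausible value for {name}: {value} (expected {low} to {high})")
    return errors

problems = validate_inputs({"patient_weight_kg": 750, "patient_age_years": 34})
if problems:
    # Refuse to give advice rather than draw conclusions from unreasonable input.
    for p in problems:
        print("INPUT ERROR:", p)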
- Breach of Warranty
If an AI system is a product, warranty law applies to its sale: even if the vendor provides no express warranty (and the user purchases none), there is an implied warranty that the goods are satisfactory, as described, and fit for purpose for a reasonable time, to use phrases from the UK Sale of Goods Act 1979. Some jurisdictions allow contract clauses to void implied warranties; however, when an AI system is purchased built into other goods (such as a car), it appears unlikely that any such contractual exclusions (e.g., between the manufacturer of the car and the supplier of the AI software) could successfully be passed on to the purchaser of the car.
4. Conclusion
It has been established that the legal liability of artificial intelligence systems is determined by at least three factors:
1. Whether artificial intelligence is a product or a service. This is a grey area in the law, with various commentators offering opposing viewpoints.
2. If a criminal offence is being considered, what mens rea is required? It appears unlikely that AI programs will violate laws that require knowledge that a criminal act is being committed; however, it is quite possible that they will violate laws for which it is enough that ‘a reasonable person would have known’ that a particular course of action could result in an offence, and it is almost certain that they will violate strict liability laws.
3. Whether or not the limitations of AI systems are communicated to the buyer. Because AI systems have both general and specific limitations, legal cases concerning such issues may be based on the specific wording of any warnings about such limitations.
There is also the issue of who should be held accountable. This will depend on which of Hallevy’s three models (perpetrator-via-another, natural-probable-consequence, or direct liability) applies:
• In a perpetrator-via-another offence, the person who instructs the AI system – either the user or the programmer – is likely to be held accountable.
• In a natural-probable-consequence offence, liability may fall on anyone who could have foreseen the product being used in the way it was: the programmer, the vendor (of a product), or the service provider [23].
Unless the instructions that came with the product/service spell out the limitations of the system and the possible consequences of misuse in unusual detail, the user is less likely to be blamed.
• AI programs may also be held liable for strict liability offences, in which case the programmer will almost certainly be found to be at fault.
However, in all cases where the programmer is found to be liable, there may be further disagreements as to whether the fault lies with the programmer, the program designer, the expert who provided the knowledge, or the manager who appointed the incompetent expert, program designer, or programmer.
References
1. Pompeii Estates, Inc. v. Consolidated Edison Co., 397 N.Y.S.2d 577, 580 (N.Y. Civ. Ct. 1977).
2. Cariad Hayes, Artificial Intelligence: The Future’s Getting Closer, AM. LAW., Nov. 1988, at 115. Hayes points out that certain “software programs . . . in their complexity . . . imitate certain processes of the human brain.” Id.
3. Artificial intelligence is a general term used to describe that aspect of computer science “concerned with understanding the nature of intelligent action and constructing computer systems capable of such action.” Allen Newell, Artificial Intelligence, in 2 MCGRAW-HILL ENCYCLOPEDIA OF SCIENCE AND TECHNOLOGY 120 (Sybil P. Parker ed., 7th ed. 1992). “A machine has artificial intelligence when there is no discernible difference between the conversation generated by the machine and that of an intelligent person.” ALAN FREEDMAN, THE COMPUTER GLOSSARY: THE COMPLETE ILLUSTRATED DESK REFERENCE 12 (5th ed. 1991) (quoting Alan Turing).
4. “AskDan,” a program from Legal Knowledge Systems Inc., does tax calculations and identifies tax loopholes. Lance B. Eliot, Mass Market Applications: They’re Here, AI EXPERT, Dec. 1989, at 9.
5. MYCIN, developed at Stanford University, is used by physicians to diagnose bacterial blood infections. Yi-Tzuu Chien & Jay Liebowitz, Artificial Intelligence, in 2 ENCYCLOPEDIA OF PHYSICAL SCIENCE AND TECHNOLOGY 1, 14 (Robert A. Meyers ed., 1987). FamilyCare Software from Lundin Laboratories, Inc. gives pediatric advice. Eliot, supra note 4, at 9.
6. XCON was developed by Carnegie-Mellon University for use in configuring Digital Equipment Corporation’s VAX computers. Chien & Liebowitz, supra note 5, at 15.
7. Laurence H. Reece III, Defective Expert Systems Raise Personal Injury Liability Issues, NAT’L L.J., Oct. 12, 1987, at 24.
8. It has been reported that a patient died from excess radiation when a computer-controlled radiation-therapy machine malfunctioned. In another incident, payment was stopped on checks which, due to a software error, paid bondholders $4 million in excess interest. In another instance, twenty sailors were killed when a computer-controlled air-defence missile system malfunctioned as a result of an error. Bob Davis, Costly Bugs, WALL ST. J., Jan. 28, 1987, at A1; L. Nancy Birnbaum, Strict Products Liability and Computer Software, 8 COMPUTER/L.J. 135, 144 n.64 (1988). See also Bev Littlewood & Lorenzo Strigini, The Risks of Software, SCI. AM., Nov. 1992, at 62, 62-63.
9. Michael C. Gemignani, Product Liability and Software, 8 RUTGERS J. COMPUTERS, TECH. & L. 173 (1981).
10. “No complex computer program has ever been marketed that did not have some defect, somewhere.” Hayes, supra note 2, at 115 (quoting Robert Cesari). But see Littlewood & Strigini, supra note 8, at 62 (stating that, in theory, design faults in a program could be detected and removed).
11. Hallevy, G.: The Criminal Liability of Artificial Intelligence Entities. http://ssrn.com/abstract=1564096 (15 February 2010).
12. Morrisey v. State, 620 A.2d 207 (Del. 1993); Conyers v. State, 367 Md. 571, 790 A.2d 15 (2002); State v. Fuller, 346 S.C. 477, 552 S.E.2d 282 (2001); Gallimore v. Commonwealth, 246 Va. 441, 436 S.E.2d 421 (1993).
13. Weng, Y.-H., Chen, C.-H. and Sun, C.-T.: Towards the Human-Robot Co-Existence Society: On Safety Intelligence for Next-Generation Robots, 1 Int. J. Soc. Robot. 267, 273 (2009).
14. United States v. Powell, 929 F.2d 724 (D.C. Cir. 1991).
15. Francis Bowes Sayre: Criminal Responsibility for the Acts of Another, 43 Harv. L. Rev. 689 (1930).
16. Tuthill, G.S.: Legal Liabilities and Expert Systems, AI Expert (Mar. 1991).
17. Gerstner, M.E.: Comment, Liability Issues with Artificial Intelligence Software, 33 Santa Clara L. Rev. 239 (1993). http://digitalcommons.law.scu.edu/lawreview/vol33/iss1/7
18. Ransome v. Wisconsin Elec. Power Co., 275 N.W.2d 641, 647-48 (Wis. 1979).
19. Cole, G.S.: Tort Liability for Artificial Intelligence and Expert Systems, 10 Computer L.J. 127 (1990).
20. Independent School District No. 454 v. Statistical Tabulating Corp., 359 F. Supp. 1095 (N.D. Ill. 1973).
21. Restatement (Second) of Torts § 552: Information Negligently Supplied for the Guidance of Others (1977).
22. Stanley v. Schiavi Mobile Homes, Inc., 462 A.2d 1144 (Me. 1983).
23. Kingston, J.K.C.: Artificial Intelligence and Legal Liability (2016).