In a growing number of fields, AI systems show better performance than humans. The potential of these systems in various applications is immense. AI systems in the medical and transportation sectors, for example, are expected to save countless lives and confer significant economic advantages on society as a whole. AI systems, however, inevitably cause and will continue to cause damages. This Explainer provides an overview of the debate on how society ought to treat damages caused by these systems.
Introduction: How Can AI Systems Cause Damage?
Artificial Intelligence (“AI”) systems are used in various contexts of our daily lives. Outperforming the human brain in a variety of ways, they make or assist in making decisions that, until recently, were reserved for humans. AI medical applications, for example, are used to reach better diagnostic decisions, as well as decisions on optimal treatments. AI HR recruiters, to give another example, recommend or decide which applicants to hire. AI systems may also assist judges in determining the risk posed by offenders and thus affect their sentencing, and may also determine eligibility for housing, credit or state benefits.
In a growing number of fields, AI systems show better performance than humans. The potential of these systems in various applications is immense. AI systems in the medical and transportation sectors, for example, are expected to save countless lives and confer significant economic advantages on society as a whole.
AI systems, however, inevitably cause and will continue to cause damages. In the context of transportation, for example, autonomous vehicles – which rely on AI systems – have already shown that they are not immune to accidents. AI systems have also extended higher credit to men than to women, have favored white patients over black patients in making medical appointments, and have produced racist or sexist expressions (“chatbots” – virtual representatives that may conduct a sales conversation – are AI-based features that, as it turns out, are capable of coming off as misogynistic and hateful). AI systems in the service of the government or the justice system have mistakenly deprived individuals of well-deserved social benefits, and have based their sentencing recommendations on, among other things, discriminatory parameters.
AI systems offer such opportunities for humankind that society has an interest in encouraging their development and use. This Explainer provides an overview of the debate on how society ought to treat damages caused by these systems.
Background: What is AI?
While there is no single definition of the term “artificial intelligence” (“AI”), it generally refers to systems that are capable of reaching decisions based on their interaction with the world. Autonomous vehicles are a well-known example of an AI system. The cameras, sensors and other devices installed in autonomous vehicles allow them to receive input (data from the environment, such as information about traffic, weather conditions, current location, etc.) and process it in order to produce an output (e.g. steering the vehicle in the desired direction and at the desired speed).
In general, AI systems make their decisions by learning from vast amounts of data and making predictions based on that previous learning. For example, AI medical applications may diagnose diseases based on large datasets that they have “trained” on. Having gone over millions of pictures of tumors and having been told which are malignant and which are benign, for example, AI systems can predict the likelihood of a new tumor being malignant or benign. This technique, generally referred to as “machine learning,” is called “supervised learning” when the systems are “told” in advance about the distinction between malignant and benign tumors and are asked to classify future tumors based on these two pre-defined groups. Given AI systems’ unparalleled capabilities for processing vast amounts of data, they may also be taught through “unsupervised learning,” where the data they receive has not been pre-classified. Rather, the AI systems themselves, using their immense computational power, identify correlations and patterns (which may be unbeknownst to humans) and base their predictions on them. For example, when researching the risk factors that might be associated with a certain disease, unsupervised AI systems may review billions of medical records and identify factors that are linked to an increased risk (be they factors that the medical world has already considered, such as age, blood pressure etc., or surprising factors that were never thought of as related to the disease).
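To make the distinction between the two techniques more concrete, the short sketch below illustrates supervised and unsupervised learning in Python using the scikit-learn library. The tumor measurements, patient records and numbers are invented purely for illustration and are not drawn from any real system discussed in this Explainer.

```python
# A simplified illustration (with invented data) of supervised vs. unsupervised learning.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Supervised learning: the system is "told" in advance which past tumors were
# malignant and which were benign, and learns to classify new ones accordingly.
past_tumors = np.array([[1.2, 0.8], [3.5, 2.1], [0.9, 0.5], [4.0, 2.6]])  # e.g. size, density
labels = np.array([0, 1, 0, 1])                                           # 0 = benign, 1 = malignant

classifier = RandomForestClassifier(random_state=0).fit(past_tumors, labels)
print(classifier.predict([[3.2, 2.0]]))   # predicted class of a previously unseen tumor

# Unsupervised learning: no labels are provided; the system groups records by
# similarity and may surface patterns humans had not noticed.
patient_records = np.array([[55, 140], [23, 110], [61, 150], [19, 105]])  # e.g. age, blood pressure
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patient_records))
```

Real-world systems of course train on far larger datasets and many more features, but the division of labor is the same: in the supervised case humans supply the labels, while in the unsupervised case the grouping emerges from the data itself.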
Why is There a Difference from a Legal Perspective?
Current Legal Doctrines Do Not Necessarily Reconcile with the Nature of AI Systems
Traditional legal frameworks of tort law, which in general is the branch of law providing remedies for harm caused to others, “know” how to handle damages caused by humans. They also “know” how to handle damage produced by machines. Several routes may be pursued in the case of the former, the main one being negligence. Under the negligence framework, persons (or corporations and other legal entities, for that matter) that caused damage are held liable and required to compensate the injured party when certain conditions are met. For example, when a patient sustains an injury during an operation, he or she may sue the physician and the hospital, and be awarded damages if the court finds the defendant(s) were negligent. Very generally speaking, negligence will be established when the person (or other legal entity) that caused the damage acted unreasonably. If a physician acted unreasonably when performing the operation that caused damage, for example, then the physician (and potentially the hospital where he or she is employed) will be found liable and ordered by the court to pay damages to the injured party.
The existing legal framework also “covers” damages caused by products. For cases such as a tire that exploded while the car was on the road, a lawn mower that fell apart and projected a piece that hit its operator, or even an auto-pilot system that caused an aerial accident, the law has employed the doctrine of product liability. Under this doctrine, the injured party can generally recover damages from the manufacturers or sellers of the product, if he or she can show that there was some defect in the damaging product.
For several reasons, many argue that AI systems do not fall within the traditional tort framework, and thus cannot be subject to the negligence or product liability doctrines. Unlike a person (or a company), AI systems do not have legal status (interestingly, some have called for granting legal status to AI robots, but this has so far remained a theoretical idea). Lacking legal status, AI systems are unable to pay compensation for damages that they have caused. Lacking a deep understanding of “right and wrong,” as well as the ability to truly comprehend the consequences of their actions, AI systems will not be deterred from acting negligently just because such behavior is associated with sanctions. The negligence doctrine is therefore viewed by many as inappropriate for AI-induced damages.
At the same time, the fact that AI systems may reach decisions in an autonomous and often unforeseeable manner renders them different from a mere product or tool in the hands of a human operator. Accordingly, many believe that this makes them unsuitable candidates for the application of the product liability doctrine.
Determining Who Is Liable Might Be Very Complex
Moreover, when AI systems are concerned, determining liability may become a very complicated business. AI systems are often referred to as “black boxes,” whose decisions cannot necessarily be anticipated or explained after the fact. Understanding why the damage occurred and who is at fault is further complicated because many different stakeholders are often involved. For example, in the case of an autonomous vehicle involved in a car accident, the damage could be the fault of a human driver present in the car (if any), of a pedestrian, of other drivers or of other autonomous vehicles on the road. It could also be the fault of a malfunctioning lighting system, or of poor road or weather conditions. The car itself may also be the source of the damage, for example if the brake system failed.

But, unlike in the case of a human driver and an ordinary car, autonomous vehicles “add” further potential wrongdoers who might be the reason for the damage. The AI system of the car might have caused or contributed to the accident (for example, if the AI system “trained” on “reading” traffic signs only in good weather conditions, and as a result failed to recognize a stop sign in stormy weather). Notably, the car’s AI systems may comprise different elements, not all necessarily made by the same manufacturer. Moreover, the owner of the vehicle may have installed additional and unrelated AI systems as “patches” that communicate with each other and “feed” each other information. Lastly, AI systems may rely on information transmitted to them by other, external AI systems through what is known as the “Internet of Things” (IoT), where electronic devices communicate with each other without human involvement. For example, an AI-based lighting system (or another autonomous vehicle on the road) may send inaccurate information to an autonomous vehicle, leading to an accident. When an accident occurs, therefore, the difficulty in understanding why the AI system acted as it did, coupled with the high number of potential wrongdoers (some of them AI systems themselves), makes liability for AI-driven damage a tough nut to crack.
How Can It Be Solved?
Different governments, commercial stakeholders and academics have proposed different approaches to address liability for damages caused by AI systems. Some argue that AI systems are still machines and ought to remain subject to the product liability regime, while others focus on these systems’ growing resemblance to humans (in the type of actions they perform and the type of damage they may cause) and propose to adjust the negligence framework so that it applies to AI systems as well.
Additional proposals draw analogies between different types of agency relationships (such as employer-employee) and the relationship between an AI system and its developers. In other words, the AI system would be viewed as the agent of its developer, and liability would be assigned to the developer in a manner similar to assigning liability to employers for damages caused by their employees (similar to how hospitals may be liable for damages caused by their physicians). A recent approach advocated by the European Commission is to distinguish between high- and low-risk AI systems and to apply some form of strict liability to the operators of these systems, such that the operators of high-risk AI systems will be liable for damages those systems have caused even if there was no fault on the operators’ part.
Lastly, many have promoted the solution of mandatory insurance that would provide compensation to those who suffered harm as a result of the decisions or actions of AI systems, without necessarily having to determine who was at fault.
Looking Ahead
While these directions are all being discussed, some in the form of proposed laws, to date there is no clarity on how liability for damages caused by AI systems should be determined. Examples from several jurisdictions worldwide show that plaintiffs who suffered damages caused by AI systems invoke various causes of action, raising both product liability and negligence claims.
When choosing among the different tort solutions that could apply to AI-induced damages, legislators will need to strike an optimal balance between the desire to compensate those who suffered harm, the desire to encourage the development of safer systems, and the need to avoid discouraging the development and use of beneficial technologies. To do so, legislators may adopt approaches that focus on the potential risk posed by different AI systems (such as the approach of the European Commission), adopt measures that focus on auditing obligations for AI systems (such as those promoted in the United States), or tailor legal solutions to the specific sector in which an AI system operates or to its specific use. As more cases of AI-induced damage reach courts around the globe, we will see an increasing variety of concrete answers to the million-dollar question of who is liable for damages caused by AI.
Further Reading:
- Omri Rachum-Twaig, “Whose Robot Is It Anyway?: Liability for Artificial-Intelligence-Based Robots,” 2020 U. Ill. L. Rev. (2020)
- Ryan Abbott, “The Reasonable Computer: Disrupting the Paradigm of Tort Liability,” 86 Geo. Wash. L. Rev. (2018)
- Karni A. Chagal-Feferkorn, “How Can I Tell if My Algorithm Was Reasonable?,” 27 Mich. Tech. L. Rev. (2021)
- Mark A. Lemley & Brian Casey, “Remedies for Robots,” 86 U. Chi. L. Rev. (2019)
- Sophia Duffy & Jamie Hopkins, “Sit, Stay, Drive: The Future of Autonomous Car Liability,” 16 SMU Sci. & Tech. L. Rev. (2013)
- Henrique Sousa Antunes, “Civil Liability Applicable to Artificial Intelligence: A Preliminary Critique of the European Parliament Resolution of 2020”
- Anat Lior, “The AI Accident Network: Artificial Intelligence Liability Meets Network Theory,” 95 Tul. L. Rev. (2021)
- Lothar Determann & Bruce Perens, “Open Cars,” 32 Berkeley Tech. L.J. (2017)
The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of the Heinrich Böll Stiftung Tel Aviv and/or its partners.