Living in the Present, Anticipating the Future: Ascribing Liability for Artificial Intelligence

Aman Rehan, Hammad Ali Kalhoro

Abstract

For any legal system, determining how liability is to be ascribed to a particular person is a difficult task. However, a recently popularised conundrum in legal literature concerns legal liability for artificially intelligent computer systems. With the advent of COVID-19, the adoption of new technologies is accelerating, and the role of AI in our lives is only going to increase. What is often overlooked is that such technologies are usually premised on “deep learning” systems, creating uncertainty in decision-making, experience-based learning, and reactions to events. Considering the issue of ascribing liability for harms caused by AI, this paper scrutinises these shortcomings. It highlights how legal systems can do more to promulgate industry-wide standards relating to AI products. With the rapid development of AI technology and the increasing reliance placed on it by humans, a failure to promulgate and adopt such standards may have catastrophic consequences. 

Introduction

In 1842, Ada Lovelace, a mathematician now widely regarded as the first computer programmer, showcased a remarkable combination of abilities, including a proper appreciation of the scope, capability, and future of computing. She perceived human beings to have become too comfortable maintaining the traditional way of doing things – a “comfort” that, in her view, amounted to a stagnation of humanity’s development. Therefore, in a then seemingly inconsequential event, her penning of the first algorithm for a computing engine would forever alter the way we interact with each other and, more importantly, with machines. Since then, the constantly evolving world of technology has created significant legal challenges which can easily be mistaken for “anomalies”.

The world of Artificial Intelligence (“AI”) is but one stream of this transformative technology, expected to have an everlasting influence on robotics, transportation, manufacturing, cybersecurity, and even medicine. The benefits are clear; however, using AI to complete tasks also involves accepting a certain degree of risk of error. Given the sheer number of products and services that rely on AI, there will naturally be instances in which AI does not produce desirable results. While the majority of these failures will be benign, the law must adequately cover situations in which the failure of AI directly causes tangible harm to both people and property. This conundrum has led to increased advocacy for re-evaluating consumer liability laws around the globe.

Considering this perplexing legal challenge, this paper aims to explore the potential of product liability laws as an effective mechanism for addressing AI harms. The following legal questions will be explored in detail: firstly, are algorithms and products similar, and if so, what metric can be used to establish their similarity? Secondly, can certain algorithms be compartmentalised as “products” using the metric described? If so, what kind of liability regime will be applicable to them? Thirdly, do the existing legal instruments adequately protect AI consumers? Fourthly, what can be done to overcome the shortcomings of existing legal instruments? And finally, if liability can be ascribed to robots, should rights be granted to them as well? A systematic scrutiny of these questions will help uncover the extent of the work that needs to be done to protect consumers from the potential dangers of AI. 

Definition of Artificial Intelligence

Before grappling with these questions, it is prudent to delve into a brief exposition of both the history and definition of AI, because AI as we know it today is a product of historical developments rooted in religion, mythology, literature, and even pop culture. Robert M. Geraci highlights the ways in which technologists have derived inspiration regarding AI from stories found in scriptures and popular culture: “to understand robots, we must understand how the history of religion and the history of science have twined around each other, quite often working towards the same ends and quite often influencing another’s methods and objectives.” The history of AI is commonly traced back to Charles Babbage and Ada Lovelace, who are deemed to have not only predicted the advent of AI but also put together designs of machines geared towards carrying out “intelligent tasks”. However, AI is not a child of the modern era; the concept of intelligent beings created from inanimate objects can be traced back to ancient texts. Along with scriptures, AI has also been explored in literature and the arts, as well as pop culture.

While religion and popular culture alike have provided insight into the development of AI, the myriad representations and portrayals have led to misleading impressions in people’s minds. However, legislation or regulation based on such impressions is not acceptable in any developed legal system. This principle is also expounded by the legal theorist Lon L. Fuller, who defined eight formal requirements for a legal system to function in conjunction with a set of moral norms, allowing humans the opportunity not only to engage with the law but also to amend their actions accordingly. One of these requirements is that citizens under a legal system must know the standards applicable to them, implying that laws should be comprehensible. Therefore, without a proper definition, the application of a regulatory mechanism to something as omnipresent, rapidly changing, and fluid as AI is a Herculean task. The definition used in this paper is the one proposed by Jacob Turner in his book Robot Rules: Regulating Artificial Intelligence: “Artificial Intelligence is the ability of a non-natural entity to make choices by an evaluative process”.

Within this definition, it is implied that the ability to make choices confers a certain level of autonomy, albeit not absolute autonomy. An artificially intelligent entity will be able to make an autonomous choice even if there is human input at some stage. As this paper focuses specifically on algorithms, it will follow Jack Balkin’s classifications, which treat both robots and algorithms as being part of the “algorithmic society”. In an “algorithmic society”, societal organisation revolves around social and economic decision-making through algorithms. The algorithms not only make the decisions but, in some cases, also carry them out. In this sense, robots and AI merely become a “special case of the Algorithmic society”. Additionally, the “algorithms” referred to in this paper are those which are computerised. These algorithms can cause damage without any physical embodiment (other than computer hardware) or human intervention.

The limitations which come with functional definitions, however, apply to any legislative effort. Hence, while it is important to define AI to confer certainty on the law, it is also imperative to avoid precise boundaries that ossify the law. This is only logical given the rapid developments being made in this field. In this paper, algorithms will be placed within the larger ambit of machine learning and adaptation, which occurs whenever a machine can alter its data, structure, or program in a way that its future performance is expected to improve. The term “machine learning” was first defined by Arthur Samuel as computers being given the “ability to learn without being explicitly programmed.” This categorisation results from AI being capable of “independent development”, i.e., the ability to learn from data sets in a manner unforeseen by its designers.
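
To make Samuel’s distinction concrete, the following minimal sketch (an illustration only, using invented messages and the scikit-learn library rather than any system discussed in this paper) contrasts a filter whose behaviour is explicitly programmed with one that infers its behaviour from labelled examples.

```python
# Illustrative sketch: an explicitly programmed rule versus a learned model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def explicit_filter(message: str) -> bool:
    # Behaviour fixed in advance by the programmer.
    return "win money" in message.lower()

messages = ["win money now", "lunch at noon?", "claim your money", "meeting moved"]
labels = [1, 0, 1, 0]  # 1 = spam; labels supplied by humans

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(messages)
learned_filter = MultinomialNB().fit(X, labels)  # behaviour inferred from examples

print(explicit_filter("free money inside"))  # False: the fixed rule misses this variant
print(learned_filter.predict(vectoriser.transform(["free money inside"])))  # [1]: flagged as spam
```

The learned filter generalises to wording it has never seen, whereas the explicitly programmed rule does not.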

Relevance

Since this paper lies at the intersection of law and technology, it might be deemed too futuristic by some. Often, one is not even aware of the leaps being made in the field of technology. Indeed, it is common for companies to introduce new technologies through upgrades and software patches; while these changes may be unnoticeable at first, they are cumulatively quite significant. An example of this is the changing user interface of social media platforms such as Facebook and Instagram. The tendency to ignore incremental changes may lead to undesirable yet avoidable consequences. McKinsey and Co., an international management consultancy, has produced research estimating that the technological revolution is “happening ten times faster and at 300 times the scale, or roughly 3000 times the impact.” Verily, urgency in this case is not only justified by the magnitude of change but also consolidated by a sharp increase in the number of aggrieved people around the globe. 

The culmination of all the fears related to AI was the horrific death of Elaine Herzberg on 18 March 2018, which played a significant role in bringing AI technology to the forefront of both local and international media. Herzberg, a 49-year-old resident of Arizona, was declared dead after being struck by a Volvo SUV operating as part of Uber’s self-driving test programme. The vehicle was said to have been cruising at a speed of 80 kph at night in Tempe. The incident was directly attributed to the AI lacking “the capability to classify an object as a pedestrian unless that object was near a crosswalk,” as affirmed by the National Transportation Safety Board (NTSB). As a direct consequence of this shortcoming, the system could not correctly predict her path and concluded that it needed to brake just 1.3 seconds before it struck her as she wheeled her bicycle across the street a little before 10 p.m. For critics, the laissez-faire attitude adopted by the state of Arizona was particularly problematic. Many went as far as to question the rationale behind introducing such nascent technology to the state, specifically without giving much forethought to its potential dangers. A fate similar to Elaine’s was also suffered by Joshua Brown, a 40-year-old resident of Ohio, after he placed his newly purchased Tesla Model S in its self-driving “autopilot” mode. A malfunction of the AI at the heart of Tesla’s Autopilot resulted in its failure to distinguish the white side of an 18-wheel tractor-trailer against a brightly lit sky. Resultantly, the car attempted to drive at full speed under the trailer, resulting in the fatality. An example closer to home, within Pakistan, is that of the machine-learning algorithm guiding the U.S. drone programme. It is argued, in a report published by Ars Technica, that ‘SKYNET’ (the algorithm at the heart of the programme) may have wrongly targeted thousands of innocent civilians, leading to many unnecessary deaths. It was also found that the algorithm performed well strictly in terms of the outcomes it was trained for, with 0.008% of the targets being wrongly classified. However, if this data were viewed not as mere numbers, around 15,000 innocent people may have been wrongly marked as targets. All these cases highlight AI’s propensity to cause physical harm; however, such harms may not always be physical. 

For instance, in late 2013, IBM teamed up with the University of Texas MD Anderson Cancer Center in the hope of developing a new “Oncology Expert Advisor” system. The first line of their launch press release stated the following: “MD Anderson is using the IBM Watson cognitive computing system for its mission to eradicate cancer.” Five years after the press release, a review of internal IBM documents uncovered how the AI system was giving not only erroneous but quite dangerous cancer treatment advice. Ultimately, the entire venture failed to achieve IBM’s ambition while costing MD Anderson $62 million. Thankfully, the AI system had been trained on hypothetical patient data, resulting in only monetary loss rather than loss of life. 

Another product that illustrates the potential for non-physical harm through reliance on AI is the Apple iPhone X. A well-marketed feature of the phone was its “Face ID” technology, which allows its owner to unlock the phone by simply showing their face to the front camera. Apple described this mechanism as being 10 times more secure than the traditional fingerprint mechanism. One year after the release of the phone, hackers successfully used 3D-printed masks to bypass the system. A Vietnam-based security firm, Bkav, affirmed these claims and further claimed that, at a cost of a mere $200, people could access the personal data of anyone who relied on the Face ID technology. The work of Bkav provides a fascinating glimpse into the shortcomings of AI. More importantly, it shows that the rise of technology coincides with an increase in our reliance on algorithms to regulate our daily lives. The resultant risk to privacy and data security is a consequence that can be linked directly or indirectly to AI, as data-dependency is a fundamental characteristic of algorithms.

The legal implications become further pronounced when one delves into contracts involving AI. Members of the public enter contractual arrangements daily through a tap on their smartphones. Ideally, such an arrangement should involve both parties being fully aware of the obligations which bind them. In reality, mobile app users generally gloss over the “Terms and Conditions” or the “End User License Agreement” before clicking the “accept” box. Such quasi-hidden contracts are a feature of many of the free utilities which users enjoy, from mapping services to photo-editing applications. A significant manifestation of the use of data acquired through these quasi-hidden contracts occurred in 2016, when Cambridge Analytica, a data-analysis firm, used the psychological profiles of millions of American Facebook users for the Trump campaign in the US elections. It is clear, therefore, that more must be done to resolve the important legal questions raised when ascribing liability, especially when we fail algorithms or when algorithms fail us.

Are Algorithms and Products Similar?

In ancient Rome, there was debate on whether liability could be ascribed to a horse, which was characterised as a “semi-intelligent entity”. Although there was a view that the horse should pay for its actions, the more popular view was to extend liability to its human owner. US Judge Frank Easterbrook elaborated on this example while opposing the idea of a separate regime for cyber law, stating that doing so is as futile as asking for a “Law of the Horse”. Instead, he advocated for general rules to be studied in order to approach specialised areas of the law—otherwise, “the Law of the Horse is doomed to be shallow and to miss unifying principles.” Keeping this principle in mind, this paper will approach the idea of creating a product liability regime for algorithms by extrapolating from already established legal principles.

The positive benefits of AI are undoubtedly immense: it can eliminate human error by making decisions which are more consistent, efficient, objective, and reliable. However, as mentioned before, even AI is susceptible to mistakes; in the event of an AI error, aggrieved humans will seek compensation and turn to the liability regimes already in place. The questions of attribution which arise at this point include how fault in the algorithm should be established, who should be held liable in the event of an AI error, and what approach should be taken towards relief and remedy. To approach these questions, one must draw a comparison between algorithms and products, as the existing product liability framework needs to accommodate the advances being made in technology.

Firstly, it is pertinent to define a “product.” A product is simply defined as “something that is made to be sold, usually something that is produced by an industrial process.” This definition does not immediately distinguish a “product” from an “algorithm”, as an algorithm can also be made for sale; however, algorithms have a specific quality which distinguishes them from the typical washing machine or television: an inherent decision-making process. For instance, algorithms have the unique ability not only to perform complex actions and take intricate decisions, but to do so at a level which goes beyond mere computation—an example being the e-commerce industry and the predicted ubiquity of algorithmic agents which will eventually bypass most human decisions. 
Algorithms can also make decisions of a moral character, i.e., choices which would be considered moral or immoral if made by a human. Germany has the unique distinction of introducing a set of ethical guidelines which must be followed by autonomous vehicles. For example, the “Ethical Rules for Automated and Connected Vehicular Traffic” include the principle that the “protection of individuals takes precedence over all utilitarian considerations.” Another instance of this was when a medical algorithm was found to prefer white patients over black patients. The algorithm was aimed at predicting which patients would benefit most from extra caregiving. Even though the algorithm itself was not intended to be racist (the way it categorised data did not factor in a patient’s race), it prioritised patients in terms of how much a person would cost the healthcare system in the future. Costs incurred by black patients were around $1800 less than those incurred by white patients with the same chronic conditions. It should be noted that the cost incurred by an individual is not a race-neutral metric, as it depends on, among many other things, the person’s ability to afford healthcare and the healthcare facilities available to them. As a result, the algorithm scored white patients and black patients as having an equal risk of future health problems, even though the black patients had many more health problems. In instances such as this, one may conclude that the same laws which apply to human moral choices should also apply to algorithms carrying out tasks of a moral character. However, the decision-making of the algorithm was again based on the information being provided to it, so there was a degree of human input as well. This is where AI departs from the traditional confines of the product liability regime.
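
To illustrate how such proxy bias can arise without race ever entering the data, the following minimal sketch (a hypothetical simulation with invented numbers, not the actual algorithm studied) ranks patients for an extra-care programme by healthcare spending. Because one group’s limited access to care suppresses its spending, equally sick patients in that group are systematically overlooked.

```python
# Hypothetical simulation of proxy bias: spending is used as a stand-in for need.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical distributions of true health need.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)

# Assumption: group 1 faces barriers to care, so the same need produces less spending.
spending = need * np.where(group == 1, 0.7, 1.0) * 1800 + rng.normal(0, 300, size=n)

# The programme enrols the top 10% of patients ranked by spending (the proxy).
enrolled = spending >= np.quantile(spending, 0.90)

for g in (0, 1):
    m = group == g
    print(f"group {g}: mean need {need[m].mean():.2f}, "
          f"share enrolled {enrolled[m].mean():.1%}, "
          f"mean need of those enrolled {need[m & enrolled].mean():.2f}")
```

Under these assumptions, both groups are equally sick on average, yet far fewer group-1 patients cross the spending threshold, and those who do must be considerably sicker than their group-0 counterparts, mirroring the dynamic described above.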

Algorithms also differ from products in that they are capable of learning from datasets, even in manners not perceived by their manufacturers. While this point was amply underlined by the medical algorithm mentioned above, another example from daily life is Instagram. Being a social networking site, Instagram allows users to upload pictures and videos, using algorithms which learn user preferences, filter out spam, and carry out targeted advertising. It contains an in-built text analytics algorithm called DeepText which not only understands the context of language with human-like accuracy, but also helps in combatting cyberbullying and harassment. The ability to adapt and improve an AI system in manners not “predetermined by its designer” has implications when it comes to ascribing liability: harm caused by a product may be traced back to the manufacturer, but legal concepts may be challenged if the resultant algorithm does not operate in the way intended by the manufacturer. Foreseeability is one of these legal concepts. As demonstrated, there is a key difference between products and algorithms: the latter involve less human foreseeability in their use.

In order to determine a conclusive metric for differentiating algorithms from products, it is prudent to further categorise machine learning into “supervised”, “unsupervised”, and “reinforcement” learning. These categorisations may be used to determine the level of autonomy an algorithm has. While the terms “autonomous decision-maker” and “autonomous algorithm” are used to a great extent—and often interchangeably—they differ in meaning. On one hand, autonomy can refer to whether an algorithm has the required authorisation to perform a specific task without human input or permission. On the other hand, in a different context, autonomy could signify a characteristic of the algorithm itself, i.e., its ability to “teach” itself certain tasks or “understand” its actions and their implications. In essence, the level of autonomy depends on the type of algorithm, i.e., whether its learning is supervised, unsupervised, or reinforcement-based.

While there are several ways to categorise autonomy, this article will now delve into the algorithm’s ability to “self-learn” and carry out tasks not foreseen by its programmer or manufacturer. 

Autonomy and the Type of Algorithms

Within the context of this paper, a discussion of autonomy and algorithm types is important as autonomy remains one of the core differentiators between AI and a product. The autonomous nature of AI makes it impossible for a manufacturer to envisage all potential actions carried out by the AI. The three types of algorithms help us identify which AI products have a higher propensity to be autonomous in the future, and are in turn more distinct from products. 

In Supervised Learning, the algorithm is trained with labelled data, known as a “training set”, and is used to derive “good” predictors for a required value. In such algorithms, it is not sufficient to merely provide feedback that the system was erroneous; rather, specific messages which highlight the error are required for proper functioning. The feedback allows the system to hypothesise ways to categorise data which may be unlabelled in the future—hypotheses which are also updated based on the feedback the algorithm is provided. While there is some level of human input involved, which may allow one to ascribe liability easily, it should be noted that the hypotheses regarding the data, as well as the improvements made with each round of feedback, turn the algorithm into a version which was not programmed by its manufacturers.
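
As a concrete illustration, the minimal sketch below (using the scikit-learn library and a standard toy dataset, neither of which is discussed in this paper) trains a classifier on human-labelled examples; the labels are what drive the learning.

```python
# Illustrative sketch of supervised learning: the model learns from labelled examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)            # features and human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                  # learning is driven entirely by the labels

print("accuracy on unseen examples:", model.score(X_test, y_test))
```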

In Unsupervised Learning, the algorithm is not trained with labelled data but instead carries out the task of deciphering patterns in the information that may lead to the correct answer for a particular example. The degree of autonomy enjoyed by unsupervised systems is greater than that of supervised systems. The Chief Scientist of Uber, Zoubin Ghahramani, has described unsupervised learning as “finding patterns in the data above and beyond what would be considered pure unstructured noise.” However, both these systems involve development to a stage which was not pre-programmed at the time of manufacture. 
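
The sketch below (again an illustration using scikit-learn on synthetic data, not any system referenced here) shows the same idea: no labels are supplied, and the algorithm groups the data purely by the patterns it finds.

```python
# Illustrative sketch of unsupervised learning: structure is found without labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # unlabelled points
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("clusters assigned to the first five points:", clusters[:5])
```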

In Reinforcement Learning, the algorithm is not pre-programmed to take specific actions; it has to map out situations and actions through trial and error in order to yield the maximum reward. Essentially, it tries different options until it achieves a certain goal, because it is not taught the process for achieving that goal. Reinforcement Learning has been particularly successful in games such as Go, as shown by the program AlphaGo. The CEO of DeepMind has described this program as neither a human, nor a program, but “almost alien.” Along with games, recent research has shown the possibilities of reinforcement learning in the field of medicine as well. This also sets algorithms apart from products, as algorithms may reach a point whereby they can function without human input.
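
The minimal sketch below (a toy example with invented states and rewards, not drawn from any of the systems mentioned above) shows this trial-and-error dynamic: a Q-learning agent in a five-state corridor is never told how to reach the reward, yet learns a policy for doing so from the reward signal alone.

```python
# Illustrative sketch of reinforcement learning: tabular Q-learning on a tiny corridor.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2               # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))      # value estimates, learned from rewards only
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != n_states - 1:         # the rightmost state ends the episode
        # Explore occasionally; otherwise exploit what has been learned so far.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(Q[state].argmax())
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("prefers moving right in each non-terminal state:", Q[:-1, 1] > Q[:-1, 0])
```

After training, the learned values favour moving right in every non-terminal state, a policy the programmer never wrote down explicitly.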

Liability Regimes

Before ascribing a liability regime, it is pertinent to first delve into the different liability regimes which may be applicable to the law on AI and algorithms. Legal systems are mostly two-tiered, comprising civil law and criminal law. AI in general, and algorithms in particular, can create challenges in both these regimes. Civil law, also referred to as private law, essentially governs the legal relationship between private parties, and is used to create, remove, or alter rights. Civil law liability arising from tort or contract may not have effects which are as harsh as those arising from criminal liability. 

Criminal law, on the other hand, is mostly enforced by the state and can be invoked even if the accused have not agreed to be bound by it. To designate an act as a crime is society’s way of denouncing conduct in the harshest way possible. Ergo, the burden of proof required to prove someone guilty is higher in criminal law than in private law. 

Civil Law Liability Regimes

When it comes to private law, there are two principal sources which may govern algorithms: obligations arising through contract and obligations arising out of civil wrongs. Within civil wrongs, there are several categories which may provide a liability regime. These are negligence, strict and product liability, and vicarious liability.

The application of these regimes to AI is problematic for several reasons. Firstly, upon examining key legal questions relating to the tort of negligence, one may arrive at the conclusion that the duty of care will not always fall on the owner of the AI. Rather, it can extend to the designer of the AI or an intermediary party who may have taught, trained, or added to it. This complexity of tracing liability across the supply chain can result in inconsistent application of the law. Secondly, the central concern in negligence cases is whether the defendant acted in the same way an ordinary and reasonable person would act in a similar situation. A problem arises when this notion is applied to humans relying on an algorithm, or to algorithms themselves. One option could be to deduce what the user of the algorithm or the reasonable designer of the AI might have done if faced with the same circumstances. For instance, to avoid incidents such as the death of Elaine Herzberg, it may be reasonable to design a car in such a way that it enters a fully autonomous mode only on a relatively clear motorway, rather than on a crowded street. This solution, however, runs into problems in situations where there is no human input in any function of the AI, which raises the question of upon whom liability can be imposed. 

Similarly, applying strict liability may lead to certain drawbacks for the technology industry. For the victim, the advantages of strict liability are obvious: it does not require them to prove fault on the part of the defendant. This liability regime only expects the victim to prove that the risk posed by the technology materialised by causing them harm. It should be noted, however, that strict liability alone would result in an increased risk of liability for those in the technology industry or those who benefit from the technology. To counterbalance this effect, restrictions and liability caps may be used. Such caps are justified on the view that the risk becomes insurable, given that strict liability statutes usually prescribe insurance for liability risks. Naturally, such a regime is deemed to have a negative effect on the advancement of technology, as manufacturers and companies may see strict liability as a deterrent to technological research, which in the twenty-first century is an important economic and social goal for many countries across the globe. 

Criminal Law Liability Regime 

In addition to civil law liability regimes, instruments within the ambit of criminal law have also come to play an increasingly relevant role in the context of AI. The notion of exclusively utilising criminal liability for AI entities is challenging for many reasons. For example, in a situation where an AI entity is sentenced to one year of incarceration, how would the implementation of such a sentence manifest itself? This conundrum is exacerbated in cases where the AI software is not part of something physical (such as a robot or a machine), which essentially makes it impossible for an arrest to take place. Similarly, in more critical cases involving sentences of capital punishment, the lack of a physical body to arrest and incarcerate may make such liability impractical. These issues are not restricted to physical sentences, but also extend to monetary punishments, particularly fines. Most sentenced AI entities would lack the ability to manage their own finances, such as owning a bank account, making the notion of fining an AI entity unrealistic. These challenges greatly undermine the foundational aims of imposing criminal liability in the first place: retribution, deterrence, rehabilitation, and incapacitation. Therefore, imposing liability based on a criminal liability system may prove to be counterintuitive in terms of limiting the harms that may arise from the failures of AI.

Ascribing Liability

Considering these challenges and the discussion above, utilising a product liability regime, which in itself entails an intricate mix of both contract and tort law, seems most fitting for AI. Product liability deals with establishing liability in the event that a product causes harm. The party deemed responsible for the harm caused can be either the producer of the product or the intermediate suppliers. The defect in the product is given more importance than the fault of an individual. If product liability laws are applied to algorithms, harm caused by AI can be redressed by the affected party bringing a claim against the producer or any supplier at any stage of the supply chain.

There are certain advantages to the product liability regime. Firstly, a sense of certainty attaches to this regime in identifying the party to be held responsible; the aggrieved party will not have to seek out different parties in the supply chain and apportion their relative contributions in order to determine fault. Instead, upon locating the supplier or producer of the algorithm, the party can claim the entire amount from them. The burden of proof will lie on the relevant producer or supplier, who may deflect liability to other parties if necessary. In contrast to a fault-based liability regime, a strict liability regime would not entail the courts determining the level of duty of care owed in the process of manufacturing and selling AI, a difficult exercise given the heterogeneous nature of AI. Moreover, strict product liability also encourages developers of algorithms to ensure that the products containing them have control and safety mechanisms intact. An example of this was the announcement made by Volvo that it would assume complete liability for the actions of its autonomous vehicles. This placed pressure on its competitors to meet the same standards to ensure that self-driving cars become safe to use in everyday circumstances. Additionally, even if an algorithm develops and acts in unforeseeable ways, the producer or designer of the algorithm will be regarded as the person best equipped to control and understand the associated risks. An example of this is the prompt and sophisticated measures taken by Google in the wake of an accident caused by one of its self-driving cars. Google took cognizance of the causes of the accident, stated the ways in which the scenario was similar to normal interactions and expectations between human drivers, and also took responsibility by further improving its software. 

Considering these advantages, the product liability approach makes sense, as opposed to a purely strict liability rule, for a multitude of reasons. Firstly, within the broad ambit of both contract and tort law, there are various theories of liability that can be asserted. These include breach of warranty, misrepresentation, negligence, design defects, failure to warn, manufacturing defects, and more. As mentioned before, the majority of AI consists of decision-assistance tools, and it makes sense to turn to negligence law where the use of such a tool results in harm. Therefore, to ensure maximum coverage of a multitude of claims, it is more fitting to impose a product liability system. Secondly, applying product liability laws will force the courts to fall back on the reasonableness standard, which in turn should ensure greater access to justice while bringing down trial costs. The reasonableness standard is ideal as it involves a holistic mechanism of scrutiny when coming to a decision. In instances of product liability, the courts will be able to look at factors including, but not limited to, the actual harm caused, the circumstances surrounding the harm, and the decision-making processes adopted by both parties, which in turn should lead to fairer decisions. Thirdly, a relatively lenient reasonableness standard will not come at the cost of computing innovation or a reduction in the use of machines. This is important because innovation is a central aim for many countries going forward. For instance, the UAE in its vision for 2030 highlights innovation as an important aim for its foreseeable future. It has taken many steps, such as setting up special economic zones to promote startups, launching accelerators such as Ghadan 21, and offering subsidies, support, and funding to innovative companies. For countries like this, any legislative instrument governing machines cannot hamper innovation, otherwise they will be disincentivised from adopting it. Lastly, in terms of deterrence, a rule of no-fault liability might not be as effective as the reasonableness standard. For instance, within the parameters of a no-fault liability regime, “normal risks” of using technology and machines could be actively excluded from meriting compensation, so organisations would face little discouragement from adopting unsafe practices. In a regime built on the foundations of the reasonableness standard, such “normal risks” would not exist; instead, every judgment would be premised on the factors listed above (decision-making, harm, etc.), making it a better fit for ascribing liability to algorithms and their creators. 

Consequently, the utilisation of product liability laws will prove to be a viable solution to the question of ascribing liability posed at the outset of this paper. In this light, the compatibility conundrum must also be scrutinised. Thankfully, product liability has been one of the most dynamic fields of law since the mid-twentieth century. This is in part due to the new technologies that have emerged over this period, leading courts to tackle a continuing series of initially novel product liability questions. Courts have generally proven quite capable of addressing these questions. There are a number of strategies that can be used to make the transition to this liability regime easier. Primarily, inquiries into AI-based systems and their faults must be informed by the rationale that alleged harms are caused by the intelligent software, but that its decisions can be traced to choices made by companies, programmers, and users. If harm is caused, liability must be placed accordingly. The three classifications of algorithms discussed in the section above are pertinent to this discussion, i.e., Supervised Learning, Unsupervised Learning, and Reinforcement Learning. By utilising these three classifications, the level of autonomy of the algorithm can be determined, and the liability of the companies/manufacturers can subsequently be ascertained. For instance, the level of human input required in supervised systems is much greater than that required in unsupervised systems, whereas reinforcement systems can function with little to no human input. These differentiators are pertinent to the determination of liability. For the courts, case-by-case determinations of liability for a specific algorithm can be made by utilising the expert testimony of industry specialists.

Another approach to this transition is the development of risk-utility tests in relation to AI. These tests have actively been employed in AI liability lawsuits to ascertain whether alleged defects in design could have been avoided “through the use of an alternative solution that would not have impaired the utility of the product or unnecessarily increased its cost”. However, the mechanism of application will need to take into account not only the human-designed portions of an algorithm, but also the post-sale design decisions and substitutes available to a system that is able to update automatically. Additionally, as discussed, all three types of algorithms on the autonomy scale may reach a stage of development that was not anticipated by their manufacturers, which must also be considered. 

It must be recognised that it will take many years to develop a substantial body of case law and statutory law specific to the intersection of AI and product liability, and the judiciary will not be consistent in its decisions in every case. Over time, however, product liability legislation will need to accommodate the intricacies of AI, particularly in terms of emerging technologies. One way to streamline this process is through the utilisation of law reform agencies and voluntary frameworks. For example, the American Law Institute (ALI) is a respected organisation that produces “scholarly work to clarify, modernise, and otherwise improve the law”. If the ALI or a similar organisation were to develop and publish model principles of law and/or legislation specific to AI product liability, this could help promote greater certainty, predictability, and uniformity in state-level approaches to AI law.

Should Robots Have Rights?

So far, this paper has delved into ascribing liability to AI by developing a liability regime which builds on established legal principles. However, if it is conceded that there are different types of AI with varying degrees of autonomy, then should the varying degrees of liability associated with a robot’s decision-making be accompanied by rights as well? This question, which may seem bizarre at first, has been raised on many occasions, as rights and liabilities are often conceptualised as co-existing concepts. In 2015, Victor Collins was found dead in the hot tub of James Bates. Bates was charged with murder, and his Amazon Echo, a home speaker device incorporating an AI virtual assistant, was the “key witness” to the alleged crime. While the Arkansas police asked for the disclosure of data from the period relevant to the murder, it was in 2017 that Amazon argued that the human voice commands and the device’s responses were capable of protection under the US First Amendment. While the argument did not prevail, it raised important questions as to whether AI has a right to protection of its speech. Another example is that of a robot called “Random Darknet Shopper”, which purchased ecstasy and a fake Hungarian passport on the dark web. This robot was part of an art installation in Switzerland. Notably, it was the robot, not the artist or another human, that was seized by the St. Gallen police over the unlawful transactions. While the Swiss authorities took cognizance of the artistic value of the robot, the occurrence opened up a debate on the measures to be taken if a robot does cause harm, and whether such liability should also be accompanied by rights accruing to robots.

While ascribing liability is a key component of protecting consumers from AI harm, the standalone imposition of liability under an effective regime may raise questions about a state’s moral duty towards new technology and AI. All in all, it might lead one to ponder whether robots can and should have rights.  These questions stem from the debate in the European Union Parliament in 2017, where concrete recommendations were made to the Commission on Civil Law Rules on Robotics. Section 59(f) laid out the notion of corporate personhood as a model of robot rights:  

Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently. 

On the surface, this idea may seem inherently problematic for the establishment of a liability regime, as it gives manufacturers a way to escape responsibility for defects that can be directly attributed to them. However, the notion of various entities being characterised within the ambit of “legal personhood” is not as recent as one might assume. For example, the seminal case of Santa Clara County v Southern Pacific Railroad Co. extended the ambit of the Fourteenth Amendment to the US Constitution to corporations and established the basis for conferring personhood on such entities as well. Indeed, corporations are among the most common and oldest examples of non-human entities that have been granted legal personhood. It should be noted, however, that a corporation is an abstraction which “has no mind of its own any more than it has a body of its own.” While it can be said that corporations can perform actions independent of their directors, owners, and employees, in reality it is humans who take decisions on the company’s behalf. 

Otto von Gierke, a legal scholar of the nineteenth century, argued that companies are real “group-persons” and cannot be categorised as mere fictions. This argument can account for the decision-making processes of companies which, barring sole proprietorships, do not reflect the opinions of a single person but rather the collective will of the company, expressed through procedures such as board meetings. Considering the human input involved in companies, it is difficult to make a case for AI personhood based on the same logic.

Considering the discussion above, it is still unclear whether robots should be granted rights. Rights may vary depending upon the liability regime that is established. However, where rights are granted, they may be contingent on the realisation of a future in which robots exhibit further functional similarities to humans, meriting a change in legal standards. As of now, violence against machines is not seen as a criminal wrongdoing. Legal systems throughout the globe offer no rights to robots despite them becoming more advanced and being developed with higher levels of AI. In an attempt to remedy this, some have suggested that the right of a robot not to be shut down against its will, and the right not to have its source code manipulated against its will, should form part of a set of rights for robots in the future. It is futile to offer such summations of the potential rights a robot could be given, especially when the technology in question has not yet evolved to its fullest potential. This waiting period is the first obstacle towards protection, especially if such rights are to be universal. 

Similarly, another issue with granting rights to robots is articulating them in the first place. While certain machines have the propensity to “think” rationally in the twenty-first century, the notion of rationality for a machine differs vastly from that of a human. Machines are fed statistics, situations, and moral principles from which they distinguish between “right” and “wrong”. Even though these conceptions of rationality are slowly merging due to the advent and popularisation of deep learning, this interdependence means that machines still have a long way to go before they can be independently rational and therefore require legal protection in the form of rights. 

Lastly, the parallel between animals and machines, especially in the context of rights, poses a relevant and interesting obstacle. One might argue that machines do not deserve rights protection over animals. Indeed, the discourse on animal rights has only recently gained momentum. From a utilitarian perspective, it is therefore difficult to justify providing a set of rights to AI entities and algorithms in the short term. It may, however, not be desirable in the long term to keep AI entities devoid of rights; thus, some work must eventually be done to articulate a specific set of rights for them. 

Therefore, it remains reasonable to state that robot rights are neither a moral absurdity nor a legal urgency. It must be noted, however, that no matter how similar the treatment of robots may be to that of humans nowadays, we are many years away from robots being capable of actions that force us to confront issues as to their rights. Verily, as of now, Section 56’s approach to AI rights might seem plausible, namely establishing laws of accountability and damage mitigation structures (like insurance) that reflect the differences between traditional machines and autonomous, adaptive, “intelligent” robots and the algorithms that power them. However, we must ensure that this approach is complemented by legal instruments that outline ownership of any intellectual property such machines might create in their normal functioning, explicitly distinct from the underlying algorithms controlling them. In a few years, more heed can be given to future protections as the technology behind AI progresses to the extent that it is seamlessly integrated into every aspect of human life and is thus subject to extensive liability. Currently, it remains more morally pertinent to focus on the protection of historically exploited non-human groups, such as animals and plants.

Conclusion

This paper has used several distinguishing factors to conclude that algorithms certainly differ from products. Among the most prominent differences are AI’s ability to make decisions of a moral character, as well as its ability to learn from a data set in a manner which could not be anticipated by its manufacturers. However, the fact that certain factors differentiate products from algorithms does not mean that AI should have a different legal regime altogether. Rather, the existing legal framework of product liability law, which contains a mix of both tort and contract law, would be most feasible in addressing the legal questions posed by AI. The compatibility conundrum between existing product liability laws and AI can hence be resolved when the “autonomy classifications” proposed in this paper are employed to determine the extent to which liability can be traced to the manufacturer/company in case a harm occurs. Lastly, this paper argues that a system recognising the rights of robots is not conceivable in the near future, as humankind has a long way to go before robots make completely autonomous decisions with no human input. 

AI can make decisions without human input and is characterised by a great degree of autonomy. More specifically, AI is different from products because the manufacturer may not have envisioned a potential action carried out by the AI. This happens due to machine learning and the potential for AI to morph into something completely different from what it was at conception. The paper demonstrated this by highlighting three kinds of algorithms: supervised, unsupervised, and reinforcement-based.

A product liability regime needs to be enforced; however, it should be adapted to the novel nature of AI. Two reasons were highlighted for this: the first is the pace of technological leaps being taken in this field and the growing influence of AI in our lives. Indeed, we saw an upsurge of digital solutions during 2020 itself due to the advent of COVID-19, and our interaction with AI has consequently increased manifold. The second reason is that if the product liability framework does not advance and a holistic framework is not developed, there will be haphazard regulation and conflicting legislation. In this regard, the best practices of the EU may be instructive. Naturally, a multilateral framework will be required to address such an all-encompassing technological phenomenon, which knows no bounds.

This paper has grappled with the question of imposing liability when an algorithm causes harm and has attempted to propose a system of ascribing liability through an expansion of the existing product liability framework, rather than introducing a different area of law altogether. Additionally, this paper delved into the possibility of granting robots rights akin to those of human beings, concluding that this may not be a legal necessity facing us today. The concept of AI, mentioned in scriptures thousands of years ago, may not be as visibly frightening as the creature in Frankenstein, nor as threatening as the Terminator. However, it is capable of racial discrimination, breach of privacy, and fatal accidents. The liability framework, hence, needs to account for the potential undesirable actions of AI because, at this juncture of history, it is a concept that is continuously advancing and evolving.