“Centralized systems make more and more legal decisions. Often this happens because machines, like computers, apply law. For instance, computer programs prepare tax returns, certify compliance with environmental regulation, keep wage and hour records, and take down material allegedly subject to copyright protection. Should automated law systems be directly liable for the errors they make?”[i]
It was Donoghue v Stevenson[iv] that laid the foundation of product liability law around the world. Beyond food and beverages, the issue of product liability arises in every circumstance where we use machines to reduce human effort. The question, then, is whether the approach to human liability, civil or criminal, can be applied to artificially intelligent machines. Everyday AI may already be causing damage that goes unnoticed, because no one has yet held an artificially intelligent machine to account for its mistakes. AI is also prominent in law firms, where it is increasingly used to proofread and prepare documents, and some legal systems have begun relying on AI for passing judgment on petty crimes. Essentially, AI acts on behalf of humans, works on human data (for now), and knows only as much as the subset of information it is given. It is beyond question that an AI will at some point make a mistake; it is a human invention replicating human work, so how could it not? But does the principle of accountability hold for machines that think, albeit artificially, and whose intention, if it could be drawn out, would be bona fide in every case?
In light of the aforesaid: whom do we hold responsible for the damages, pecuniary or punitive, caused by the actions of Artificial Intelligence?
While product liability would be an easy way to fill this gap, in some cases machines are being treated as humans[v] while in others responsibility is attributed to their manufacturers or software developers. If people are made accountable on behalf of a machine, whether by the age-old principle of vicarious liability or otherwise, it will dissuade companies from innovating in AI and stall progress. What is needed is a system that holds developers and manufacturers liable for gross negligence on their part where a cause of action arises, while also giving developers leeway where the rational machine is found to have made a bona fide mistake.
Various models have been suggested on which to base the liability of machines without placing an undue burden on developers and innovators, chief among them standards of reasonableness.
Since the data that AI uses is, for the most part, collected peer to peer, gatekeeping is needed to demarcate what may and may not be used. The foremost authority on data protection law is the EU's General Data Protection Regulation (GDPR), in force since 2018. Data protection law offers individuals some protection over how automated decision-making uses their personal data. It also gives individuals transparency as to how a decision based solely on automated processing was made about them.
The logic is sound: if there is a record in black and white of how the machine reached its conclusion, the developers cannot reasonably be held liable, for the reasoning was not induced by them.
Some AI systems can be subjected to liability of a contractual nature. How, you ask, does one form a contract with a machine? Consider this: if your self-driving car malfunctions and causes an accident, the contract will enable you to claim some form of compensation, subject of course to its terms and conditions.
In some cases, the general principle of negligence will apply to the use of AI. Negligence requires a duty of care owed in respect of the service availed. But since liability cannot be imposed on a machine, which has no legal personality, negligence must be attributed to the person directly responsible for the decision the AI made. A manufacturing error would make the manufacturer liable, while an error in decision-making would stem either from an algorithmic discrepancy or from unreliable data. Liability could therefore rest with the developer, the manufacturer, the user, or the service-providing company.
The most suitable answer with respect to autonomous vehicles is insurance law. Insurance would pre-emptively cover such products; however, it would not answer for products that are not normally insured.
Legal Personality: Vicarious and Criminal Liability
It is still disputed to what extent an AI should be given personhood, for as we know from the Hohfeldian analysis of rights and duties, a claim about one must always be matched by a claim about the other. If given legal personality, an AI could be held liable for its actions much like a company or an individual. On the other hand, vicarious liability would open avenues of liability for the employer (the company manufacturing the product). Should AI take the form of a legal person, it would further raise the question of how criminal liability can be imposed on an unemotional machine that acted bona fide and hence without mens rea. While a company is ultimately a conglomerate of people, a machine is a conglomerate of algorithms, user data, and a shell. A company held liable can pass that liability on to its shareholders or directors, but the same cannot be done to a legally recognised AI. This goes back to the very theories of punishment and the question of why we punish.
Strict liability can be seen as an extension of the negligence principle, in which there is no need to prove negligence or intent. It may be appropriate to put a similar legal framework, strictly interpreted, in place for some forms of AI.[vi]
Whatever the case may be, none of this will come to pass unless the bills that we pass incorporate some of this prospective thinking into their provisions.
[i] Susan C. Morse, Automated Law Liability, 110 Proceedings of the Annual Conference on Taxation and Minutes of the Annual Meeting of the National Tax Association 1, 1 (2017).
[ii] Bernard Marr, The 10 Best Examples Of How AI Is Already Used In Our Everyday Life, Forbes (Dec. 16, 2019, 12:13 AM EST), https://www.forbes.com/sites/bernardmarr/2019/12/16/the-10-best-examples-of-how-ai-is-already-used-in-our-everyday-life/#cb32c391171f.
[iii] Andreas Kaplan & Michael Haenlein, Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence, ScienceDirect (Jan.–Feb. 2019), https://www.sciencedirect.com/science/article/pii/S0007681318301393?via%3Dihub.
[iv] Donoghue v Stevenson [1932] AC 562.
[v] Carlos E. Perez, Artificial Personhood is the Root Cause Why A.I. is Dangerous to Society, Medium (Mar. 22, 2018), https://medium.com/intuitionmachine/the-dangers-of-artificial-intelligence-is-unavoidable-due-to-flaws-of-human-civilization-f9c131e65e5e.
[vi] Emily Barwell, Legal Liability Options for Artificial Intelligence, Lexology (Oct. 16, 2018), https://www.lexology.com/library/detail.aspx?g=6c014d78-7f4c-4595-a977-ddecaa3a12e4.
Author: Rebecca Mishra