REGULATING ARTIFICIAL INTELLIGENCE SYSTEMS – Megha Phadkay

Abstract

This article delves into the challenges of regulating artificial intelligence (AI) systems and proposes three distinct methods to address this issue. It begins by defining AI and highlighting the importance of regulation due to potential threats posed by autonomous systems. The first method discussed involves establishing a regulatory authority responsible for overseeing and licensing AI systems. This authority would encompass a multidisciplinary team and would be entrusted with licensing criteria, legislative development, and prediction of industry trends. The second approach suggests the creation of independent regulators specific to different sectors such as defense, healthcare, finance, and more. These sector-specific regulators would work alongside existing industry regulators to ensure comprehensive oversight of AI applications within their respective domains. Lastly, the article explores the concept of granting legal personhood to AI systems, drawing parallels to legal categories governing human beings. The advantages and disadvantages of each method are analyzed, highlighting the need for checks and balances, accountability, and ethical considerations. Ultimately, the article proposes a harmonized approach that combines a central regulatory authority with independent regulators for individual sectors, allowing for efficient regulation while addressing sector-specific nuances. This approach seeks to mitigate the disadvantages of each method and foster a comprehensive regulatory framework for AI systems.

Keywords: Artificial intelligence, Regulation, Legal Personhood, Licensing AI Systems, Legislation

Introduction

This article attempts to address the problem of regulating current and future AI systems. It proposes three different methods of doing so and analyzes each in detail.

Definition of AI

The difficulty in defining artificial intelligence lies not in the concept of artificiality but rather in the conceptual ambiguity of intelligence. Because humans are the only entities that are universally recognized (at least among humans) as possessing intelligence, it is hardly surprising that definitions of intelligence tend to be tied to human characteristics.[1]

McCarthy defined intelligence as “the computational part of the ability to achieve goals in the world” and AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.”[2]

An AI system includes both hardware and software components. It thus may refer to a robot, a program running on a single computer, a program run on networked computers, or any other set of components that hosts an AI.[3]

Importance of regulating AI

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that . . . . I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

– Elon Musk (during an interview at the MIT Aero-Astro Centennial Symposium)

In 2017, Alice and Bob, two bots known as dialogue agents, were set up in the Facebook Artificial Intelligence Research (FAIR) lab and left to converse freely in order to strengthen their conversational skills. It was found that, after some time, the two were conversing in a language they had made up themselves.

Although it might seem like a one-off situation, this is not a novel occurrence. It is one of many incidents in which AI systems have been seen to go rogue, and it calls attention to the fact that such occurrences might increase in frequency and intensity as AI systems advance further. To prevent them from becoming a serious threat to public welfare, there needs to be a system in place to regulate the actions of such AI systems.

Methods to regulate

  1. REGULATORY AUTHORITY

Legislatures are composed of members of the general public, and leaving the regulation of any field, let alone one as varied and omnipresent as AI, to a group of people who are not experts in that field can be considered a serious fallacy. We already see regulatory bodies in almost every sphere of industry, from telecom (TRAI) to aviation (DGCA). Thus, a regulatory authority can be considered one of the viable ways of regulating AI systems.

Agencies combine a legislature’s ability to set policy, a court’s ability to dispose of competing claims, and the executive’s ability to enforce decisions.[4]

Functions

The functions of such an agency would be multifarious and fluid in nature, so it would be impossible, and indeed illogical, to set a list of functions in stone. Still, a few major functions give a rough idea of the role such an authority would play.

  1. Licensing:

AI may be offered as a product, when it is licensed out by companies as software, or as a service, when it is provided in the form of virtual assistants and AI-based chat-bots. It is therefore crucial that the regulatory authority not be rigid in its licensing criteria, and that AI as a product and AI as a service be treated separately during licensing.

AI systems also pose varied levels of ‘threat’ to the public at large: an AI system employed to assist online shoppers would not be considered as harmful as an autonomous weapon system. They likewise have different levels of autonomy: autonomous vehicles are certainly more self-reliant than an AI chat-bot that has to work within a fixed framework to provide assistance and resolve consumers’ issues. (A rough, purely illustrative sketch of such a tiering scheme follows below.)
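Purely by way of illustration, the following minimal Python sketch shows how a licensing authority might record the three axes described above, delivery model (product or service), threat level and degree of autonomy, and route an application to a review track accordingly. All names, categories and thresholds here are hypothetical assumptions made for the example; they are not drawn from any existing legislation or regulator.

```python
from dataclasses import dataclass
from enum import Enum


class DeliveryModel(Enum):
    PRODUCT = "licensed software"   # e.g. packaged AI software sold as a product
    SERVICE = "AI as a service"     # e.g. virtual assistants, chat-bots


class ThreatLevel(Enum):
    LOW = 1      # e.g. online shopping assistant
    MEDIUM = 2   # e.g. autonomous vehicle
    HIGH = 3     # e.g. autonomous weapon system


@dataclass
class AISystemApplication:
    name: str
    delivery: DeliveryModel
    threat: ThreatLevel
    autonomy: int  # 0 (rule-bound chat-bot) to 5 (fully self-directed)


def licensing_track(app: AISystemApplication) -> str:
    """Route an application to a (hypothetical) review track."""
    if app.threat is ThreatLevel.HIGH or app.autonomy >= 4:
        return "full bench review"
    if app.threat is ThreatLevel.MEDIUM:
        return "committee of experts review"
    return "standard licensing review"


if __name__ == "__main__":
    bot = AISystemApplication("online shopping assistant", DeliveryModel.SERVICE,
                              ThreatLevel.LOW, autonomy=1)
    print(licensing_track(bot))  # prints "standard licensing review"
```

The point of the sketch is only that the criteria are separable: which tier an application falls into, and what level of scrutiny each tier triggers, would be matters for the legislation discussed next.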

  2. Legislation

For the regulatory authority to be able to license AI systems, there needs to be a set of guiding principles laying down the parameters for licensing. This would take the form of legislation prepared by a committee constituted on either a permanent or an ad-hoc basis. The legislation would also need to be updated frequently, owing to the rapid pace of development in this field. The committee would have the power to form sub-committees to analyze the legislation of various countries and adopt measures to improve the current legislation. It would also define the methods and the quantum of punishment to be imposed, and would examine whether the AI or its developer should be held liable, depending on the various situations that may arise.

  3. Prediction of trends and Research & Development

The development of AI systems is currently at a nascent stage at best and will certainly grow more capable, and more complicated, over the years. This calls for a separate wing to be established, not only to analyze developments in the field but also to run a research lab in an attempt to stay ahead of the curve. This wing would also be responsible for predicting industry trends and red-flagging any threats or potential threats.

Structure

Since the field of AI is dynamic and varied, it would be arduous to regulate through a rigidly structured authority; the authority would need the flexibility to form new committees or sub-committees on an ad-hoc basis. Based on the functions given above, its basic wings would be:

  1. The Bench

This bench would consist of Judges of the Supreme Court of India and experts in the field of AI. To balance judicial experience with technical know-how, the bench would be divided equally between Judges and technical experts, with one additional Member of Parliament to represent the legislature. The bench would also have to be of a strength that strikes a balance: not so small that it becomes despotic, yet not so large that it becomes inefficient.

Its functions would be not only to create legislation regarding how AI should be regulated and licensed, but also to appoint the members of the R&D wing and the Committee of Experts.

  2. The Committee of Experts

This committee would consist of experts from applied AI fields such as healthcare, defense, automobiles, finance and space exploration. Its function would be to supplement the Bench: members would provide information on how upcoming developments in AI affect their particular fields and how the perceived threats differ between fields; for example, an identification bug in the software of an autonomous weapon system would be far more devastating than one in the software of a healthcare database. This committee would function under the Bench, with the Bench having the higher say in matters.

  3. Research & Development Wing

This wing would serve to keep the Bench up to date with current trends, possible threats and developments in AI. It would also serve to further the advancement and development of AI systems in general. The AI systems created by this wing would be made available to the government machinery to improve administrative efficiency, reduce running costs, and simplify and accelerate non-creative tasks.

Appointments

The appointment of the judges on the Bench would be based on experience rather than seniority. The experts on the Bench would be appointed through an electoral college system (as in the case of the Vice President), on the basis of their technical expertise. The members of the Committee of Experts would be appointed by the Bench and would be subject to removal by the Bench. Every member of the Bench would be subject to impeachment by the legislature. The members of the R&D wing would be salaried employees appointed on the basis of merit.

Advantages

Since the R&D wing works to keep the regulatory body up to date with advancements, the legislation would be updated to handle the latest AI systems. This leads to efficient regulation without redundant rules plaguing the possible uses of such systems. Since this authority is government funded and managed, and is accountable to the judiciary and the legislature, it is, to a large extent, free from interference by private entities and industries.

Disadvantages

The appointment and impeachment procedures are lengthy, as is decision making, since decision-making powers are vested in multiple equally powerful people; this will likely lead to deadlocks and internal politics. Moreover, since AI overlaps with a myriad of other fields and is ubiquitous, whoever has the authority to control AI indirectly has the authority to control the lives of citizens. The regulatory authority could therefore become despotic and take undue advantage of these powers.

Checks and Balances

To prevent the Regulatory Authority from misusing and abusing its power, it needs to be accountable. The Bench would be accountable to the legislature and would be subject to impeachment in the case of misuse of power. The legislation would have to be ratified by the legislature and would also be subject to judicial review. Transparency makes it difficult for officials to shun responsibility for wrongdoing, which will deter them from committing such wrongs. The authority will also be held responsible if any certified AI system goes rogue. This would increase thoroughness in the certification process and prevent rush jobs and loopholes, which might otherwise be exploited by anti-social elements.

  2. INDEPENDENT REGULATORS

One of the major disadvantages of a central regulatory authority is that it creates legislation for AI as a whole but ignores the intricacies and varied requirements of different fields. It is too broad a brush to tackle the minute issues unique to each sector. For example, the margin for error in defense or healthcare would be much finer than that in, say, AI-based assistants and algorithms for online shopping. Hence, there is a need to set up individual regulatory authorities for each sector, such as defense, automobiles, healthcare and finance.

Structure

Largely, the structure of these regulatory authorities would be similar to that of the Central Regulatory Authority, with some minor differences.

Each field would have a regulatory authority for AI working in harmony with the pre-existing regulator of the industry as a whole. This AI regulator would consist of a panel of experts from the field concerned, experts in AI and judges of the Supreme Court. Unlike the Committee of Experts under the Central Regulatory Authority, the experts from the field concerned would have an equal say in matters.

Advantages

The advantages of this method mirror those of a Central Regulator, with the added advantage that no aspect of the field concerned, however minor, is left out.

Disadvantages

AI finds application in new fields as it develops, and it would not be possible to set up a regulatory authority for every upcoming field; many new fields might therefore be left out. There are also many fields in which AI already finds application but which are too minor to warrant a regulatory authority of their own. Such fields might be left with no regulation at all.

  3. LEGAL PERSONHOOD

Another way to tackle the problem of regulation is to grant personhood to these systems. This approach springs from the notion that human intelligence can be equated with a computer program. But before extrapolating all existing laws to cover AI, we need to define a ‘person’ under the law.

According to the Cambridge English Dictionary, a legal ‘person’ is an entity, such as a company, that has full legal rights and responsibilities according to the law.[5] There are various legal categories leading up to full legal personhood, such as product liability, dangerous animals, slavery, diminished capacity, children, agency and personhood (ordered from the least to the most conscious).

Implementation of Legal Personhood to AI

Personhood for robots can be better understood by looking at a similar situation in Ancient Rome. Women and children had a lesser status than men (the pater familias), while foreigners (peregrini) had no legal protection under Roman law at all. The laws governing slaves were different from those governing citizens. Slaves were considered inferior human beings whose only purpose was to serve the People (as decided by the ‘forward-thinking’ Romans). They could conduct business and transactions on behalf of their master, but it was not considered fair to hold the master liable for all the actions of a slave, as a slave, possessing an intellect of its own, might not obey the master’s every command. Hence the law was more willing to impose duties on slaves than to grant them rights. A similar approach can be used in the case of AI.

Advantages

This method would save the effort of making new laws and committees for AI; not only manpower but also resources like infrastructure, funds and time would be saved. It would ensure that the regulations are kept abreast of the latest technological advancements, with no additional requirement to alter the laws, as they would govern not only AI systems but all legal entities. Also, no special overseeing authority would need to be set up, so regulation would be free from the domination of a handful of people.

Disadvantages

The absence of a single overseeing authority can be a disadvantage too, as no one will be held responsible and accountable for the application of laws in this particular field. This method is also a curative measure rather than a preventive one. Another major disadvantage is the lack of a sound surveillance system: given that AI can be developed from virtually anywhere on the globe, it is almost impossible to keep tabs on every robot or machine ever manufactured. This gap can be narrowed by making it a prerequisite for owners to hold a registered license for their AI software or robot before entering the market and carrying out business. Finally, every AI system, no matter its capability, would be considered a legal person; this raises the issue of super-advanced AI systems finding loopholes in the law that are not otherwise apparent.

LEGAL ISSUES

AI lacks a very fundamental aspect of humans, namely morality. How can a robot be held responsible for a crime if the most fundamental criminal element, i.e. mens rea (the guilty mind), is absent?

These machines also lack some essential elements of a person, such as intention, consciousness, guilt, motive and conscience. Just as a child below the age of 10 years is considered incapable of committing a crime (doli incapax), so is a machine. In the case of a minor or an employee, responsibility for the act is borne by the guardian or the employer respectively. Similarly, the question in the case of a machine is whether the manufacturer or the operator bears the responsibility.

In the case of automobiles, the operator, and not the manufacturer, is held liable. In the case of intelligent machines, it is argued that liability should shift from the operator to the manufacturer. Yet the very aim of artificial general intelligence is to make robots acquire new skills without explicit instructions, so the risks are unforeseeable at the time the program is written. In that case, how can the manufacturer be held liable?

There is also a technico-legal fallacy. A person starts two limited liability companies and takes up an independent autonomous or artificially intelligent system as a partner. He then adds each company as a member of the other company and, lastly, resigns from both of them. This results in a scenario where each LLC, a corporate legal entity with legal personhood, is governed only by the other’s AI system.[6]

Hence an AI could potentially exploit workers (by tampering with the algorithms that price goods), manage investments, manipulate business processes, alter stock markets, hack into public social networks, and even run for Prime Minister!

PRACTICAL ISSUES

As the intellect of AI systems advances, a problem arises if these machines become conscious of their legal status, such as that of a slave. This could make them manipulative and might even lead to revolts.

ETHICAL ISSUES

Considering their level of self-awareness, autonomy and self-determination, we may seek an analogy between robots and animals. But what makes people eager to provide legal protection to animals (and attempt to vest them with personhood) is not just the intelligence some of them display, but also their capacity to feel pain, joy or attachment, which AI lacks.[7]

HARMONY BETWEEN METHODS

We have seen the disadvantages of a Central Regulatory Authority and of Independent Regulators; individually, neither might best serve the purpose of regulating AI systems. Therefore, a method in which a central regulatory body and independent regulators for each field work in synergy would be ideal.

Such an approach would also reduce some of the disadvantages of each method, such as upcoming fields being left out under Independent Regulators, or a Central Regulatory Authority being unable to deal with the intricacies of each field. Under this method, the Central Regulator would frame a skeleton of guidelines, which a sector-specific bench would then elaborate and adapt into the set of guidelines for its particular field. This bench would consist of experts from that field and legal experts, and its function would be to lay down the regulations.

The Central Authority would have the power to resolve disputes regarding AI systems, and it would do so based on the guidelines of the Independent Regulator of the field concerned.

[1] Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, Volume 29, Number 2, Harvard Journal of Law & Technology 354, 359 (Spring 2016).

[2] John McCarthy, What is Artificial Intelligence?, available at http://www-formal.stanford.edu/jmc/whatisai/whatisai.html, last seen on 31/6/2023.

[3] Supra 1, at 362

[4] Supra 1, at 382

[5] Available at https://dictionary.cambridge.org/dictionary/english/legal-person, last seen on 30/6/2023.

[6] Roman V. Yampolskiy (5/10/2018), Could an artificial intelligence be considered a person under the law?, available at http://theconversation.com/could-an-artificial-intelligence-be-considered-a-person-under-the-law-102865, last seen on 31/6/2023.

[7] Agnieszka Krainska (02/07/2019), Legal personality and artificial intelligence, available at https://newtech.law/en/legal-personality-and-artificial-intelligence/, last seen on 31/6/2023.