
PERSONAL BIAS AND ARTIFICIAL INTELLIGENCE APPLICATION IN ADMINISTRATIVE DECISION-MAKING – Shivanshi Singh

ABSTRACT

Ethical difficulties surrounding natural justice in administrative law arise whenever decision-makers are biased, fail to fulfil procedural fairness requirements, have conflicts of interest, are not accountable for their choices, or discriminate against particular individuals or groups. Decision-making powered by artificial intelligence can suffer from procedural impropriety when the personal bias of those who design and code the system is carried into its outputs. As technology advances, these growing concerns can call the validity and impartiality of the administrative law system into question and diminish public trust in the institutions of government.

KEYWORDS: Artificial Intelligence, Personal Bias, Decision Making, Ethical AI

INTRODUCTION

“Technology is relevant in so far as it fosters efficiency, transparency, and objectivity in public governance. AI is present to provide a facilitative tool to judges in order to recheck or evaluate the work, the process, and the judgments.”

– Chief Justice of India D.Y. Chandrachud

The practice of administrative law is based on a fundamental principle known as natural justice, which ensures that decisions are made in a fair and impartial manner. It requires those making decisions to act in a just and unbiased manner, providing an opportunity for each party to argue its position before a conclusion is reached. Natural justice must be respected at all times; when it is violated, misunderstood, or misapplied, ethical problems may result. In administrative law, the following are some ethical concerns pertaining to natural justice:

Decision-makers should be impartial and free from any influence of bias. The validity of the decision-making process can be called into question if there is any bias, real or perceived, and the public’s trust in the system can suffer as a result. Ethical difficulties arise when decision-makers have personal interests or attachments that could impair their ability to decide impartially.

Natural justice necessitates that the decision-making process be both just and open to public inspection, which is known as procedural fairness. This includes giving both parties advance notice of the hearing or decision, the opportunity to have their side of the story heard, access to any relevant material, and a decision that is supported by an explanation. When these procedural prerequisites are not met, or when the decision-making process is opaque or arbitrary, ethical problems occur. As artificial intelligence becomes the go-to tool for users, its application in governmental decision-making processes can be an effective method of analysis and assessment, but it can also heighten concerns around the ethical use of the technology and the repercussions of its misuse. When decision-makers decide on the basis of their own personal biases or preconceptions rather than the true merits of the case, ethical problems can arise. This research paper analyses the modern difficulties surrounding the application of AI in decision-making and how it is influenced by personal bias.

RESEARCH QUESTIONS

  • What are the ethical difficulties surrounding natural justice in administrative law when decision-making is powered by AI?
  • What are the potential modern challenges to administrative processes in India?

REVIEW OF LITERATURE

Barocas, Solon and Andrew D. Selbst (2016)[1]: Despite the abundance of evidence demonstrating the existence of bias in algorithmic decision-making in the United States, there are only a limited number of legislative safeguards in place to protect individuals from its adverse impacts.

Bavitz, C., Holland, A., & Nishi, A. (2019)[2]: The legal framework now in place in the United States with regard to artificial intelligence and robotics has seen significant development over the past ten years, but there is still much room for improvement. Increased collaboration between industry leaders, academics, non-profit organizations, and government entities can secure future legal protections for innovators and the public at large. Although it may be challenging for lawmakers to foresee the legal implications of cutting-edge technologies, such protections can be achieved through this collaboration. As the pace of development and adoption of AI systems quickens, accountability for algorithmic and online analytical systems should become a national goal; this will strengthen accountability and make it easier to enforce other rights.

PERSONAL BIAS IN ADMINISTRATIVE LAW

Personal bias can have a substantial bearing on administrative law because it can influence both the decision-making procedures of administrative authorities and the outcomes those agencies reach. Administrative authorities are government bodies tasked with enforcing and upholding laws and regulations in a particular area, such as environmental protection, labour standards, or immigration.

Personal biases can cloud the judgement of administrative authorities and lead to conclusions that are not based solely on the information offered to them, despite the fact that administrative agencies are mandated to make decisions grounded in the law and the facts presented. Personal biases can present themselves in a variety of ways, including stereotypical ideas, preconceived notions, and political convictions.

A bias towards a specific group of people, for instance, could lead to discriminatory treatment at the hands of an agency that is responsible for implementing laws that have an effect on that group.

Personal prejudice is another factor that can influence whether or not administrative proceedings are fair and unbiased[3]. Individuals have the right to be provided with due process by administrative authorities, which includes the right to a fair hearing and a decision that is based on the evidence that was presented during the hearing.

Administrative agencies typically have policies and processes in place to ensure that decisions are made in an unbiased manner, which helps to reduce the impact of personal prejudice[4]. Rules pertaining to conflicts of interest, guidelines for recusal, and training for agency staff on how to recognise and prevent personal biases are examples of such measures. In addition, decisions made by administrative agencies are subject to judicial review, which can serve as a check on decisions driven by personal bias rather than by the law and the facts presented.

MODERN CHALLENGES IN GOVERNANCE OF NATURAL JUSTICE IN ADMINISTRATIVE LAW

An essential component of administrative law is the concept of natural justice, which is synonymous with procedural fairness. When making decisions that could affect individuals or groups positively or negatively, decision-makers are obligated to act fairly and impartially. Nonetheless, natural justice is being tested in current times by a number of issues in administrative law. Among these difficulties are the following:

Administrative decision by algorithm or artificial intelligence: With the growing adoption of technology, there is a high chance that decision-making may be assigned to algorithms or artificial intelligence without appropriate human oversight. This presents a challenge for administrative decision-makers and raises questions about how natural justice concepts apply in such situations.

Expediency versus fairness: The desire to preserve fairness and natural justice frequently competes with the need to make decisions in a timely manner. This can be particularly difficult where there is a backlog of cases or where decisions must be made quickly.

Biases and conflicts of interest: The ability of decision-makers to act fairly and impartially may be hampered by personal biases or conflicts of interest. It is not always easy to recognise and address such problems, especially where there is a lack of openness or accountability.

Access to justice: When individuals believe that their rights have been infringed, it can be difficult for them to seek remedy since the cost and complexity of legal proceedings might make it difficult for them to do so. This may be an especially difficult challenge for communities that are already marginalized or disadvantaged.

Problems on the international level: As a result of the rising globalization of trade and the mobility of people, there are occasions in which choices made by administrative bodies may have consequences on the international level. Because of this, it may be challenging to apply the principles of natural justice in a manner that is both consistent and effective.

It is necessary to ensure that decision-makers receive training in natural justice concepts, that there are proper oversight and accountability mechanisms in place, and that there is adequate access to justice for all individuals and groups in order to address these difficulties. In addition, in order to sustain the principles of natural justice in a society that is always changing, it is possible that new methods and technology will be required.

ADMINISTRATIVE DECISION-MAKING POWERED BY ARTIFICIAL INTELLIGENCE

Administrative decision-making that is powered by algorithms or artificial intelligence (AI) is becoming more widespread in a variety of industries, including the healthcare industry, the government, and the financial industry. It is possible to automate and streamline decision-making processes with the help of AI and algorithms, which can also enhance efficiency and decrease human bias.

On the other hand, the utilisation of algorithms and AI in the administrative decision-making process has been met with some scepticism. One cause for concern is the possibility of algorithmic bias, which occurs when a computer programme is designed in such a way that it favours specific groups of people because of their ethnicity, gender, or other traits. This can lead to outcomes that are unfair and can perpetuate disparities that already exist in society.
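To make this concern concrete, the following is a minimal sketch in Python of how disparate outcomes across groups might be surfaced in practice. The decisions, group labels, and the 80% rule of thumb are hypothetical illustrations, not a legal test applied by any Indian authority.

```python
# Minimal sketch (hypothetical data and threshold): checking whether an
# automated decision system produces disparate outcomes across groups,
# using the "80% rule" often cited in disparate-impact analysis.

def approval_rate(decisions, groups, target_group):
    """Share of favourable decisions (1 = favourable) received by one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Protected group's approval rate relative to the reference group's."""
    return (approval_rate(decisions, groups, protected)
            / approval_rate(decisions, groups, reference))

# Hypothetical outcomes of an automated benefit-eligibility tool.
decisions = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected="B", reference="A")
if ratio < 0.8:  # common rule of thumb, not a statutory standard in India
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```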

A further source of concern is the absence of openness and accountability in the decision-making process that involves algorithms. Because of the complexity of some algorithms, it may be challenging for individuals to comprehend the process by which decisions are formed or to contest decisions that they consider to be unreasonable or erroneous. This can also lead to problems over who is accountable for the decisions that are made by algorithms.

In order to address these concerns, it is essential to guarantee that AI and algorithms are developed and deployed in a manner that is both ethical and open to public scrutiny. This could involve incorporating ethical considerations into the design of algorithms, such as making sure that the data used to train them is varied and representative of the target population. It may also involve establishing transparent methods through which individuals can contest judgements made by algorithms, and holding organisations accountable for the decisions their algorithms make. As AI technologies develop, administrative agencies may increasingly rely on AI tools to help them draft, implement, and enforce delegated legislation. AI tools could be used to analyse data and identify patterns that can inform the creation of new regulations or the modification of existing ones. AI could also help agencies automate the process of drafting regulations, potentially reducing the time and cost involved in creating delegated legislation. Additionally, AI could be used to help agencies monitor compliance with regulations by automatically analysing data and identifying potential violations, as sketched below.
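As a purely illustrative sketch of the compliance-monitoring idea mentioned above, the snippet below flags filings that exceed a hypothetical regulatory limit and refers them to a human inspector rather than imposing any penalty itself; the limit, field names, and figures are invented for illustration.

```python
# Minimal sketch (hypothetical rule and data): automated compliance monitoring
# that flags filings exceeding a regulatory limit for human follow-up,
# rather than issuing penalties on its own.

EMISSION_LIMIT = 50.0  # hypothetical limit assumed to be set by regulation

filings = [
    {"facility": "Plant A", "emissions": 42.0},
    {"facility": "Plant B", "emissions": 67.5},
    {"facility": "Plant C", "emissions": 55.1},
]

flagged = [f for f in filings if f["emissions"] > EMISSION_LIMIT]
for f in flagged:
    print(f"{f['facility']}: reported {f['emissions']} exceeds the limit of "
          f"{EMISSION_LIMIT}; referred to an inspector for review")
```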

However, the use of AI in delegated legislation also raises important questions and concerns: there may be questions about the reliability and accuracy of the AI algorithms used to create regulations, or concerns about the potential for AI to reinforce existing biases or discrimination.

Additionally, there may be questions about the appropriate level of human oversight and control over AI-assisted delegated legislation, and whether the use of AI could lead to a loss of democratic accountability or transparency.

PERSONAL BIAS AND ARTIFICIAL INTELLIGENCE

In Hot Holdings v. Creasy[5], the majority of the High Court of Australia stressed that a final decision is not necessarily affected merely because those who supplied information to the decision-maker had interests in the decision.

However, the implementation of artificial intelligence in administrative law can face considerable challenges, one of which is personal prejudice. Artificial intelligence systems are only as objective as the data they are trained on; if the training data is biased or inadequate, the AI system will be biased as well.

Furthermore, the personal biases of the personnel responsible for building and implementing AI systems might also affect the results. For instance, if a person holds a prejudiced view of what constitutes acceptable behaviour, they may train the AI system in a way that discriminates against particular categories of individuals.

It is crucial to guarantee that the training data is representative and unbiased if one wants to reduce the impact of personal bias on AI in administrative law. In addition, artificial intelligence (AI) systems should be built to be transparent. This would ensure that it is obvious how decisions are being made and that there is responsibility for any biases that are discovered. In addition to this, it is crucial to make certain that human oversight is in place to review the decisions that are made by the AI systems and to intervene if necessary. It is possible that this will help to reduce the influence of the individual biases of those responsible for the design and implementation of AI systems.
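The following is a minimal sketch of what such human oversight could look like in practice: an illustrative gate that routes adverse or low-confidence AI recommendations to a human officer before any final decision issues. The thresholds, field names, and outcome labels are assumptions made for illustration, not a prescribed standard.

```python
# Minimal sketch (hypothetical thresholds and fields): a human-in-the-loop
# gate that prevents an AI recommendation from becoming a final administrative
# decision when it is adverse to the individual or insufficiently confident.

from dataclasses import dataclass

@dataclass
class Recommendation:
    applicant_id: str
    outcome: str        # e.g. "grant" or "refuse"
    confidence: float   # the system's own confidence score, 0.0 to 1.0

def requires_human_review(rec: Recommendation, min_confidence: float = 0.9) -> bool:
    """Adverse or low-confidence recommendations must be reviewed by an officer."""
    return rec.outcome == "refuse" or rec.confidence < min_confidence

def finalise(rec: Recommendation) -> str:
    if requires_human_review(rec):
        return f"{rec.applicant_id}: queued for review by a human officer"
    return f"{rec.applicant_id}: decision '{rec.outcome}' issued, subject to appeal"

print(finalise(Recommendation("APP-001", "grant", 0.97)))
print(finalise(Recommendation("APP-002", "refuse", 0.99)))
```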

CHALLENGING AI-BASED ADMINISTRATIVE JUDGEMENTS IN INDIA

In India, administrative judgements generated by algorithms can be challenged on several legal grounds, such as:

Violation of Basic Rights: If an algorithmic decision (AI based Decision) infringes any fundamental rights granted under the Indian Constitution, such as the right to equality, right to life, and individual liberty, it can be contested in court.

Lack of Transparency: If the algorithm employed in making the decision is not transparent and there is no way to discern how the result was arrived at, it can be challenged on the grounds of lack of openness.

Bias and Discrimination: If the algorithmic judgement is biased or discriminates against a certain group of people based on their race, religion, gender, caste, etc., it might be challenged as discriminatory.

Violation of Statutory Provisions: If the algorithmic decision violates any statutory laws or regulations, it might be challenged on the grounds of illegality.

Lack of Human Oversight: If the algorithmic judgement is made without any human review or intervention, it can be challenged on the ground of lack of accountability.

Error or Inaccuracy: If the algorithmic judgement is based on erroneous or incomplete data, or if there is a technical problem in the algorithm, it might be challenged on the basis of error or inaccuracy.

Lack of Sufficient Explanation: If the algorithmic judgement is not adequately explained to the affected individual, or if they are not given an opportunity to be heard, it can be contested on the ground of breach of the principles of natural justice.

CONCLUSION

Governmental agencies that use AI-based systems to make, or to provide decision support for, significant decisions about individuals should take extra precautions to guarantee the efficacy and fairness of those systems, grounded in credible verification and validation. The impact of AI on delegated legislation in administrative law will depend on a range of factors, including the development and adoption of AI technologies, the legal and regulatory frameworks governing administrative agencies, and broader societal and political debates around the use of AI in public policy.

In India, there is currently no law that specifically governs the application of AI and the newer technologies pouring in. Decision-making powered by AI faces a significant obstacle in the form of explainability: how did an artificial intelligence arrive at the conclusion it did with the data it was given? The data supplied may itself have been altered through procedural impropriety, a risk that undoubtedly exists. When does serving different people differently qualify as bias or discrimination, and when does it amount to an acceptable form of personalisation? This is not a straightforward yes-or-no question, but there is a need to understand that technology cannot be expected to be free from all kinds of bias, since the source of the data can itself be challenged.
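As a simple illustration of the explainability problem, a transparent scoring rule can at least state how each factor contributed to an outcome and thereby support the duty to give reasons; the weights and factors below are hypothetical placeholders for whatever inputs a real system might use.

```python
# Minimal sketch (hypothetical weights and factors): with a transparent linear
# scoring rule, each factor's contribution to the outcome can be stated in
# plain terms when reasons for the decision are demanded.

weights = {"income_verified": 2.0, "prior_default": -3.0, "years_resident": 0.5}
applicant = {"income_verified": 1, "prior_default": 1, "years_resident": 4}

contributions = {factor: weights[factor] * applicant[factor] for factor in weights}
score = sum(contributions.values())
decision = "grant" if score >= 0 else "refuse"

print(f"Decision: {decision} (score = {score})")
for factor, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {factor}: contributed {value:+.1f}")
```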

It is necessary to frame and propose legislation for this advancing technology in order to keep a check on liability and the accountability of the data used. In summary, administrative judgements made by algorithms can be challenged on numerous legal grounds in India, and it is necessary to ensure that the application of algorithms in decision-making is transparent, responsible, and compliant with legal and ethical principles.

BIBLIOGRAPHY

  1. JOURNALS/ARTICLES
    • Barocas, Solon and Andrew D. Selbst, “Big Data’s Disparate Impact”, California Law Review, Vol. 104, Issue 3, pp. 671-732, 2016.
    • Bavitz, C., Holland, A., & Nishi, A. (2019). Ethics and Governance of AI and Robotics.
    • “Fair and Equitable Treatment – a Sequel” [2013] United Nations Conference on Trade and Development (UNCTAD) Series on Issues in International Investment Agreements II
  2. BOOKS
    • CK Takwani Administrative Law, Seventh Edition
  3. CASELAWS
    • A.V. Bellarmin vs Mr. V. Santhakumaran Nair, Madras High Court (2015)
    • Hot Holdings Pty Ltd v. Creasy [2002] HCA 51, 210 CLR 438

[1] Barocas, Solon and Andrew D. Selbst, “Big Data’s Disparate Impact”, California Law Review, Vol. 104, Issue 3, pp. 671-732, 2016.

[2] Bavitz, C., Holland, A., & Nishi, A. (2019). Ethics and Governance of AI and Robotics.

[3] “Fair and Equitable Treatment – a Sequel” [2013] United Nations Conference on Trade and Development (UNCTAD) Series on Issues in International Investment Agreements II

[4] A.V. Bellarmin vs Mr. V. Santhakumaran Nair, Madras High Court (2015)

[5] Hot Holdings Pty Ltd v. Creasy [2002] HCA 51, 210 CLR 438