
GUARDING THE SCALES OF JUSTICE: ASSESSING THE HUMAN RIGHTS IMPLICATIONS OF AI TECHNOLOGIES IN CRIMINAL JUSTICE SYSTEMS IN INDIA – Jakey Khan

ABSTRACT

The integration of artificial intelligence (AI) technologies into criminal justice systems has the potential to improve decision-making, reduce bias, and streamline procedures. In India, as AI applications find increasing use in law enforcement, judicial proceedings, and corrections, it is imperative to critically examine their potential human rights implications. This research paper explores the complex link between AI technologies and human rights in the context of Indian criminal justice, analysing the consequences for human rights and identifying important areas of concern. In examining the question of fairness and prejudice, it shows how data bias can result in discriminatory outcomes that disproportionately affect marginalized communities. The right to a fair trial is also examined, with particular emphasis on AI's role in reviewing evidence and the procedural difficulties it poses. On accountability, the paper considers how opaque AI algorithms can be and how difficult it is to assign responsibility for their outputs. Furthermore, the possibility of mass surveillance and data breaches raises privacy and data protection concerns. The legal and ethical framework for AI in Indian criminal justice is assessed in light of current laws and regulations, and the ethical considerations raised by AI implementation are emphasized. The study gives concrete examples of AI technologies in Indian criminal justice systems and their impact on human rights; these instances shed light on the complexities and difficulties involved in striking a balance between the protection of basic rights and technological progress. In light of the findings, the study proposes a set of safeguards and guidelines to reduce the threats that AI in criminal justice poses to human rights. The importance of fairness and bias reduction, as well as the need for transparent and explainable AI systems, is emphasized. The researcher recommends human rights impact assessments and strengthened accountability structures as ways to encourage responsible AI use. Given AI's swift integration into Indian criminal justice systems, the paper's conclusion emphasizes the need to vigilantly protect the balance of the justice system. The ideas presented here support ethical AI development and informed policymaking, ensuring that technological breakthroughs respect fundamental human rights and promote a just and equitable society.

Keywords:

Artificial Intelligence, Algorithms, Criminal Justice, Human Rights, Surveillance, Privacy, Ethics.

INTRODUCTION:

Former British Prime Minister William Ewart Gladstone once observed that “justice delayed is justice denied.” Since then, jurists have worked tirelessly to remedy this, employing a variety of strategies, one of which is technology. The Prime Minister of India, Narendra Modi, has time and again urged the integration of Artificial Intelligence (AI) in courts for smoother justice delivery.[1] According to former Law Minister Kiren Rijiju, approximately 4.32 crore cases were still pending in district and subordinate courts as of December 31, 2022. He added that more than 69,000 cases were pending at the Supreme Court and more than 59 lakh cases were in the backlog at the 25 High Courts across the nation. Citing information posted on the court's website, Mr. Rijiju said that as of February 1, 2023, 69,511 cases were still outstanding in the Supreme Court.[2] The administration has taken several steps to create a “suitable environment” for the judiciary to resolve disputes quickly. As a result of technological breakthroughs, particularly those involving Artificial Intelligence (AI), the administration is interested in integrating AI into courts and the wider criminal justice system.

In India, there is a significant backlog of cases awaiting resolution by the courts as well as a high number of pending cases. When the pandemic struck the nation in 2020, the situation worsened. However, the judicial system embraced technological advancements in order to modernize and digitize court operations during COVID-19.[3] The pandemic accelerated this adoption of technology in the criminal justice system and paved the way for the integration of artificial intelligence (AI). Although the criminal justice system is not wholly dependent on AI and the process is not yet fully automated, given the pendency of cases it seems likely that this technology will soon be given greater autonomy in deciding cases. Numerous industries have been transformed by AI technologies, and India's criminal justice system is no exception. There is considerable enthusiasm for the potential advantages AI applications could provide, such as improved efficiency and reduced human bias, as they continue to penetrate law enforcement, judicial procedures, and incarceration. Despite these assurances, it is necessary to critically assess how AI will affect human rights in the context of criminal justice.

ARTIFICIAL INTELLIGENCE:

There are many definitions of artificial intelligence (AI), but John McCarthy provides the following one in his 2004 paper: “It is the science and engineering of constructing intelligent devices, especially intelligent computer programs. Although it is related to the similar task of utilizing computers to comprehend human intellect, AI should not be limited to techniques that can be observed by biological means.”[4] Artificial intelligence (AI) is the capacity of a digital computer or computer-controlled robot to carry out tasks frequently performed by intelligent beings. The phrase is widely used in reference to the effort to create AI systems that possess human-like cognitive abilities, such as the capacity for reasoning, meaning-finding, generalization, and learning from experience.[5]

Therefore, the term artificial intelligence (AI) can be understood as a branch of computer science in which algorithms are created that mimic or simulate human intelligence. AI algorithms are capable of learning, planning, decision-making and similar tasks that typically require human intelligence. In the last decade the potential of AI has been realized, and significant research and development by governments as well as corporations has produced major breakthroughs in the field. From machine learning and computer vision, the field has expanded to include generative AI as well as Natural Language Processing (NLP). A fully sentient Artificial General Intelligence, however, remains in the distant future, though considering the advances made in this field such a prospect may one day be achieved.

ARTIFICIAL INTELLIGENCE (AI) IN CRIMINAL JUSTICE SYSTEMS:

The use of Artificial Intelligence (AI) technologies in criminal justice systems has the potential to improve efficiency, decision-making, and resource allocation. AI applications have been widely used across the criminal justice system's different components, including law enforcement, courtroom procedures, and incarceration, with the goal of improving decision-making, resource optimization, and operational effectiveness. Considerable research and development has enabled AI to impact criminal justice systems in the following areas:

  1. Predictive Policing: One of the primary AI technologies used in the Indian criminal justice system is predictive policing. In order to forecast potential crime hotspots and allocate resources more effectively and efficiently, it involves using sophisticated algorithms to analyze historical crime data and discover patterns. By anticipating when and where crimes are likely to occur, law enforcement can take proactive action to prevent criminal activity and improve public safety. In 2019, the Uttar Pradesh Police signed a Memorandum of Understanding with the Indian Institute of Technology, Kanpur for predictive policing.[6] In the same year, Dr. Himanta Biswa Sharma, the current Chief Minister of Assam and then Finance Minister of Assam, allocated funds in his budget speech for the implementation of predictive policing in Assam.[7]
  2. Risk Assessment Tools: The predictive nature of AI accounts for much of its development across criminal justice, law and forensics, and forensic psychology. The use of algorithmic risk evaluations in law enforcement is very common today. In India's criminal justice system, AI-powered risk assessment systems may be used to estimate how likely it is that a person will commit another crime or miss a court date.[8] These tools assess a number of variables, including a person's criminal history, socioeconomic status, and demographic data, to determine the degree of risk involved in a given case, helping courts make better-informed choices about bail, pretrial detention, and sentencing (a simplified, purely illustrative sketch of such a scoring tool follows this list).
  3. Facial Recognition Technology: In India's criminal justice system, facial recognition technology has become more prevalent, particularly for law enforcement needs. AI-driven facial recognition systems examine pictures and videos to identify people in databases such as criminal records and missing person lists. This technology helps with suspect identification, missing person searches, and improved surveillance capabilities. In India the application of AI has already established a strong footing in this area: for example, the AI-based face recognition system named ABHED (AI-Based Human Efface Detection) developed by Staqu Technologies with the assistance of the Punjab and Rajasthan Police; AI-powered equipment introduced by the Odisha Police to analyze crime data; TRINETRA, an AI-based face recognition app launched by the Uttar Pradesh Police; the E-Pragati database launched by the Andhra Pradesh Government; and, in collaboration with IIT Delhi, an AI centre established by the Delhi Police to address crimes.[9] Moreover, in order to identify suspects, follow their activities, and reconstruct events from surveillance footage, law enforcement organizations and investigators use AI video analytics, which speeds up investigations by automating the extraction of pertinent information from video data.[10]
  4. Sentencing Algorithms: AI-based sentencing algorithms help judges decide on appropriate penalties for criminal offenders. These algorithms take into account a number of variables, such as the gravity of the offence, the accused person's prior criminal record, and mitigating circumstances, and are designed to encourage uniformity and objectivity in sentencing decisions by offering data-driven sentencing suggestions.[11] Countries such as the United States of America, China, Canada and Australia have already implemented such algorithms for sentencing purposes.[12]
  5. Natural Language Processing (NLP): AI technologies based on Natural Language Processing (NLP) are used to analyze and extract important insights from massive amounts of unstructured textual data.[13] To gather intelligence and aid investigations, these tools can analyze legal documents, case law, police reports, and social media posts.
  6. Virtual Parole Officers: AI-driven systems that help with the supervision and monitoring of parolees and probationers are known as virtual agents or virtual parole officers. These systems support the management of those transitioning back into society after serving their sentences by sending out reminders for appointments, curfew checks, and behavioural assessments.[14] The trend of virtual officers started during the COVID-19 pandemic in order to contain the spread of the virus, but it is likely to persist given its potential.
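
To make the risk assessment tools described in item 2 above more concrete, the following is a minimal, purely illustrative sketch in Python. All field names, weights, and thresholds are hypothetical assumptions chosen for exposition; they do not describe any tool actually deployed in India or elsewhere.

```python
# Purely illustrative, hypothetical risk-scoring sketch (Python).
# All field names, weights, and thresholds are invented for exposition only;
# they do not describe any tool actually deployed in India or elsewhere.

from dataclasses import dataclass


@dataclass
class CaseRecord:
    prior_convictions: int          # count of prior convictions
    prior_failures_to_appear: int   # missed court dates
    age_at_first_offence: int
    pending_charges: int


def risk_score(record: CaseRecord) -> float:
    """Return a score between 0 and 1; higher means higher assessed risk."""
    score = 0.0
    score += 0.15 * min(record.prior_convictions, 5)
    score += 0.20 * min(record.prior_failures_to_appear, 3)
    score += 0.10 * min(record.pending_charges, 3)
    if record.age_at_first_offence < 18:
        score += 0.10
    return min(score, 1.0)


def risk_band(score: float) -> str:
    """Map the numeric score to the coarse band a court might actually see."""
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "medium"
    return "high"


if __name__ == "__main__":
    example = CaseRecord(prior_convictions=2, prior_failures_to_appear=1,
                         age_at_first_offence=21, pending_charges=0)
    s = risk_score(example)
    print(f"score={s:.2f}, band={risk_band(s)}")
```

Even this toy example makes visible why the choice and weighting of inputs matters: factors that act as proxies for socioeconomic status or community can silently drive a “high risk” label, which is precisely the bias concern discussed later in this paper.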

HUMAN RIGHTS CONCERNS IN AI-DRIVEN CRIMINAL JUSTICE SYSTEMS:

A variety of human rights issues are raised by the incorporation of artificial intelligence (AI) technologies into criminal justice systems, which calls for careful examination. Although AI has the potential to improve efficiency and objectivity, it also poses serious threats to fundamental human rights concepts. In this section, the researcher examines important issues related to human rights that may arise from India’s usage of AI in criminal justice systems.

  1. Data Bias: AI algorithms rely on past data to generate predictions and judgements. If the training data used to create these algorithms is biased or reflects historical discrimination, the AI systems may reinforce and amplify those biases.[15] As a result, marginalized communities may be disproportionately affected by unfair and discriminatory decisions.
  2. Disproportionate Impact: AI-driven decision-making in areas such as predictive policing and risk assessment may lead to certain groups being over-policed and over-incarcerated on the basis of their socioeconomic status, race, or other demographic characteristics.[16] This raises concerns about equal protection and anti-discrimination rights (an illustrative sketch of how such disparities can be measured follows this list).
  3. AI in Evidence Evaluation: When AI is used to evaluate evidence, such as fingerprint or DNA analysis, the accuracy and dependability of the technology come into question.[17] AI systems may produce injustices and violations of the right to a fair trial if they are not adequately validated and do not have a transparent decision-making process.
  4. Challenges to Due Process: The use of AI tools for pre-trial risk assessment and sentencing may restrict judicial discretion and make it more difficult to take individual circumstances into account.[18] This might violate the principle of proportionality in sentencing and affect the right to due process.
  5. Lack of Transparency in AI Algorithms: Criminal justice AI systems frequently function as “black boxes,” meaning that their decision-making procedures are not entirely transparent or clear.[19] The difficulty of holding AI systems accountable for their choices may, as a result, infringe the right to an explanation and the right to contest judgements.
  6. Challenges in Assigning Responsibility: Assigning blame can be difficult where AI influences choices or actions that violate human rights.[20] Determining responsibility for AI-generated outputs and ensuring that appropriate accountability mechanisms are in place is a significant challenge.
  7. Surveillance Implications: The growing use of facial recognition and other surveillance AI technologies raises concerns about intrusive and extensive surveillance.[21] This may violate individuals' rights to privacy and freedom of movement.
  8. Data Security and Privacy: AI systems process massive volumes of sensitive personal data.[22] Insufficient data protection measures can result in data breaches, putting people at risk of privacy violations and the misuse of their personal information.
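
As a concrete illustration of the data-bias and disproportionate-impact concerns in items 1 and 2 above, the short sketch below compares how often a hypothetical AI tool flags members of two groups as “high risk”. The group labels, the sample records, and the 80% (“four-fifths”) comparison threshold are assumptions chosen purely for exposition, not a prescription for any particular audit standard.

```python
# Minimal disparate-impact check on a hypothetical tool's outputs.
# Groups, records, and the four-fifths threshold are illustrative assumptions.

from collections import defaultdict

# Each record: (group label, whether the tool flagged the person as high risk)
predictions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]


def flag_rates(records):
    """Return the share of people flagged 'high risk' in each group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, is_flagged in records:
        totals[group] += 1
        if is_flagged:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}


rates = flag_rates(predictions)
lowest, highest = min(rates.values()), max(rates.values())
ratio = lowest / highest if highest else 1.0

print("flag rates by group:", rates)
print(f"impact ratio (lowest/highest): {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths rule' used here as a rough screening heuristic
    print("Warning: possible disparate impact; investigate the training data.")
```

A check of this kind does not prove discrimination by itself, but a large gap in flag rates between groups is exactly the kind of signal that should trigger the audits and human review discussed in the safeguards section below.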

PROPOSED SAFEGUARDS

The Code of Criminal Procedure, 1973 currently governs trials in India's criminal justice system and must be followed throughout. It has proven quite successful in ensuring that a fair trial is conducted. Cybercrimes are covered by the Information Technology Act, 2000 (as amended in 2008), which also sets out the offences and the procedures for investigation and trial. It is crucial to recognize that artificial intelligence is a completely distinct domain that needs its own regulation: unlike traditional crimes or cybercrimes, where a suspect can be held accountable, courts will be at a loss when dealing with machines that have their own intelligence and the autonomy to make decisions affecting human lives. Therefore, the adoption of strong safeguards and rules is crucial to address the ethical issues and human rights implications of the use of AI technologies in Indian criminal justice. The researcher suggests the following measures to guarantee the responsible and rights-based application of AI in the criminal justice system:

  1. Diverse and Representative Data: Artificial Intelligence (AI) can be understood as an infant who, if raised correctly, will become a good citizen and, if raised unethically, will become a deviant. Accordingly, to reduce the possibility of discriminatory outcomes persisting, AI training datasets should be varied, representative, and free of historical biases.
  2. Bias Audits: It is of utmost importance to regularly audit AI algorithms to find and correct biases, and to create systems to identify and correct unfair judgements based on a person's race, gender, religion, or other protected traits. AI may be relied upon for faster and more effective results, especially as every other forum and organization moves to use it, but one must not rely on it blindly; regular checks are needed to verify that it is working correctly.
  3. Explainable AI: It is important to promote the creation of explainable AI solutions that help stakeholders and users understand the reasoning behind decisions made by AI, increasing accountability and transparency. Ultimately, AI is being given autonomy to take decisions that will impact human lives, and the humans whose lives are affected by such decisions deserve to know how the AI reached them (an illustrative sketch follows this list). This will also mitigate the “black box” effect commonly associated with AI.
  4. Algorithmic Transparency: It is necessary to establish precise rules for the transparency of AI algorithms employed in the criminal justice system. Decision-making procedures should be transparent and open to review by the relevant authorities. Additionally, it is important to ensure that the AI application remains faithful to the purpose for which it is being used.
  5. Human Oversight: It is necessary to require human oversight at crucial points in the AI decision-making process to avoid over-reliance on technology and to preserve human judgement in critical situations. This oversight should work much like the appellate system, in which the higher judiciary can rectify or overrule the decisions or opinions of the lower courts.
  6. Impact Assessments: Impact assessment is a critical procedure that evaluates the effects and outcomes of implementing a particular policy, program, or technology in a specific setting. In the context of criminal justice, it is a systematic analysis of the effects of adopting AI on different factors, including human rights and social, economic, and ethical considerations. The objectives are to make informed decisions, recognize potential hazards, and ensure the ethical and responsible deployment of AI systems.
  7. Anonymization and Consent: To safeguard people's privacy and identities, personal identifiers should be removed or encrypted from data. Anonymizing the data used in predictive policing, risk assessment, or other AI applications in criminal justice helps ensure that private information cannot be linked to identifiable people. This measure helps avoid potential prejudice and discriminatory outcomes based on individual traits (a simplified sketch of this approach follows this list). To respect individuals' autonomy and safeguard their right to privacy, it is also essential to seek their informed consent before using their data for AI applications. Consent guarantees that people are informed about how their data will be used and have the choice to refuse or withdraw it at any time.
  8. Secure Data Handling: The security and integrity of data generated by AI must be protected at all costs through secure data handling. Large amounts of sensitive data, such as biometric information, criminal histories, and other personal information, are frequently processed by AI systems. Encryption, access controls, and frequent audits are examples of strong security measures that protect against unauthorized access, data breaches, and cyberattacks. AI-driven criminal justice systems maintain data privacy and protect persons from potential harm by abiding by data protection legislation and industry best practices in data security.
  9. Complement, not Replace: In the criminal justice system, AI should be developed to support human decision-making rather than to replace it. In court procedures, judges, prosecutors, and other legal professionals provide crucial knowledge, empathy, and contextual understanding that AI may lack. In order to preserve the integrity of due process and the right to a fair trial, human judgement is essential for making sure that unique circumstances and mitigating considerations are taken into account in the proper manner. Artificial intelligence (AI) should be used as a decision-supporting tool, offering data-driven insights and supporting evaluations, but ultimately leaving the final decisions to human specialists.
  10. Regular Training: For the appropriate and efficient use of AI technology in criminal justice, ongoing training for judges, attorneys, law enforcement officials, and other stakeholders is crucial. Training programs ought to cover the fundamentals of AI, as well as its constraints and potential biases. Training should also emphasize how to interpret outputs produced by AI, how to question or verify AI judgements, and how to combine human judgement with AI recommendations. Regular updates and retraining will keep professionals informed about the most recent advancements and best practices, ensuring that AI integration remains efficient and ethically sound.
  11. Stakeholder Consultation: In order to promote openness, inclusion, and democratic decision-making in the application of AI technology, it is essential to involve stakeholders, such as civil society organizations, human rights activists, academics, and impacted communities. Stakeholders can offer insightful information, point out potential dangers, and express worries about how AI will affect human rights. Achieving greater public trust and accountability and ensuring that AI applications adhere to societal norms and human rights standards are all achieved by incorporating various viewpoints into the design and oversight of AI-driven criminal justice systems.
  12. Independent Oversight: To objectively monitor and assess the effects of AI technology on the criminal justice system, it is essential to establish an independent oversight body. Such a body should have the authority to carry out audits, inquiries, and impact evaluations to determine whether AI applications are fair, transparent, and compliant with moral and ethical principles. An independent body increases public trust in the use of AI, ensures accountability, and offers a procedure for redress in the event that human rights are violated as a result of AI.
  13. Ethical Frameworks: First, the legislature should lay down in black and white the ethical framework necessary to regulate AI, and then thorough ethical standards should be applied to the use of AI in criminal justice, with a focus on fairness, accountability, transparency, and respect for human rights.
  14. Ethical Review Boards: Ethical review boards, made up of specialists in law, ethics, technology, and human rights, can play a crucial part in assessing the ethical consequences of AI applications in criminal justice. These boards should examine potential effects on individual rights, fairness, and social values before AI technologies are implemented. The boards should offer advice, recommend changes, or reject AI applications that carry major ethical dangers. Acting as a preventive tool, ethical review boards or committees can guarantee that the use of AI complies with ethical norms and human rights principles from the outset, preventing potential harms and moral conundrums.
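
To illustrate the explainability safeguard in item 3, the sketch below extends the hypothetical risk-scoring example given earlier in this paper: instead of returning only a number, it returns a per-factor breakdown so that the person affected (and the court) can see which inputs drove the output. The factors and weights remain invented assumptions for exposition only.

```python
# Illustrative "explainable" version of the earlier hypothetical risk score:
# it returns each factor's contribution alongside the total, so the basis of
# the output can be reviewed and contested. Weights and factors are invented.

def explain_risk_score(prior_convictions: int,
                       prior_failures_to_appear: int,
                       age_at_first_offence: int) -> dict:
    contributions = {
        "prior_convictions": 0.15 * min(prior_convictions, 5),
        "prior_failures_to_appear": 0.20 * min(prior_failures_to_appear, 3),
        "young_age_at_first_offence": 0.10 if age_at_first_offence < 18 else 0.0,
    }
    total = min(sum(contributions.values()), 1.0)
    return {"total_score": round(total, 2),
            "contributions": {k: round(v, 2) for k, v in contributions.items()}}


if __name__ == "__main__":
    report = explain_risk_score(prior_convictions=2,
                                prior_failures_to_appear=1,
                                age_at_first_offence=21)
    for factor, value in report["contributions"].items():
        print(f"{factor}: +{value}")
    print("total:", report["total_score"])
```

Even such a simple factor-by-factor breakdown gives the affected person something concrete to contest, which an opaque score alone does not.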
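
To ground the anonymization safeguard in item 7, the following sketch shows one common approach: stripping direct identifiers and replacing the record key with a salted one-way hash (strictly speaking, pseudonymization) before records are fed to an analytics or AI pipeline. The field names and the salt handling are simplified assumptions; a production system would need proper key management, legal review, and stronger re-identification safeguards.

```python
# Simplified pseudonymization sketch: drop direct identifiers and replace
# the record key with a salted one-way hash before downstream AI processing.
# Field names and salt handling are illustrative assumptions only.

import hashlib
import os

SALT = os.urandom(16)  # in practice, managed and stored securely, not per run

DIRECT_IDENTIFIERS = {"name", "address", "phone", "id_number"}


def pseudonymize(record: dict) -> dict:
    """Remove direct identifiers and add a salted hash as a stable pseudonym."""
    pseudonym = hashlib.sha256(SALT + record["name"].encode("utf-8")).hexdigest()
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["pseudonym"] = pseudonym
    return cleaned


record = {
    "name": "A. Example",
    "address": "12 Example Street",
    "phone": "0000000000",
    "id_number": "XXXX-XXXX-XXXX",
    "prior_convictions": 1,
    "district": "Example District",
}

print(pseudonymize(record))
```

Pseudonymization alone is not full anonymization: indirect identifiers such as district or demographic details can still allow re-identification when combined, which is why the consent, access-control, and secure-handling measures in items 7 and 8 remain necessary alongside it.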

CONCLUSION:

Law enforcement, courtroom proceedings, and corrections might all be transformed by the introduction of artificial intelligence (AI) technologies into India's criminal justice system. AI applications offer possibilities for improving effectiveness, minimizing biases, and optimizing resource allocation. However, as this paper has shown, the use of AI in criminal justice also raises important human rights and ethical issues that call for serious attention.

Critical human rights concerns related to AI-driven criminal justice systems include fairness and bias in AI decision-making, the right to a fair trial, accountability, transparency, privacy, and data protection. Fundamental concerns regarding the fair and just use of AI technology are raised by the possibility of biased algorithms, biased results, and limited human oversight. The establishment of robust safeguards and rules is essential to overcome these obstacles. Human rights risks can be reduced and the ethical use of AI in criminal justice improved through the suggested strategies, including varied and representative data, algorithmic transparency, human oversight, and ethical frameworks.

Although India has laws governing the use of technology and human rights, the development of AI necessitates ongoing evaluation of how well they address AI-related issues. To honour India's commitment to safeguarding fundamental rights, it is equally important for AI applications to be compatible with international norms and frameworks such as the Universal Declaration of Human Rights (UDHR) and the EU General Data Protection Regulation (GDPR). It is crucial to strike a balance between technological development and the protection of human rights as the criminal justice system evolves. Transparency, public accountability, and inclusion can be enhanced through the collaborative involvement of stakeholders, such as civil society organizations and impacted communities, in the development, deployment, and oversight of AI technology.

In conclusion, ethical norms, fairness, and accountability must be prioritized as part of a comprehensive strategy for the ethical integration of AI in Indian criminal justice. India can harness the potential of AI while preserving the rights and dignity of its population by aligning AI applications with human rights norms and putting strong safeguards in place. The path to a fair and just AI-driven criminal justice system necessitates ongoing discussion, close observation, and a consistent dedication to upholding human rights in the face of technological developments. Only by guarding the scales of justice can India harness AI's transformative power for the benefit of society.

[1] “PM Modi Urges AI Integration in Courts for Smoother Justice Delivery | AI in Judicial System – Explainer | Technology & Science News, Times Now,” available at: https://www.timesnownews.com/technology-science/pm-modi-urges-ai-integration-in-courts-for-smoother-justice-delivery-ai-in-judicial-system-explainer-article-99498802 (last visited July 26, 2023).

[2] “Nearly 5 Crore Pending Cases In Courts, Over 69,000 In Supreme Court,” available at: https://www.ndtv.com/india-news/nearly-5-crore-pending-cases-in-courts-over-69-000-in-supreme-court-3768720 (last visited July 26, 2023).

[3] Alex Joseph and Dr. Versha Vahini, “Is the Indian judiciary prepared for technological revolution finally? A critical analysis of judicial functioning in the COVID world,” 2022 Journal of Positive School Psychology 6819–6825 (2022).

[4] “What is Artificial Intelligence (AI)? | IBM,” available at: https://www.ibm.com/topics/artificial-intelligence (last visited July 27, 2023).

[5] “Artificial intelligence (AI) | Definition, Examples, Types, Applications, Companies, & Facts | Britannica,” available at: https://www.britannica.com/technology/artificial-intelligence (last visited July 27, 2023).

[6] “UP Police To Go Tech-Savvy To Prevent Crimes, Signs MoU With IIT Kanpur For Predictive Policing,” available at: https://swarajyamag.com/insta/up-police-to-go-tech-savvy-to-prevent-crimes-signs-mou-with-iit-kanpur-for-predictive-policing (last visited July 27, 2023).

[7] “artificial intelligence | Artificial intelligence to create smart cops in Assam – Telegraph India,” available at: https://www.telegraphindia.com/north-east/artificial-intelligence-to-create-smart-cops-in-assam/cid/1684018 (last visited July 27, 2023).

[8] “Artificial Intelligence in Criminal Justice: How AI Impacts Pretrial Risk Assessment,” available at: https://blog.carlow.edu/2021/07/27/artificial-intelligence-in-criminal-justice/ (last visited July 27, 2023).

[9] “AI and Indian Criminal Justice System – iPleaders,” available at: https://blog.ipleaders.in/ai-and-indian-criminal-justice-system/#Use_of_AI_by_the_judicial_system (last visited July 27, 2023).

[10] “How Data Annotation drives precise AI Video Analytics | LinkedIn,” available at: https://www.linkedin.com/pulse/how-data-annotation-drives-precise-ai-video-analytics-tagx/ (last visited July 27, 2023).

[11] Danielle Leah Kehl and Samuel Ari Kessler, “Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing” (2017).

[12] Nigel Stobbs, Daniel Hunter and Mirko Bagaric, “Can sentencing be enhanced by the use of artificial intelligence?” Criminal Law Journal (2017).

[13] Daniel Martin Katz et al., “Natural Language Processing in the Legal Domain” SSRN Electronic Journal (2023).

[14] “‘They track every move’: how US parole apps created digital prisoners | US prisons | The Guardian,” available at: https://www.theguardian.com/global-development/2021/mar/04/they-track-every-move-how-us-parole-apps-created-digital-prisoners (last visited July 27, 2023).

[15] Drew Roselli, Jeanna Matthews and Nisha Talagala, “Managing bias in AI” The Web Conference 2019 – Companion of the World Wide Web Conference, WWW 2019 539–44 (2019).

[16] Terrence Neumann and Nicholas Wolczynski, “Does AI-Assisted Fact-Checking Disproportionately Benefit Majority Groups Online?,” 11 480–90 (2023).

[17] Jasper Ulenaers, “The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge?,” 11 Asian Journal of Law and Economics (2020).

[18] Aleš Završnik, “Criminal justice, artificial intelligence systems, and human rights,” 20 ERA Forum 567–83 (2020).

[19] Roxana Daneshjou et al., “Lack of Transparency and Potential Bias in Artificial Intelligence Data Sets and Algorithms: A Scoping Review,” 157 JAMA Dermatology 1362–9 (2021).

[20] Angelika Adensamer, Rita Gsenger and Lukas Daniel Klausner, “‘Computer says no’: Algorithmic decision support and organisational responsibility,” 7–8 Journal of Responsible Technology 100014 (2021).

[21] Eleni Kosta, “Algorithmic state surveillance: Challenging the notion of agency in human rights,” 16 Regulation & Governance 212–24 (2022).

[22] Andrea Alonso and Jeffrey J. Siracuse, “Protecting patient safety and privacy in the era of artificial intelligence” Seminars in Vascular Surgery (2023).