
FROM MORAL FAULT TO ALGORITHMIC HARM – THE BREAKDOWN OF MENS REA IN CRIMES SHAPED BY ARTIFICIAL INTELLIGENCE – Princy Verma & Tarun Sharma

ABSTRACT

The rapid integration of artificial intelligence (AI) into domains involving decision-making with profound legal consequences has exposed foundational fractures within criminal law, particularly in its reliance on mens rea as the primary marker of moral and legal culpability. Traditionally conceived as the mental element accompanying a criminal act, mens rea presupposes a human subject capable of intention, foresight, and moral judgment. However, contemporary harms increasingly arise from algorithmic systems whose operations are autonomous, opaque, probabilistic, and distributed across multiple human and non-human actors. This article critically interrogates the resulting doctrinal dissonance, conceptualized here as the shift from “moral fault” to “algorithmic harm,” and examines how the classical architecture of criminal liability struggles to accommodate AI-shaped wrongdoing.

Adopting a comparative and critical legal methodology, the research analyzes how India, the European Union (EU), and the United States (US) confront the erosion of mens rea in crimes mediated or shaped by AI. It argues that existing criminal law frameworks inadequately address the responsibility gaps created by algorithmic decision-making, wherein culpability is diffused among developers, deployers, data curators, corporate entities, and regulatory authorities, while the immediate harm appears to be caused by an ostensibly autonomous system. In the Indian context, the continued reliance on anthropocentric notions of intent under the Bharatiya Nyaya Sanhita, 2023 (which replaced the Indian Penal Code, 1860) reveals a normative lag, with limited doctrinal tools to address algorithmic agency. In contrast, the EU's precautionary and regulatory turn, exemplified by the AI Act, reflects an emerging preference for ex ante governance over ex post criminal attribution. The US, meanwhile, demonstrates a fragmented approach characterized by functional liability, prosecutorial discretion, and a growing reliance on civil and corporate accountability mechanisms.

The research critiques the inadequacy of extending personhood or intent to AI systems, warning against both doctrinal fiction and accountability evasion. Instead, it proposes a reconceptualization of mens rea grounded in foreseeability, systemic risk creation, and culpable human design or deployment choices. By foregrounding structural power, technological opacity, and institutional responsibility, the research advances a normative framework that shifts criminal law’s focus from individualized moral blame to collective and organizational culpability. Hence, this research contends that unless criminal law evolves to meaningfully address algorithmic harm, it risks losing both its moral coherence and its legitimacy in an increasingly automated society.

Keywords: Mens Rea, Artificial Intelligence, Algorithmic Harm, Criminal Liability, Moral Culpability, Responsibility Gap, Autonomous Systems.

BACKGROUND

Artificial intelligence has rapidly migrated from experimental research laboratories into the core infrastructures of governance, commerce, and security. Algorithmic systems now shape decisions in criminal justice (predictive policing and sentencing tools), transportation (autonomous vehicles), finance (high-frequency trading), healthcare (diagnostic algorithms), and national security (surveillance and threat assessment). Unlike earlier automated tools, contemporary AI systems, particularly those based on machine learning, operate through probabilistic inference rather than deterministic logic. They evolve over time, generate outputs not explicitly programmed by humans, and frequently resist transparent explanation.[1]

This transformation has profound implications for legal responsibility. Criminal law has historically functioned on the assumption that harmful conduct can be traced to a discernible human agent whose mental state justifies punishment. AI disrupts this assumption by inserting a non-human decision-making layer between human action and harmful outcome. As a result, harms occur without a clearly identifiable guilty mind, creating what scholars describe as a “responsibility gap.”[2]

Philosophical Importance of Mens Rea in Criminal Law

The doctrine of mens rea occupies a foundational position in criminal jurisprudence. Rooted in the maxim actus non facit reum nisi mens sit rea (the act does not make a person guilty unless the mind is also guilty), it embodies the moral intuition that punishment is justified only where wrongdoing reflects culpable choice. Philosophers from Aristotle to Hart have emphasized that moral blame presupposes agency, rational deliberation, and the capacity to choose otherwise.

Modern criminal law operationalizes this moral insight through graded mental states: intention, knowledge, recklessness, and negligence. These categories allow courts to distinguish between accidental harm and blameworthy conduct, ensuring proportionality in punishment. Without mens rea, criminal law risks collapsing into a purely consequentialist system that punishes harm irrespective of moral fault.[3]

Tensions Between AI Automation and Criminal Law

AI-mediated harm challenges each of these assumptions. Algorithmic systems do not “intend” outcomes in the human sense, nor do they possess awareness or moral understanding. Yet they generate decisions that can cause death, discrimination, or massive economic loss. When an autonomous vehicle kills a pedestrian, or a predictive policing algorithm disproportionately targets marginalized communities, the harm is real, but the mental element is elusive.[4]

Traditional criminal law responds poorly to this scenario. Prosecuting individual programmers often fails due to lack of direct intent or foreseeability. Corporate liability may dilute moral blame, while strict liability risks unjust punishment. This tension exposes a structural mismatch between nineteenth-century doctrines of culpability and twenty-first-century technological realities.[5]

[1] V.A. Tyrranen, Artificial Intelligence Crimes, 3(17) Territory Dev. 10 (2019), https://doi.org/10.32324/2412-8945-2019-3-10-13.

[2] Ibrahim Suleiman Al Qatawneh et al., Artificial Intelligence Crimes, 12 Acad. J. Interdisc. Stud. 143 (2023), https://doi.org/10.36941/ajis-2023-0012.

[3] Artificial Intelligence Crimes Internationally, 7 RIMAK Int’l J. Humans. & Soc. Scis. (2025), https://doi.org/10.47832/2717-8293.36.28.

[4] Id.

[5] Mabroka Abdalsalam Mhajer Aqrira, Criminal Liability for Artificial Intelligence Crimes, 2025 ARID Int’l J. Soc. Scis. & Humans. 1, https://doi.org/10.36772/arid.aijssh.2025.6151.