ABSTRACT
India is facing a major challenge with the advent of deepfake technology. Deepfakes are hyper-realistic audio and visual content created through Generative Adversarial Networks (GANs) and advanced machine learning algorithms.[1] These technologies have raised several concerns within the Indian criminal justice system with respect to electoral fraud, loss of reputation, and the safety of victims, particularly those targeted with non-consensual sexual deepfakes.[2] Criminal proceedings in India, however, remain governed by the Indian Evidence Act (IEA) of 1872, a statute more than 150 years old that establishes the guidelines for dealing with evidence in criminal trials.[3] The current provisions of Section 65B of the IEA, drafted with routine electronic records in mind, are ineffective when dealing with electronic evidence created through technologies such as deepfakes, which mimic human voices and physical characteristics with a precision that is virtually undetectable.[4] In this article, the author seeks to demonstrate, based on an analysis of current evidence law in India, that a comprehensive framework for authenticating, evaluating, and admitting potentially deepfaked evidence in criminal trials must be created. To do this, the author compares the proposed Federal Rule of Evidence 901(c) and the NO FAKES Act (NFA) in the United States, the European Union's AI Act (2024), and emerging judicial decisions issued by Indian courts on this issue (specifically, the Delhi High Court injunctions issued in May-July 2025). The author argues that the best approach to handling alleged deepfakes in criminal proceedings is a hybrid authentication standard that combines traditional methods of authentication with forensic computer analysis, together with a burden-shifting mechanism under which a credible challenge obliges the proponent of contested evidence to prove its authenticity.[5] Finally, the author proposes that specialized procedural guidelines for the introduction and use of deepfake-affected evidence in criminal trials be developed and made widely available to the public.

Keywords: Deepfakes, evidence authentication, AI-altered media, criminal procedure, Indian Evidence Act, digital forensics, burden-shifting, deepfake detection, fair trial rights, comparative evidence law.
INTRODUCTION

The rapid pace of technological advancement in artificial intelligence has resulted in the development of synthetic media, also known as 'deepfakes,' capable of creating or altering video and audio content with a precision that was previously impossible. The core of any deepfake is built using a class of machine learning algorithms called Generative Adversarial Networks (GANs): a generator network learns from existing data to produce new media, while a discriminator network learns to distinguish that output from genuine content, and the two are trained against each other until the generated media cannot easily be differentiated from the original. In India, where there is now a massive digital footprint with over 850 million internet users,[7] deepfakes have already been used for political manipulation, harassment, bullying and defamation of individuals, and sexual exploitation.[6] A simplified sketch of the adversarial training loop appears below.
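To make this adversarial mechanism concrete, the following minimal sketch (Python with PyTorch) shows the core GAN training loop. It is illustrative only: the toy one-dimensional real_batch distribution, the network sizes, and the learning rates are assumptions standing in for the image and audio pipelines of real deepfake systems.

    # Minimal GAN training loop (PyTorch). A toy 1-D distribution stands
    # in for real images/audio; all hyperparameters are illustrative.
    import torch
    import torch.nn as nn

    LATENT_DIM, DATA_DIM, BATCH = 8, 16, 64

    # Generator: maps random noise to synthetic samples.
    G = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
    # Discriminator: scores a sample as real (1) or generated (0).
    D = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    def real_batch():
        # Stand-in for genuine media: samples from a fixed Gaussian.
        return torch.randn(BATCH, DATA_DIM) * 0.5 + 2.0

    for step in range(1000):
        # Discriminator step: learn to separate real from generated media.
        fake = G(torch.randn(BATCH, LATENT_DIM)).detach()
        loss_d = bce(D(real_batch()), torch.ones(BATCH, 1)) + \
                 bce(D(fake), torch.zeros(BATCH, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: learn to make the discriminator call fakes "real".
        loss_g = bce(D(G(torch.randn(BATCH, LATENT_DIM))), torch.ones(BATCH, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

The evidentiary point of this loop is that the generator is optimized precisely to defeat its own detector, which is why visual inspection alone cannot be trusted to expose fabrication.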
The law of evidence in India's criminal justice system rests on the Indian Evidence Act of 1872, a statute built on concepts and principles conceived long before internet technology became commonplace. There are consequently no coherent legal standards in place for evaluating synthetically altered evidence. High courts across the country have nonetheless begun to intervene in ways that reflect the urgent need for such standards. Examples include a May 2025 John Doe injunction issued by the Delhi High Court against fraudsters misusing the likeness of entrepreneur Ankur Warikoo in deepfake scams, and July 2025 orders from the same court directing Meta and X to take down AI-generated obscene images. These civil remedies, however, mask a critical evidentiary vacuum: Indian courts have no reliable standard for determining whether an audio-visual exhibit offered as evidence in a criminal trial is authentic, altered, or entirely fabricated.

The Indian Evidence Act's primary mechanism for admitting electronic evidence, the Section 65B certificate, requires certification by a person occupying a responsible position in relation to the operation of the relevant electronic system. This provision was designed for computer-generated records (bank logs, server records, system timestamps) whose reliability inheres in mechanical, non-interpretive processing. Deepfakes present an entirely different evidentiary problem: the alleged evidence is itself the artifact of generative AI, not a byproduct of routine system operations. Requiring a Section 65B certificate for deepfaked video evidence, which may have been created by a bad actor specifically to deceive, does not address the fundamental question of whether the media depicts real events or is synthetic.

This paper advances three central arguments. First, Indian evidence law requires doctrinal reformation to distinguish between authentication (establishing chain of custody and mechanical integrity) and authenticity verification (determining whether the media is genuine or synthetically fabricated). The current framework elides this distinction, creating the risk that AI-altered evidence passes authentication yet corrupts the trial's truth-seeking function.

Second, a hybrid authentication standard combining traditional authentication, forensic AI analysis, and burden-shifting mechanisms offers a pragmatic path forward. Inspired by the U.S. Advisory Committee on Evidence Rules' proposed Rule 901(c) and adapted to Indian procedural contexts, this approach requires: (1) traditional authentication under Sections 65A-65B; (2) if the evidence is challenged, a preliminary showing by the opponent that it could be AI-generated; (3) once a credible challenge is made, a demonstration by the proponent, on a preponderance of probabilities and using forensic and contextual evidence, that the evidence is authentic; and (4) heightened corroboration requirements for convictions resting substantially on contested digital evidence. A schematic sketch of this four-stage workflow appears at the end of this introduction.

Third, comprehensive procedural and institutional reforms, including specialized protocols for suspected deepfakes, court-linked digital forensic laboratories, and revised practice directions, must accompany doctrinal change; evidence law alone cannot solve an institutional capacity problem. The paper further addresses deepfake liability (Sections 66E, 66D, and 67 of the Information Technology Act, 2000; Sections 111, 336, 353, and 356 of the Bharatiya Nyaya Sanhita, 2023) and proposes a dedicated statutory framework for deepfake cyber-harassment offences, informed by a comparative study of the EU AI Act and the U.S. NO FAKES Act.
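To illustrate how the four stages of the proposed standard interact, the following plain-Python sketch encodes the decision logic schematically. Every name here (Exhibit, admissibility, and the 0.5 threshold standing in for "preponderance of probabilities") is a hypothetical rendering of the framework described above, not a suggestion that admissibility determinations can be mechanized.

    # Schematic model of the proposed four-stage hybrid authentication
    # workflow; names and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Exhibit:
        s65b_certificate: bool            # Stage 1: Sections 65A-65B authentication
        ai_challenge_raised: bool         # Stage 2: opponent's preliminary showing
        forensic_authenticity: float      # Stage 3: proponent's showing, 0.0-1.0
        independently_corroborated: bool  # Stage 4: needed if conviction-critical

    def admissibility(e: Exhibit, conviction_critical: bool) -> str:
        if not e.s65b_certificate:
            return "inadmissible: fails Section 65B authentication"
        if not e.ai_challenge_raised:
            return "admissible: authentication unchallenged"
        # Burden shifts to the proponent after a credible AI challenge;
        # "preponderance of probabilities" is modeled here as > 0.5.
        if e.forensic_authenticity <= 0.5:
            return "inadmissible: authenticity not established after challenge"
        if conviction_critical and not e.independently_corroborated:
            return "admissible, but insufficient alone to sustain conviction"
        return "admissible"

    print(admissibility(Exhibit(True, True, 0.8, False), conviction_critical=True))

The point of the sketch is structural: authentication, challenge, burden-shift, and corroboration operate as distinct gates, whereas the current Section 65B regime collapses them into a single certification inquiry.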
[1] Ian Goodfellow et al., Generative Adversarial Nets, Advances in Neural Information Processing Systems (2014).
[2] Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. 1753 (2019).
[3] Ratanlal & Dhirajlal, The Law of Evidence (LexisNexis, 27th ed.).
[4] Indian Evidence Act, 1872, § 65B.
[5] Yisroel Mirsky & Wenke Lee, The Creation and Detection of Deepfakes, 54 ACM Computing Surveys (2021).
[6] Europol Innovation Lab, Facing Reality? Law Enforcement and Deepfakes (2022).
[7] Telecom Regulatory Authority of India, Indian Telecom Services Performance Indicators (latest).