Abstract
The recent rise of generative AI tools capable of producing hyper-realistic synthetic media, or “deepfakes”, has created a set of legal and regulatory challenges owing to their potential for misuse: impersonating another person’s identity (creating fake news), sexual exploitation (creating child sexual abuse material), fraud (scamming people) and misinformation (political or social manipulation). The recent controversy over the alleged use of Grok AI illustrates how such tools can amplify the spread of harmful content by generating it at scale. These events have also raised questions about accountability in a distributed technology ecosystem.
This article examines the limitations of existing Indian legal mechanisms in providing remedies to those harmed by deepfakes, and explains why those mechanisms fall short. India has relevant provisions spread across several instruments, including Article 21 of the Constitution of India, the Information Technology Act, 2000, the Intermediary Guidelines, the Digital Personal Data Protection Act, the Bharatiya Nyaya Sanhita and the Copyright Act. However, these instruments offer only fragmented relief: they are primarily reactive (providing remedies after harm has occurred) and content-focused (addressing only the creation and distribution of harmful content). Deepfakes differ from traditional forms of harm in that they involve autonomous systems, scalable creation and probabilistic distribution. These qualities make them a poor fit for the conventional scheme of determining who is responsible for the harm in the first place, and who in the chain of responsibility, among developers, deployers/platform owners and users, should be held liable.
This article argues that fault-based models, which depend on proving human intention, are inadequate for evaluating distributed AI systems. Instead, it proposes a framework of Shared and Tiered Liability, incorporating a statutory duty of care, transparency obligations and a risk-based classification of AI systems. Accordingly, this paper calls for a stand-alone AI liability regime in India that shifts from a punitive, post-harm approach to a more effective regime of preventive governance, ensuring accountability while promoting responsible innovation.
Keywords
Deepfakes; Artificial Intelligence; Legal Liability; AI Governance; Intermediary Liability; Digital Regulation; Synthetic Media; Shared Liability Framework
INTRODUCTION
AI-generated synthetic media (e.g., video, audio, photo), known as deepfakes, have enabled new forms of entertainment and communication that are so realistic they cannot be distinguished from genuine recordings. Because they are produced using generative AI technology, which blurs the line between the real and the fabricated, they can be created quickly and at scale.
The controversy surrounding Grok AI has highlighted the risk of generative AI being used to create misleading and manipulated outputs, and how such outputs can scale deception and harm by producing content that appears authentic. It has also raised significant questions about the responsibility of platforms: their duty to control misleading and manipulated content hosted on their services, their obligation to build algorithmic safeguards against synthetic content, and the speed with which such content can reach a widespread audience before it is stopped. Most importantly, it has exposed uncertainty over who, if anyone, is liable when synthetically generated misleading and manipulated content causes reputational, psychological or social harm.
India lacks a comprehensive legal framework that specifically addresses the use of artificial intelligence to create misleading and manipulated media, or that provides clear liability standards. Existing laws address harmful content only after it has been disseminated and do not account for the autonomous and scalable creation of synthetic media. This article posits that deepfakes have revealed a structural gap in traditional liability doctrine, and asserts that India needs a dedicated legal framework to address AI-related harms in a coherent and accountable manner.