Abstract
Generative artificial intelligence presents a governance challenge that no existing Indian statute fully addresses. This article examines the constitutional framework — particularly Articles 14, 19, and 21 — that must constrain any future regulatory intervention, surveys the fragmented statutory landscape comprising the IT Act, the DPDP Act 2023, and sector-specific directions, and proposes a risk-tiered legislative architecture appropriate to India’s constitutional order and developmental imperatives. It argues that a bespoke AI statute, rather than advisory-driven executive action, is both constitutionally necessary and normatively superior.
I. Introduction
The emergence of large language models and multimodal generative systems has precipitated what commentators have called an ‘information singularity’: an epoch in which artificial agents can produce text, code, imagery, and audio at scale, at near-zero marginal cost, and in ways that are frequently indistinguishable from human authorship.[1] For a constitutional democracy like India — one that simultaneously aspires to be a global technology hub and remains deeply committed to fundamental rights — the governance of generative AI raises questions that are as much juridical as they are technical.
India’s initial regulatory response was notable for its shortcomings. In March 2024, the Ministry of Electronics and Information Technology (MeitY) issued an advisory that effectively required intermediaries to obtain prior government approval before deploying ‘under-tested or unreliable’ AI models.[2] Lacking statutory backing, the advisory was withdrawn weeks later following industry opposition. This episode illustrates the risks of governing emerging technologies through ad hoc executive action rather than deliberate legislation.
India has set out policy goals through the National Strategy for Artificial Intelligence and NITI Aayog’s ‘Responsible AI for All’ framework, which focus on inclusivity, safety, and accountability.[3] Yet neither possesses legal force. The critical question is therefore: what regulatory architecture can govern generative AI in India that is simultaneously effective, constitutionally compliant, and conducive to innovation? This article attempts a structured answer.
II. The Existing Statutory Landscape
A. The Information Technology Act and Intermediary Rules
The principal instruments governing digital conduct in India remain the Information Technology Act 2000 and the Intermediary Guidelines and Digital Media Ethics Code Rules 2021 framed thereunder.[4] These instruments, designed primarily for user-generated content platforms and e-commerce intermediaries, are ill-suited to generative AI. The safe harbour under Section 79 — which exempts intermediaries from liability for third-party content where they exercise due diligence — presupposes that unlawful content is uploaded by users, not generated autonomously by the platform’s own system. When a generative model itself produces defamatory, obscene, or otherwise unlawful output, the conceptual foundation of intermediary liability — that the intermediary is a passive conduit — no longer holds, and the doctrine requires fundamental reassessment.
[1] Anthropic, ‘Claude Model Card’ (2023); OpenAI, ‘GPT-4 Technical Report’ (2023). Large language models can generate coherent text, code, images, and audio at human-level fluency, fundamentally altering the information landscape.
[2] Ministry of Electronics and Information Technology (MeitY), ‘Advisory on Due Diligence by Intermediaries / Platforms’ (March 2024). The advisory, subsequently withdrawn, required prior government approval for deployment of ‘under-tested’ AI models.
[3] MeitY, ‘National Strategy for Artificial Intelligence’ (NASSCOM-DSCI, 2018); NITI Aayog, ‘Responsible AI for All’ (2021).
[4] Information Technology Act 2000, No. 21 of 2000 (India), ss 66A (struck down), 69A, 79; Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021.