Abstract
Artificial intelligence (AI) technologies now occupy a central position in consumer markets, shaping decision-making in areas such as finance, healthcare, e-commerce, digital services, and smart products. While these systems promise efficiency, personalization, and enhanced consumer welfare, they simultaneously introduce risks that challenge the core assumptions of consumer protection and product liability law. This article critically re-examines how the legal notion of product defectiveness should be understood in the context of AI systems capable of autonomous and adaptive behaviour. It contends that defectiveness remains the primary doctrinal tool for allocating responsibility between producers and users, but that its interpretation must be recalibrated to reflect the distribution of control over, and knowledge of, AI-related risks. Owing to algorithmic opacity, continuous learning, and limited user oversight, AI systems disrupt traditional liability frameworks by concentrating risk awareness in developers and deployers while reducing consumers’ ability to detect or mitigate harm. The article analyses challenges relating to causation, evidentiary burdens, biased and discriminatory outcomes, and misleading AI practices. It advances a regulatory approach that combines existing consumer protection principles with AI-specific obligations, including risk-based regulation, transparency and explainability requirements, and strengthened mechanisms for consumer redress. A comparative assessment of the European Union and India illustrates emerging regulatory trends and remaining gaps. The article concludes that a redefined, control-sensitive standard of defectiveness is essential to ensure accountability, consumer autonomy, and sustainable innovation.
Keywords: Artificial Intelligence; Consumer Protection; Product Liability; Defective Products; Misleading AI; Algorithmic Accountability.
Introduction
Artificial intelligence has rapidly moved from experimental deployment to widespread integration within consumer-facing products and services. Automated systems now influence how consumers search for information, access credit, purchase goods, receive medical or financial advice, and interact with digital platforms. Algorithmic recommendation engines, virtual assistants, personalised pricing tools, and automated decision-making systems are increasingly embedded in everyday transactions.
From a regulatory standpoint, AI presents a dual narrative. On the one hand, it offers significant consumer benefits by reducing information costs, improving efficiency, and enabling tailored services. On the other hand, it generates complex risks that are not easily addressed by traditional legal frameworks. These include opaque decision-making, systemic bias, behavioural manipulation, and difficulties in identifying responsibility when harm occurs. The scale and speed at which AI systems operate amplify these risks and expose structural weaknesses in existing consumer protection regimes.
Consumer protection law has historically been designed to preserve consumer autonomy and ensure fair dealing in markets characterised by information asymmetry between traders and consumers. Product liability law complements this objective by allocating responsibility for harm caused by unsafe or defective products. Both frameworks are grounded in assumptions of human control, predictability, and the possibility of ex ante risk assessment. AI systems challenge these assumptions by functioning autonomously, evolving after deployment, and relying on complex data-driven processes that are largely inaccessible to consumers.
This article explores how consumer protection and product liability law should respond to AI-induced harm. It focuses on the legal concept of defectiveness as the primary mechanism for assigning liability and argues that defect standards must be reinterpreted in light of the asymmetries of control and risk awareness inherent in AI systems. By doing so, the law can better align liability with the parties best positioned to prevent harm.
AI refers broadly to computational systems capable of performing tasks that typically require human intelligence, including learning, pattern recognition, reasoning, and language processing. Advances in machine learning, access to large datasets, and improvements in computational power have enabled AI to move beyond laboratory settings into mass consumer markets.
AI-enabled products and services can enhance consumer welfare in multiple ways. Automated fraud detection systems improve financial security, recommendation algorithms assist consumers in navigating complex choices, and smart technologies increase convenience and efficiency in daily life. In principle, AI can support informed decision-making by processing information at a scale beyond human capacity.
Despite these advantages, AI introduces structural risks that distinguish it from traditional consumer products. Algorithmic systems may replicate or intensify social biases present in training data, leading to discriminatory outcomes. Personalisation techniques can be deployed to influence consumer behaviour in ways that undermine genuine choice. Moreover, AI systems often operate as ‘black boxes’, preventing consumers from understanding how outcomes affecting them are produced.
These risks are compounded by the adaptive nature of AI. Systems that continue to learn after deployment may change behaviour in ways not anticipated at the time of market entry, complicating safety assessments and regulatory oversight. When such systems cause economic or physical harm, consumers may find it difficult to identify the source of the problem or the party responsible.