
International Journal of Advanced Legal Research [ISSN: 2582-7340], Volume 6, Issue 4

ARTIFICIAL INTELLIGENCE IN BOARDROOM DECISION-MAKING: REASSESSING DIRECTORS' FIDUCIARY DUTIES UNDER CORPORATE LAW – Ansh Tripathi

Abstract

The rapid integration of Artificial Intelligence (AI) into corporate governance has fundamentally changed how boardroom decisions are made. AI-powered systems, such as predictive analytics, risk assessment tools, and strategic decision-support platforms, increasingly shape top-level corporate decisions. Although these technologies improve efficiency and data-driven accuracy, they also strain the traditional framework of directors' fiduciary duties, which rests on human judgment, independence, and accountability.

This paper examines the legal implications of AI-assisted board decisions through the lens of directors' fiduciary duties: the duty of care, the duty of loyalty, and the duty of oversight. It considers how reliance on algorithmic systems affects the standard of the reasonable director and asks to what degree the Business Judgment Rule ought to protect AI-influenced decisions. It also examines the risks arising from algorithmic opacity, bias, vendor dependence, and data manipulation, as well as the inadequacy of current doctrinal frameworks to address them.

Adopting a doctrinal and comparative approach, the paper then assesses regulatory developments in major jurisdictions, including the United States, the United Kingdom, the European Union, and India. It argues that, although fiduciary obligations remain theoretically sound, their application must be recalibrated for algorithmic governance. Specifically, the paper proposes recognizing an AI-informed duty of care that emphasizes informed reliance, active oversight, and accountability.

The paper concludes that AI does not diminish fiduciary responsibility but heightens the need for systematic governance structures, greater transparency, and regulatory clarity. By adapting fiduciary norms to the realities of AI-driven decision-making, corporate law can ensure that technological innovation does not become decoupled from the values of accountability and responsible governance.

Introduction

The rapid development of Artificial Intelligence (AI) has already begun reshaping the principles of corporate governance and decision-making in the boardroom[1]. Conventionally, corporate boards have relied on human experience, managerial expertise, professional consultants, and careful deliberation in making strategic and operational decisions. Directors are expected to assess risks, analyze financial information, oversee compliance, and chart long-term corporate strategy through informed and independent decisions. Nevertheless, the growing use of AI-based applications, from predictive analytics tools and risk-modeling systems to automated compliance solutions and algorithmic forecasting, has radically reshaped this informational environment[2].

AI systems can process volumes of data at speeds and levels of complexity beyond human capability. They can identify patterns, model outcomes, measure market risks, and generate recommendations that influence major corporate decisions such as mergers and acquisitions, investment policies, executive compensation programs, and ESG strategies[3]. As corporations incorporate these technologies into governance procedures, AI is no longer merely a matter of operational management but part of the deliberative functions of boards.

While this technological integration promises greater efficiency, objectivity, and data-driven precision, it also raises serious legal issues. Corporate law subjects directors to fiduciary obligations, chiefly the duties of care, loyalty, and oversight, in order to hold them accountable and ensure responsible governance. These obligations rest on human judgment, good faith, independent action, and informed decision-making[4].

The tension between algorithmic assistance and classical fiduciary rules lies at the center of this inquiry. AI systems may be deployed as consultative instruments, yet they can strongly shape strategic decisions. Accountability is further complicated by opaque algorithms, hidden bias, cybersecurity breaches, and dependence on third-party vendors. When an AI-influenced decision results in corporate harm, it is legally difficult to establish whether directors have met their fiduciary duties. Fiduciary doctrine was developed in an era when all corporate actors were human, and it does not expressly address algorithmic involvement in governance processes.

This paper contends that AI does not destroy or diminish fiduciary duties but changes the context in which they apply. The introduction of AI into boardroom deliberations requires re-evaluating how directors discharge their duties of care, loyalty, and oversight. Specifically, fiduciary norms must be updated to incorporate responsible technological reliance, knowledgeable oversight of algorithmic systems, and stronger governance accountability. Directors should not delegate judgment to the algorithm; they must ensure that AI remains a tool that augments, rather than supplants, independent and reasoned judgment[5].

[1] Erik Brynjolfsson & Andrew McAfee, The Second Machine Age (2014).

[2] Ajay Agrawal et al., Prediction Machines: The Simple Economics of Artificial Intelligence (2018).

[3] W. Bradley Wendel, The Promise and Limitations of Artificial Intelligence in the Practice of Law, 72 Okla. L. Rev. 21 (2019).

[4] Dana Remus & Frank Levy, Can Robots Be Lawyers?, 30 Geo. J. Legal Ethics 501 (2017).

[5] World Economic Forum, Corporate Governance in the Fourth Industrial Revolution (2019).