POTENTIAL WARFARE: THE OPTICS AND ADVERSITIES OF AI IN THE MILITARY ON THE INTERNATIONAL PLANE by Jatin Karela & Yaksh Bhakhand1

ABSTRACT

In recent years, the globe has seen a huge increase in the employment of Artificial Intelligence in various fields. As a result, a discussion has erupted about the deterministic and possibly transformational impact of such automated technology in the realms of military power, strategic competitiveness, and global politics. Since automation of military power involves cross-border interaction and is not an issue confined within the territory of a single country, public international law (PIL) comes into play. How does PIL react to it? Is there any governance of the inclusion of automated technology in warfare? What happens if such advanced technology breaches PIL? Who is liable for the breach?

Following the initial wave of broad PIL speculation about AI, this research provides some much-needed specifics on the aforementioned issues. It contends that, if left unaddressed, the vulnerabilities caused by AI's rapid growth and diffusion in warfare might become a major source of instability in the global order and the international regime as a whole. The article surveys the laws governing digital and technological advances that are most likely to have real-world implications for military applications.

INTRODUCTION

"One bad algorithm and you're at war."

Sea Hunter, a sleek, 132-foot-long catamaran that one viewer readily identified as "a Klingon bird of prey," was launched in April 2016 by the US Navy and the Defense Advanced Research Projects Agency (DARPA). More surprising than its looks, however, is the size of its permanent crew: zero. The unveiling of Sea Hunter, and the development of the operating systems that will allow it to function autonomously for long periods on the high seas, is the result of a determination by senior Navy and Pentagon authorities to reinvent the future of maritime forces.

The design and implementation of autonomous weapons, also known as "killer robots," offers undeniable combat advantages. Because they are relatively inexpensive and can operate 24 hours a day without tiring, robotic warriors could significantly reduce casualties. Furthermore, when equipped with advanced sensors and artificial intelligence (AI), they could overwhelm defenders and allow for a quick victory.

However, using AI in war would significantly reduce human control over military operations, could result in violations of the laws of war, and would weaken the barriers preventing escalation from conventional to nuclear warfare. Will killer robots be able to distinguish between enemy combatants and civilian passers-by in a densely populated battle zone, as international law requires? Could a wolfpack of unmanned sub-hunters, hot on the trail of an adversary vessel carrying nuclear-armed missile systems, provoke the commander of that warship to fire its weapons rather than lose them in a feared US pre-emptive attack?

The United States, Russia, and a number of other nations have expedited their development of such weapons, asserting that they must do so to prevent their foes from gaining an unfair edge in these innovative modes of war. The present research first puts forward a picture of the development of AI. The next section deals with the concerns it has raised on the international plane and the challenges AI poses to international law, and the paper closes with the authors' concluding remarks on this debate. The paper also addresses some of the uncertainties surrounding autonomous weapons, e.g. their meaning and raison d'être, and ultimately proposes a modality and framework in the context of the use of force by states.

AUTOMATIZED ARTIFICIAL INTELLIGENCE

Unlike the traditional mechanized weapons employed by countries for both offensive and defensive operations, such as Inter-Continental Ballistic Missiles, Lethal Autonomous Weapon Systems (L.A.W.S.) are designed to engage targets and effectuate deployment with no active human intervention. These L.A.W.S. are based on artificial intelligence that can use data to predict human behaviour, procure intelligence, conduct surveillance, distinguish among potential targets, and discharge tactical decisions in virtual combat.2 This autonomous nature raises multiple concerns about the reliability of these weapons, especially in the case of offensive strikes, where the automated nature of the engagement could result in lethal destruction of the life and property of both the civilian population and the military personnel of other countries, essentially translating into an act of war against the other.

Advancements in artificial intelligence (AI) have entered weapon design and unconventional warfare, as well as many parts of an individual's daily life. Software that operates robotics and software that assists decision-making processes linked to targeting are among the rapidly growing list of military uses of AI. The use of AI to perform activities traditionally handled by humans might radically alter how wartime choices to kill, hurt, destroy, or damage are made. The major problem is the lack of human control over these judgments, and the resulting unpredictability of results, which poses distinct legal and ethical issues.

Now, the military's use of AI is becoming more prevalent, and the transition to completely autonomous robots designed for battle is now possible; however, this increased use of AI generates what are known as "frontier risks." Frontier risks are low-probability, high-impact threats that humans may encounter as they explore new areas, whether technologically, ecologically, or territorially.3 The frontier risk associated with L.A.W.S. is illustrated by the metaphor of "black swans," a term used to describe high-impact events that are virtually impossible to predict. The full military adoption of autonomous weaponry thus poses a number of hazards, including catastrophic repercussions from army raids and an uncertain future for humanity in the age of machine sentience.4 The critical issue surrounding the operation of L.A.W.S. is their unpredictability. Not all of the results of deploying an autonomous weapon can be foreseen with a fair level of certainty. The weapon's design or the interplay between the system and the environment of operation may cause this unpredictability. As weapon systems become more sophisticated or are given increased flexibility of action in their missions, foreseeing consequences may become more difficult, and outcomes therefore less predictable.5 Uncertainty regarding how a weapon would function in the field impairs the capacity to conduct a legal assessment, since it makes it hard for the reviewer to establish whether the weapon's use would be forbidden by any international norm, or whether its malfunction would directly attribute liability to the host State.
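The interplay between a system and its operating environment can be made concrete with a toy sketch. The following minimal Python example, using entirely hypothetical data and thresholds, shows how a decision rule that looks reliable in the environment it was tuned for can fail unpredictably once that environment shifts, which is precisely the difficulty a legal reviewer faces when trying to certify an autonomous weapon in advance.

```python
# A toy sketch (not a real weapon-review tool) of environment-dependent
# unpredictability. All data, thresholds, and scenarios are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# "Training" environment: hypothetical target emissions cluster around +2,
# decoy emissions around -2 (a single 1-D sensor feature).
targets = rng.normal(loc=2.0, scale=1.0, size=1000)
decoys = rng.normal(loc=-2.0, scale=1.0, size=1000)

# The "learned" decision boundary: midpoint of the two observed clusters.
threshold = (targets.mean() + decoys.mean()) / 2

def classify(signal):
    """True means the system would decide to engage."""
    return signal > threshold

# In the environment it was tuned for, the rule looks highly reliable.
accuracy = (classify(targets).mean() + (~classify(decoys)).mean()) / 2
print(f"accuracy in training environment: {accuracy:.3f}")  # roughly 0.98

# A shifted deployment environment (weather, countermeasures, jamming)
# moves decoy emissions upward; the frozen rule now engages decoys often.
shifted_decoys = rng.normal(loc=1.5, scale=1.0, size=1000)
print(f"false engagement rate after shift: {classify(shifted_decoys).mean():.3f}")
```

The code itself never changes between the two runs; only the environment does, which is why testing in one setting cannot by itself guarantee lawful behaviour in another.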

These L.A.W.S. are equipped to minimize human intervention and to utilize artificial intelligence in a manner that achieves precision. However, the degree of automation conceded has resulted in differing approaches to inquiring into and attributing the liability of a host state under international norms, especially when international law is silent on the issue.

Extent of autonomy

The distinction between autonomous and non-autonomous weapons is not as apparent in military weapons research as it is in other fields. The logical standard implied in the idea of autonomy varies greatly across researchers, states, and organizations, and in determining the liability arising from these weapons, their autonomous nature has to be examined.

Certain scholars such as Heather Roff have described L.A.W.S. as being capable of learning and adjusting their behaviour while adapting to shifting circumstances in the environment in which they are deployed, as well as being prepared to make offensive judgements on their own.6 Here, the words "learning… adjusting… adapting" lay down a high threshold for constituting a system as an autonomous defence or offence system. However, other scholars such as Prof. Peter Asaro consider any weapon system capable of discharging lethal force without the intervention, decision, or approval of a human supervisor to be autonomous. On this view, the term "autonomous" refers to a L.A.W.S. that operates partially or entirely without human involvement.7 Further, a L.A.W.S. need not be able to make judgments entirely on its own to come within the ambit of "autonomous." Instead, it should be considered autonomous if it participates in one or more of the "preparation phases" from recognizing the target to shooting.8

However, the domestic understanding of autonomous weapons as shaped by the Ministry of Defence, United Kingdom, is based on a narrow reading of "autonomous," possibly to place truly "autonomous" action by a L.A.W.S. outside the definition. Per the MoD, U.K., autonomous systems are those capable of comprehending higher-level intent and direction; such a system can take the measures necessary to achieve a desired result based on its comprehension and perception of its surroundings. It can choose a course of action from a variety of options without relying on human monitoring and control, although human interaction with the system may still be present. Thus, while an unmanned aircraft's overall activities will be predictable, individual acts of the system may not be.9

Therefore, a consensus on the definition of "autonomous weapons" is required, as the narrower or broader threshold for bringing an action within the unpredictability element of a L.A.W.S. could determine the liability of the host state with respect to a malfunction of the weapon in question.

Arguing over the delegation to ‘automated intelligence’

In the argument over the militarization of AI technology, there are some who believe that autonomous weapons or robots can function more efficiently, more effectively, and with greater compliance than humans. They would not be motivated by the "concept of self-preservation," making them more precise than humans. Furthermore, these robots are devoid of the impulses, such as paranoia, hysteria, fear, aggressiveness, and prejudice, that can negatively influence human judgement and lead to unwanted and unwarranted actions; humans, by contrast, are more prone to "cognitive fallacies," whereby decisions can be "cherry-picked" on the basis of preconceived notions, resulting in flawed decisions and extended collateral damage.10

A modern AI would not act arbitrarily or base its conclusions on ill-informed sources, because it would be equipped with unprecedented integration, allowing it to collect data from multiple sources, analyze the situation holistically, and then deploy force, a process impossible for humans to match, particularly in terms of speed. It has been suggested that extensive study be conducted and a compromise reached on a single ethical paradigm to be applied to these robots, so that they behave ethically throughout their missions. However, given the diversity of cultural approaches to ethics and the variety of moral and ethical theories, this can be challenging.11

An alternative school of thought views the complete delegation of command of L.A.W.S. for decision-making to computers as problematic, arguing that robots should never be allowed to violate the sanctity of life. Loss of control over the machines would be disruptive and undesirable, since there would be a higher risk of accidental escalation, an inability to halt an attack in time, and so on. There are also one-of-a-kind scenarios that might emerge at any time, necessitating human judgement to change plans and adapt to them, which pre-programmed machines would lack.12

Human intervention does not mean the mechanical carrying out of orders; the intent of superiors is also weighed as objectives are carried out. In certain situations such orders may even have to be flouted in the broader interests of equity and humanity, as at the Nuremberg trials, where military commanders were not excused on the plea that the actions of the German forces were the result of orders received from Hitler. In the case of AI, it may well be that in the early phases of its development it is incapable of acting on notions of justness and morality, and that it could go to any extent to achieve an "immoral" and irrational objective.

Thus, although extensive study has been conducted and proponents of these two opposing paradigms have presented numerous arguments, the world has yet to reach a consensus on either of them. An attempt is made in this paper to provide a general framework with regard to the status of L.A.W.S.

NORMS OF INTERNATIONAL LAW WITH GROWING AI

The post-World War II and Cold War generations saw the rise of a sophisticated international legal regulatory regime.13 International law is a legal order meant to structure the engagement among the actors involved in it and to shape world affairs.14 Especially considering its dearth of regulatory and compliance frameworks, a few authors believe that international law is not "law" per se,15 yet others argue that most countries recognise almost all the rules and standards of international law crafted to date, most or all of the time.16 From the period of its codification, international law has aided in the preservation of peace and harmony, the resolution of inter-state socioeconomic competing claims, and the protection of the global society's needs and desires.17

Like the global arms race or global warming, AI is a concern that affects the entire world.18 Whether in monetary systems, conflict scenarios, online networks, or data collection, AI increasingly crosses, and will continue to penetrate, national boundaries; as Erdélyi and Goldsmith point out, purely domestic responses to this rapidly growing challenge may well conflict with one another and cause more trouble than they solve. Moreover, isolated regional or institutional solutions to the emerging research and supervisory difficulties caused by AI may be too quick on the trigger, neglecting protective gains in the race to be first to set a new standard in automated processes.19 The advancement of AI offers an opportunity for international law to cover the gaps that domestic legislation cannot.

An issue stems from the assertion that international law is not law in the conventional sense of domestic law, wherein the sovereign establishes a set of rules which all members of the community must follow.20 The states that engage in international legal frameworks are sovereign in their own right.21 The key questions are: under which conditions do nations demonstrate compliance with international law and its obligations, and how can such broad adherence be used to build a global governance and regulatory framework for AI advancement in the militaries of states?

States have already been employing artificial intelligence weapon systems in military conflicts, changing the scope of battle from a purely human venture. In Afghanistan, for example, the US deployed SWORDS (Special Weapons Observation Reconnaissance Direct Action System) robotic arms to pinpoint and neutralise improvised explosive devices. Those robots, however, have limited powers and need to be directed and controlled by humans. The Republic of Korea likewise employs a sentry robot, the SGR-1, on its frontier with the Democratic People's Republic of Korea, but it too has constrained powers. Comprising an infrared satellite, a communications satellite, and a battery of interceptors, the Terminal High Altitude Area Defense (THAAD) system was designed to independently detect, engage, and dismantle short- and mid-range ballistic missiles. Similarly, the US is working on the Perdix program, which aims to produce swarms of armed but unmanned aerial vehicles that can reorganise themselves when any drone or drone section is lost.22

Challenges to International Law with growing AI

Article 36 of Additional Protocol I stipulates that all newly designed weapons must be reviewed to ensure that they do not contravene international law. Intelligent machines present major threats to international law in terms of personal responsibility and attribution.23

Combatants and civilians must always be distinguished under the principle of distinction, and only the former may be targeted.24 Whether machines lacking human judgement and reason can operate in accordance with this principle is subject to debate. Under the basic premise of proportionality, the loss of civilian life and injury to civilians must not be excessive in comparison to the anticipated military advantage.25 Determining whether fully autonomous weapons can comply with these principles will require thorough examination by international legal professionals, researchers, policy experts, and statisticians.

Besides this, there are grave concerns that rogue nations, radical groups, and terrorist organisations could use this new technology covertly, endangering lives. There is no official body on the international plane that explicitly regulates the development and spread of these autonomous weapon systems.

The main issue is attribution and accountability, as it remains ambiguous who will be held legally responsible if, for example, a private citizen is mistakenly targeted by a fully autonomous L.A.W.S. To establish a government's fault, attribution affirms that an act regarded as internationally wrongful originates from a certain state. No state is allowed to avoid responsibility or accountability under the guise of complete weapon autonomy. The fear is that nations will refuse to recognize that the deeds of autonomous weapons are attributable to them and will thereby evade their obligations. Human supervision can and should guarantee the authority to inhibit the weapon's functioning before something goes horribly wrong, so that countries that break international law can always be held liable.26

Due to the above hurdles, some countries, e.g. Pakistan, have begun to call for a moratorium on the production of L.A.W.S. Pakistan argued before the United Nations General Assembly's First Committee on Disarmament and International Security that any weapon which delegates life-or-death choices to automated systems is by definition ethically wrong and can never be in conformity with international law, including International Humanitarian Law and Human Rights Law.27

Absence of International Law on Artificial Intelligence

The introduction of automated weapons is a recent innovation, and it remains to be seen whether laws, if any, will emerge to monitor and control these weapons, as well as how already existing law can adapt and change. The Group of Governmental Experts (GGE), constituted under the conference of the parties to the Convention on Certain Conventional Weapons, is conducting wide-ranging dialogues and making commendable efforts.28 The GGE is expected to serve as a springboard for the development of international law, and of an international organization to monitor and control lethal autonomous military machinery. States remain the traditional subjects of and actors in international norms and standards, and the innovators of this advanced technology, especially the relevant global superpowers, are hesitant to regulate it. This is true within the United Nations as well, which functions under their power of veto.29

Artificially intelligent systems should be regulated and governed by a unified global governing authority. This blueprint should confront the degree of autonomy granted to such systems, as well as the question of who will be held liable in the event of a breach. Anything other than consciously human-controlled units capable of navigating, detecting, and engaging a target ought to be prohibited. The same insight was developed at the 2013 meetings of the Group of Governmental Experts. It would also allay concerns that living beings would be left at the tender mercies of machines in a fully autonomous, machine-conducted armed conflict. A machine may assist in the decision to attack an individual, but that decision-making authority cannot be completely transferred to it. To enhance transparency, guidelines have to be precise; meaningful control is required, and excessive or unjustifiable capabilities ought not to be ceded to a machine, in order to keep accountability intact.

Methodologies to control

Because it has become clear that advancements in artificial intelligence could well enable the mobilisation of ever more unmanned weapon systems, and that major nations will continue to harness these advances in pursuit of military superiority, analysts in the international security and civil rights communities, joined by sympathetic dignitaries and others, have begun seeking to devise plans and controls for governing or outlawing such systems.30 Nations signed a treaty in 1980 that constrains and bars the use of certain kinds of weapons. This Convention on Certain Conventional Weapons (CCW) provides that weapons perceived to inflict needless harm on combatants or to harm civilians indiscriminately shall be duly regulated. These nations then formed a group of regulatory experts to appraise the damage wrought by fully autonomous weapon systems and to consider viable approaches. A few authorities have attempted to resolve these issues on their own, while civil society has also become involved.

A few fairly straightforward approaches to curtailing such systems have emerged. The first and clearest step would be the adoption of an internationally legally enforceable moratorium, under the CCW, on the development, deployment, and use of wholly autonomous weapon systems. Like the 1995 ban on blinding laser weapons and the 1996 protocol curtailing the use of mines, booby traps, and similar devices, such a prohibition could take the form of a new CCW protocol, an instrument devised to confront specific weapons not conceived in the original convention.31 Two dozen countries, endorsed by human rights organizations such as the Campaign to Stop Killer Robots, have already called for the negotiation of a new CCW protocol that would prohibit truly autonomous military hardware from ever being used.

Supporters of the measure argue that a blanket prohibition is the only way to keep unwarranted problems from escalating and the only way to avoid eventual infringements of international human rights law and international norms. Opponents claim that fully autonomous military hardware can be built intelligently enough to counteract apprehensions about international norms, so no restrictions should be placed on its evolution. Because the consultations of CCW states parties are guided by consensus, a few nations with advanced robotic arsenals, including Russia, the United Kingdom, and the United States, have thus far blocked consideration of such security rules. A further recommendation, made at the experts' meetings by delegates from France and Germany, is to adopt a political declaration proclaiming the principle of direct human authority over military-grade weapons, along with a purely symbolic code of ethics. Such an instrument, which could come in the form of a UN General Assembly resolution, might impose human control over fully autonomous weapons at all times in order to confirm adherence to the laws of armed conflict and international human rights law, and to provide a certain level of assurance. The code could hold states accountable for any malfeasance committed with wholly autonomous weapon systems in combat, and could require that all such weapons be overseen by humans able to deactivate the machine if it goes haywire. States might also reasonably be required to subject envisaged autonomous arms to extensive pre-deployment diagnostics to determine whether they satisfy specified requirements.32

Those who favour a legally enforceable ban under the CCW argue that the softer option would fail to stop an arms race in entirely unmanned systems and would permit a few nations to deploy weapons with hazardous and uncontrollable abilities. Others conclude that a blanket prohibition may never be achievable and contend that a loosely worded measure of this type is the best choice available at present.

Another strategy that is gathering steam places a concentrated emphasis on the ethics of deploying fully autonomous weapon systems. On this view, only living beings have the moral capacity to justify the killing of another life form or human being, and machines do not and never will have that capacity. The Martens Clause of the Hague Convention of 1899, already enshrined in Additional Protocol I of the Geneva Conventions, states that civilians and combatants remain under the protection "of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience."33 Opponents of fully autonomous weapon systems argue that by removing individuals from decisions over life and death, such weapons are inherently incompatible with human values and morality, and should therefore be prohibited. The intelligence community has apparently developed a sense of purpose and direction for armed services operations toward the relatively secure, ethical, and fully accountable use of AI and unmanned weapon systems, indicating that it is concerned about the problem.34 Only a very few completely autonomous robotic armaments are currently used in modern warfare, yet many nations are designing and implementing a huge spectrum of machinery with high levels of automation. Countries are adamant about fielding these weapon systems as soon as possible, lest they fall behind in the race for autonomy. Before such self-directed weapon systems become increasingly commonplace, dignitaries and policy experts must weigh the pros and cons of a blanket ban and consider other options to ensure that these weapons are not used to commit unlawful acts or to provoke potentially disastrous escalation.

QUESTIONS SURROUNDING L.A.W.S. IN THE CONTEXT OF USE OF FORCE

The question of attributing liability for the damage caused by an autonomous weapon becomes important when considering a state's right of self-defence under Article 51 of the U.N. Charter and developments such as the use of pre-emptive strikes.

In a cyber environment, apportioning blame for a use of force to a specific state or entity is very challenging. This is related to the nature of cyberattacks as well as the internet's structure. Cyber operations like the Stuxnet worm, for example, may employ malicious software, and the victim may be able to detect an assault and even determine that the attack is the product of a specific piece of software.35 However, identifying the malicious software's creator is challenging. It may be feasible to reverse engineer the software to retrieve something close to the original source code, which could provide information about the creator, but this is not successful in every attack.
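One family of techniques that such reverse engineering feeds into is code-similarity analysis: comparing artifacts recovered from a new sample against samples previously tied to known actors. The minimal Python sketch below, using entirely hypothetical code fragments and actor names, illustrates the idea and its limits; it is not a real forensic tool.

```python
# A toy illustration of code-similarity attribution. All fragments and
# actor names are hypothetical; real attribution combines many weak
# indicators and is far harder than this sketch suggests.

def ngrams(data: bytes, n: int = 4) -> set:
    """Overlapping byte n-grams of a code fragment."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A & B| / |A | B|, in [0, 1]."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Fragments previously attributed (hypothetically) to two actors.
known = {
    "actor_A": b"xor eax, eax; call decrypt_payload; jmp start",
    "actor_B": b"push rbp; mov rbp, rsp; call beacon_c2; ret",
}

# A fragment recovered by reverse engineering a new attack.
recovered = b"nop; xor eax, eax; call decrypt_payload; jmp start"

for actor, fragment in known.items():
    score = jaccard(ngrams(recovered), ngrams(fragment))
    print(f"similarity to {actor}: {score:.2f}")

# High similarity only *suggests* shared tooling: code can be copied,
# bought, or deliberately planted to mislead, which is why similarity
# alone cannot ground state responsibility.
```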

The obscurity involved in the deployment of such an algorithm may make it difficult for a victim state to determine whether the assault was deliberate or unintentional. For example, understanding the logic behind a machine-learning conclusion may be challenging, and in any case it may be simple for the aggressor to hide or mask purpose inside the code. Of course, the "aggressor" state might argue that the attack was an accident, while the victim state could claim the right to retaliate regardless of purpose. Additionally, the U.S., the U.K., and certain other countries have claimed that states have the right to defend themselves against armed groups operating from another state's territory without attributing the attack to that state. That is, the right of self-defence is based on the victim state's need to employ force to defend itself, not on the identity of the attacker. However, nations and experts have acknowledged the necessity of identifying the entity behind an armed attack, even if it is an armed group rather than a state, before adopting a violent response in these circumstances.36
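The opacity point above can be made concrete. In the hypothetical sketch below, a standard machine-learning classifier reaches a verdict about a contact, yet its "reasoning" is spread across thousands of learned thresholds, so no single human-legible rule explains the output; the feature names, data, and labels are all invented for illustration.

```python
# A minimal sketch of why a machine-learning decision can be opaque.
# Synthetic data and feature names are hypothetical; the point is only
# that the model yields a verdict without a human-legible rationale.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical sensor features: [signal_strength, approach_speed, emitter_count]
X = rng.normal(size=(500, 3))
# Hypothetical labels: 1 = "hostile", 0 = "benign", from an arbitrary hidden rule.
y = ((X[:, 0] * X[:, 1] - X[:, 2]) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

contact = np.array([[0.8, 1.1, -0.4]])      # a new, hypothetical contact
print("verdict:", model.predict(contact))    # e.g. [1] -> "hostile"
print("confidence:", model.predict_proba(contact))

# The forest's "reasoning" is distributed over thousands of branch
# thresholds across 100 trees; no single rule explains the verdict,
# which is the opacity problem described in the text.
```

If a use of force followed from such a verdict, the deploying state would struggle to demonstrate, and the victim state to discern, the intent behind it.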

In essence, the question posed by the possible use of untraceable armed assaults is whether a victim state that is attacked beyond its boundaries by an algorithm has the right to retaliate even if it is unable to identify the entity, state or armed group, that is responsible for the algorithm. If so, governments must examine whether further restrictions should be imposed in light of these unique circumstances. Thus, any future framework attributing liability to states for the acts of L.A.W.S. should require due diligence by the victim state before it retaliates in exercise of its right of self-defence.

Further, in the use-of-force context, transparency and the ability to explain are especially essential. The UN Security Council, allies, other nations, and local and foreign stakeholders may urge a state to justify why it used force in a specific way. In some circumstances, another state may be able to bring a dispute against it in the International Court of Justice, forcing it to account for its decisions more explicitly. Furthermore, being clear about what facts influenced a decision to use force might help a state satisfy other countries that its use of force was justified.37 As a result, governments must be prepared to answer concerns regarding the employment of algorithms in the jus ad bellum setting, where foreign states and the general public will be unhappy with a government response that claims, "We started a war because the computer ordered us to."

CONCLUSION

The emerging innovation and threat posed by L.A.W.S. is unprecedented. It is unknown whether the probability of an autonomous system malfunctioning poses a larger threat than the uses surrounding it, or whether its utility outweighs this probability. Currently the technology is still in the initial phases of development, with more domestic use than militarization; even its limited military use does not take the form of offensive strikes, being confined to gathering intelligence or aiding the manned weaponry which states possess, and no command centre is yet associated with an AI force. Overly stringent mechanisms curtailing the use of the technology may therefore be anti-utilitarian. Thus, until the initial developmental phase is completed, a minimal but complete code for its development, use, and testing must be laid down by the international community.

As machine learning technologies spread, there are some potentially larger implications for the use of force, which we note here for further study. One big concern is whether the progressive use of learning algorithms, and the ultimate automation of cyber responses, would enhance or lessen the possibility of governments using force, either offensively or defensively. Other questions follow: will the technologies help governments that already have technologically advanced militaries become even more powerful and eliminate impediments to resorting to force? Or will these tools instead serve as something of a deterrent, at least among states?

1 Students at National Law University, Jodhpur

2 Jake Okechukwu, Weapons powered by artificial intelligence pose a frontier risk and need to be regulated, The World Economic Forum (June 23, 2021). https://www.weforum.org/agenda/2021/06/the-accelerating-development-of-weapons-powered-by-artificial-risk-is-a-risk-to-humanity/.

3 Id.

4 Sasha Radin, Expert views on the frontiers of artificial intelligence and conflict, International Committee of the Red Cross (March 19, 2019). https://blogs.icrc.org/law-and-policy/2019/03/19/expert-views-frontiers-artificial-intelligence-conflict/.

5 Netta Goussac, Safety net or tangled web: Legal reviews of AI in weapons and war-fighting, International Committee of the Red Cross (April 18, 2019). https://blogs.icrc.org/law-and-policy/2019/04/18/safety-net-tangled-web-legal-reviews-ai-weapons-war-fighting/.

6 Heather M. Roff, Lethal Autonomous Weapons and Jus Ad Bellum Proportionality, 47 CASE W. RES. J. INT’L L. 37 (2015).

7 P. Asaro, On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making, 94(886) INTERNATIONAL REVIEW OF THE RED CROSS 687 (2013).

8 Id.

9 MINISTRY OF DEFENCE, Report on Unmanned Aircraft Systems (JDP 0-30.2), 2 (Sept. 12, 2017). https://www.gov.uk/government/publications/unmanned-aircraft-systems-jdp-0-302.

10 RONALD C. ARKIN, A ROBOTICIST'S PERSPECTIVE ON LETHAL AUTONOMOUS WEAPONS SYSTEMS 221 (2017).

11 Id. at 231.

12 Usman Ahmad, International Law and Militarization of Artificial Intelligence, Research Society of International Law (March 10, 2020). https://rsilpak.org/2020/international-law-and-militarization-of-artificial-intelligence/#_ftnref5.

13 Oscar Schachter, The UN Legal Order: An Overview, 3 THE UNITED NATIONS AND INT’L. L. 17 (1997).

14 Rüdiger Wolfrum, International Law, 1 MAX PLANCK ENCYCLOPEDIA OF PUB. INT'L. L. 1, 16 (2006).

15 John Bolton, Is There Really Law in International Affairs, 10 TRANSNAT’L L. & CONTEMP. PROBS 29 (2000).

16 LOUIS HENKIN, HOW NATIONS BEHAVE 47 (2d ed. 1979).

17 Id.

18 Olivia Erdélyi & Judy Goldsmith, Regulating Artificial Intelligence: Proposal for a Global Solution, ASSOCIATION FOR THE ADVANCEMENT OF ARTIFICIAL INTELLIGENCE 1, 2 & 9 (2018).

19 MARGARET A. BODEN, ARTIFICIAL INTELLIGENCE: A VERY SHORT INTRODUCTION 19 (2018).

20 Samantha Besson, Sovereignty, MAX PLANCK ENCYCLOPEDIA OF PUBLIC INTERNATIONAL LAW (2011).

21 Jianming Shen, The Basis of International Law: Why Nations Observe, 17 DICKSON INT'L L. 287 (1999).

22 Ajey Lele, A military perspective on lethal autonomous weapon systems, UN Office for Disarmament Affairs (Nov. 30, 2017). www.un.org/disarmament.

23 Neil Davison, A legal perspective: Autonomous weapon systems under international humanitarian law, International Committee of the Red Cross (2018). https://www.icrc.org/en/download/file/65762/autonomous_weapon_systems_under_international_humanitarian_law.pdf.

24 Additional Protocol I, Art. 48.

25 Additional Protocol I, Art. 51 (5)(b).

26 Supra note 23.

27 Pakistan calls for moratorium on production of Lethal Autonomous Weapon Systems, The Nation (2018). https://nation.com.pk/01-Nov-2018/pakistan-calls-for-moratorium-on-production-of-lethal-autonomous-weapon-systems (last visited Oct. 1, 2021).

28 Amandeep S. Gill, Lethal Autonomous Weapons Systems, UN Office for Disarmament Affairs (Nov. 30, 2017). www.un.org/disarmament.

29 Supra note 11.

30 Group of Governmental Experts Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS), Emerging Commonalities, Conclusions and Recommendations (August 2018). https://www.unog.ch/unog/website/assets.nsf/7a4a66408b19932180256ee8003f6114/eb4ec9367d3b63b1c12582fd0057a9a4/$FILE/GGE%20LAWS%20August_EC,%20C%20and%20Rs_final.pdf.

31 United Nations, Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May be Deemed to be Excessively Injurious or to Have Indiscriminate Effects (and Protocols) (As Amended on 21 December 2001), (2020) 1342 UNTS 137.

32 Supra note 29.

33 The Hague Conventions of 1899 and 1907, MÉDECINS SANS FRONTIÈRES, The Practical Guide to Humanitarian Law. https://guide-humanitarian-law.org/content/article/3/the-hague-conventions-of-1899-and-1907/.

34 Michael T. Klare, Autonomous Weapons Systems and the Laws of War, Arms Control Association (2019). https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war.

35 Ashley Deeks, Noam Lubell & Daragh Murray, Machine Learning, Artificial Intelligence, and the Use of Force by States, 10(1) JNSLP 23 (2019). http://repository.essex.ac.uk/22778/.

36 Id.

37 Supra note 34.