The Ethical Dimension of Military AI


Last month, a first-of-its-kind conference in the Netherlands brought together a collection of states with the goal of promoting greater responsibility in the implementation and use of artificial intelligence within the global defence sector. The summit concluded with many major military powers, including the US and China, signing a non-binding agreement calling for increased caution and responsibility regarding military AI. Such an agreement reflects a justified concern about the rate at which AI is being adopted by governments across the world to supplement their military capabilities. That trend, however, also demands nuance. AI is proving to be an extraordinarily powerful tool, especially within weapons systems and cyber warfare capabilities. In light of this, the international community has an obligation to guide the development of militarily applicable AI and to promote restraint in its use. Such an outcome may be difficult to achieve, but the implications of inaction are frighteningly dire.

AI’s role within militaries

The role of artificial intelligence within militaries has expanded dramatically in recent decades, owing in part to its immense potential and in part to its popularity, which compels nations to remain competitive. Many uses of artificial intelligence are not as nefarious as one might imagine when thinking of military applications for new technologies. Logistics is one crucial area where AI can augment military performance while posing relatively little ethical concern. As the complexity of war increases, so too do the demands on military logistics branches and their supply chains (Rudas and Liu, 2021). Artificial intelligence may also prove immensely beneficial in its capacity to provide suggestive analysis to military leadership. Another by-product of the growing complexity of war is a dramatic increase in the variables at play on a modern battlefield. AI may therefore greatly enhance the ability of commanders and strategists to process information and weigh possible courses of action.

Such a result may prove ethically beneficial, as the AI in question could suggest avenues that promote quicker resolutions to conflicts at less cost to human life. Conversely, the opposite is equally possible: artificial intelligence has the potential to escalate conflicts as decision-making loses its human intuition and reverts to cold calculation. In this same vein, the emergence of Lethal Autonomous Weapons Systems, or LAWS, has greatly problematized AI's military applicability. These weapons systems are designed to identify and engage targets independently of human intervention. Such systems, once thought to belong to the realm of science fiction, are now increasingly prominent within militaries. The Chinese Air Force has been expanding AI's capacity for dogfighting by having fighter pilots train against AI opponents, with the AI demonstrating a profound ability to learn from and implement tactics observed in its human adversaries (Pickrell, 2021). Even more worryingly, in 2021 the US Air Force implemented AI into a kill chain, tasking it with identifying a target before lethal force was used. The Air Force applauded the success of the implementation, arguing it allowed for a more concise process between identification and engagement (Miller, 2021). While the AI in this instance did not take lethal action directly, the desire to streamline the process of identification and engagement could reasonably result in increased trust being placed in the discretion of AI systems to use lethal force. Such a reality carries grave moral implications which necessitate the involvement of the international community.

The Necessity of Restraint and the Nuclear Precedent

The conference in the Netherlands reflects this need for regulation of AI for military uses. However beneficial the conference may have been, more is certainly needed going forward in light of the destructive potential of these systems. An emerging academic literature advocates for such regulation. Bérénice Boutin proposes that states should be held accountable under international law both for violations of international law arising from the deployment of military AI systems and for failing to enact legally binding limitations on the capacity of military AI. Regarding the development of AI systems by transnational corporations, Boutin argues these organizations are equally responsible for the conduct of their technologies. Where state governments feel unable to regulate, international bodies such as the United Nations and the European Union must intervene on behalf of the public interest (Boutin, 2022).

Many may scoff at the prospect of international regulation of these weapons systems, as the international community has notably fallen short in creating effective legislation on equally dire crises such as climate change. Climate change, however, is not the only precedent to consider. Nuclear weapons stand as one notable example where international regulation, discourse, and diplomacy played an instrumental role in ensuring the security of all mankind. The Non-Proliferation Treaty, which entered into force in 1970, was highly effective in preventing dozens of states from obtaining nuclear weapons. The treaty created a respected international norm against the proliferation of nuclear weapons, and states which broke that norm to pursue nuclear programs, such as North Korea, became international pariahs (Abe, 2020). The 2015 Iran nuclear deal stands as another example of an effective arms control agreement, with Iran voluntarily meeting its nuclear commitments prior to the Trump administration's withdrawal from the deal in 2018 (Robinson, 2022). However, drafting such treaties and fostering the international support needed to sustain them takes time, and time may not be in abundance in such a rapidly expanding sector. Should military AI expand to dangerous levels before prohibitions on its proliferation are established, policymakers could look to the START treaties for inspiration. These treaties oversaw the voluntary reduction of nuclear arsenals by the US and Russia, with both countries meeting their targets and significantly reducing the nuclear threat they posed to one another and to the world at large. While Russia has recently suspended its participation in the treaty in reaction to Western sanctions over the Ukraine war, the treaties' success up to that point demonstrates that diplomacy remains an option for disarmament.


The integration of AI into military technology is a foregone conclusion, as the process has already begun in earnest. Across the world, some of the most advanced militaries have begun using AI to augment their forces and improve their proficiency. Not all of these initiatives are ethically problematic, as logistical and strategic functions for AI may save both human labour and human lives. However, a moral and human dimension must be kept at the forefront of military ethics, something AI seems ill prepared to accommodate. In light of this shortfall, the international community must be willing to step in and hold delinquent actors and states accountable for the proliferation and irresponsible use of lethal AI. The international community has demonstrated that diplomacy can keep the peace regarding nuclear weapons, and an equally strong effort should be made to impose similar controls on military AI, given its destructive capacity. While such an effort will require dedication and resources to bring to fruition, the international community has succeeded before and can succeed again in mitigating its worst destructive tendencies.


Abe, N. (2020) “The NPT at Fifty: Successes and failures,” Journal for Peace and Nuclear Disarmament, 3(2), pp. 224–233. Available at:

Bistron, M. and Piotrowski, Z. (2021) “Artificial intelligence applications in military systems and their influence on sense of security of citizens,” Electronics, 10(7), p. 871. Available at:

Boutin, B. (2022) “State responsibility in relation to military applications of Artificial Intelligence,” Leiden Journal of International Law, 36(1), pp. 133–150. Available at:

Miller, A. (2021) AI algorithms deployed in kill chain target recognition, Air & Space Forces Magazine. Available at:

Pickrell, R. (2021) China says its fighter pilots are battling artificial-intelligence aircraft in simulated dogfights, and humans aren’t the only ones learning, Business Insider. Available at:

Robinson, K. (2022) What is the Iran Nuclear Deal? Council on Foreign Relations. Available at: