Lethal Algorithms: Humanity in Automated Warfare

By Jiner Huang ’28

From gunpowder to nuclear weapons, technological advancements have reshaped the nature of conflict. Today, artificial intelligence (AI) puts us at the edge of a new revolution, in which machines make life-or-death decisions without human control. Lethal autonomous weapons, also known as “slaughterbots” or “killer robots,” use pre-programmed AI to identify and attack targets matching specified profiles without human intervention. Whereas even unmanned strike drones require a remote human operator to make the decision to fire, autonomous weapons act on their algorithms alone.

In the ongoing Russia-Ukraine War, autonomous drones currently account for over 50% of battlefield casualties, according to a statement by Congressman Pat Harrigan. General Valerii Zaluzhnyi, Ukraine’s former military Commander-in-Chief, noted that the low cost and open-source software of many weapon components have allowed local startups to rapidly develop a variety of autonomous attack drones, which are increasingly being deployed in the conflict.

Defense analysts have identified multiple ways in which lethal autonomous weapons enhance military force and efficiency. The Department of Defense’s Unmanned Systems Roadmap: 2007-2032 reasons that robots are better suited than humans for “dull, dirty or dangerous missions.” By replacing human warfighters with robots, militaries can not only reduce casualties but also improve operational efficiency, since personnel can be spread across more missions.

Furthermore, the deployment of autonomous weaponry could create substantial long-term savings. David Francis, in his 2013 Fiscal Times article, cited Department of Defense figures showing that “each soldier in Afghanistan costs the Pentagon roughly $850,000 per year,” while “the TALON robot—a small rover that can be outfitted with weapons, costs $230,000.”

The most controversial issue, though, is ethics. Proponents of integrating automated fighters into military conflicts argue that these robots are ethically preferable to human fighters; roboticist Ronald C. Arkin, for example, believes that autonomous robots could act more “humanely” on the battlefield because they lack self-preservation instincts, potentially eliminating the “shoot first, ask questions later” attitude. Autonomous weapons also make decisions unclouded by emotions like fear and hysteria, and their systems can process far more sensory information than a human can. Arkin’s research further suggests that in teams composed of both humans and robots, robots would report ethical infractions more reliably than their human counterparts.

Meanwhile, the greatest argument against employing autonomous weapons is that their use is not morally justifiable. One of the most important rules of armed conflict is the Principle of Distinction, which mandates that militaries distinguish between civilians and combatants in any conflict; autonomous systems may struggle to make that distinction, leading to civilian casualties and collateral damage. Moreover, the principle of jus in bello, a fundamental condition of international humanitarian law, requires that someone be held responsible for every civilian death. Yet because automated targeting systems make decisions “on their own,” it becomes nearly impossible to enforce accountability for casualties. Analogously, if a self-driving car violates the speed limit, it would be difficult to ticket the passengers of the car, much less the car itself. Thus, opponents of lethal autonomous weapons argue that these systems cannot meet jus in bello requirements.

Algorithmic decision-making creates additional concerns related to battlefield logistics. When weapons simply follow the trajectory set by pre-programmed software, attacks become cheaper and more efficient, but at the risk of larger-scale conflicts and casualties. Research by the RAND Corporation found that “widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.” The threats of conflict escalation, proliferated attacks, unpredictability, and even mass destruction could destabilize military norms globally.

In July 2015, an open letter from the Future of Life Institute called for a ban on autonomous weapons, warning that “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, and the stakes are high.” The letter featured an impressive list of signatories, including Elon Musk and Stephen Hawking. 

However, rather than prohibiting all military applications of AI, the UN Secretary-General and the International Committee of the Red Cross (ICRC) call for states to negotiate a treaty on autonomous weapons systems by 2026. After all, many current applications of military AI (automated missile defense, for one, is used by thirteen countries) do not raise pressing concerns. The ICRC recommends that states adopt legally binding regulations for autonomous weapons, including prohibiting the targeting of humans, restricting unpredictable systems, and requiring some degree of human control.

Automated Decision Research found that in 2025, an overwhelming majority of states (129 out of 195) supported negotiating legally binding rules. Fifty-four countries’ positions remain undeclared, while twelve, including the United States, India, and Russia, have declared opposition to negotiations. Though countries still disagree on how much “autonomy” autonomous weapons should retain, most agree that the issue can no longer be postponed. Nations’ willingness to establish binding agreements and treaties in the coming years will determine the future of not only automated warfare but also our collective security.
