
Anduril & The Autonomous Weapons Dilemma

  • Jadon Teh
  • 13 hours ago
  • 2 min read


Albert Einstein once warned that "it has become appallingly obvious that our technology has exceeded our humanity." Anduril Industries embodies this warning. The company is at once everything innovative and everything terrifying about autonomous weapons. Founded in 2017 by Palmer Luckey, Anduril builds advanced AI-powered autonomous weapons around its open software platform, Lattice, which connects drones, towers, and sensors to detect and track targets without human control. Its Ghost and Fury drones operate in swarms, making coordinated decisions faster than any human could manage. Within ten years, these systems are likely to be integrated across every military branch, with comparable capabilities spreading to the big five defense contractors (Lockheed Martin, RTX, Boeing, Northrop Grumman, and General Dynamics). The progression from surveillance to decision-making to lethal action is clear and unavoidable. Each step removes another layer of human control, and that's where the ethical concerns arise. From a utilitarian perspective, autonomous weapons might reduce casualties through precision, but the deeper implication of their use is the dehumanization of warfare.


The big question is: who is responsible when an autonomous weapon fails? If an Anduril drone misidentifies a school bus full of teenagers as a military convoy, whom do we hold accountable? The commanding officer, the software engineer whose code produced the flawed target designation, the CEO, or the Department of Defense? The question is unsettling and cannot be ignored. As autonomous weapons are fielded faster, we're creating an accountability gap that could be exploited if this technology falls into the wrong hands. The technical risks are equally concerning: these systems can be hacked, adversarial inputs can trick machine learning models, and software bugs can cause catastrophic failures. We've already seen what happens when automated systems interact in lower-stakes environments. The 2010 stock market Flash Crash wiped out nearly $1 trillion in market value in minutes because automated trading systems entered a self-reinforcing feedback loop, each reacting to the others' outputs faster than any human could intervene (Kirilenko & Lo, 2013). Imagine that scenario with weapons.
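To make the feedback-loop point concrete, here is a deliberately simple toy simulation: two automated agents each over-react to the other's last output on a millisecond cycle, and the signal has already exploded by the time a human reaction window of roughly a quarter second has passed. Every constant here is a made-up assumption for illustration; this models neither real trading systems nor any weapons platform.

```python
# Toy illustration of a self-reinforcing feedback loop between two automated
# agents. All numbers are hypothetical assumptions; this models neither real
# trading systems nor any weapons platform.

HUMAN_REACTION_MS = 250    # assumed human notice-and-intervene window
AGENT_CYCLE_MS = 5         # assumed machine decision cycle
ESCALATION_FACTOR = 1.4    # each agent over-reacts to the other's last output


def run_feedback_loop(initial_signal: float = 1.0) -> None:
    signal_a, signal_b = initial_signal, 0.0
    elapsed_ms = 0
    while elapsed_ms < HUMAN_REACTION_MS:
        # Agent B reacts to Agent A's output, then A reacts to B's,
        # each amplifying the other before any human can step in.
        signal_b = signal_a * ESCALATION_FACTOR
        signal_a = signal_b * ESCALATION_FACTOR
        elapsed_ms += AGENT_CYCLE_MS
        if elapsed_ms % 50 == 0:
            print(f"t={elapsed_ms:3d} ms  signal={signal_a:,.0f}")
    print(f"Growth before a human could react: {signal_a / initial_signal:,.0f}x")


if __name__ == "__main__":
    run_feedback_loop()
```

The specific numbers are arbitrary; the point is the shape of the curve, not the values.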


Autonomous weapons aren't inherently evil, and there is no suggestion here that all military AI should be banned. But structures need to be in place as a precaution against mishaps. Another chilling prospect, as autonomous weapons become widespread, is the casual labeling of fatal incidents as "an AI mistake." These are failures we cannot afford. Anyone claiming complete certainty about how these systems will perform has not engaged honestly with the evidence. Technology is outpacing our ethical frameworks, our legal systems, and our ability to grasp the weight of the consequences. Rather than waiting to react, we must codify meaningful human control requirements into law instead of leaving them to voluntary military guidelines, and negotiate an internationally binding legal instrument modeled on the 1967 Outer Space Treaty to close the accountability gap before it is exploited in combat (Lee, 2024). Pre-deployment oversight must be institutionalized through decision audit trails, explainable AI requirements, and real-time monitoring systems before we reach a point of no return (Suleiman, 2024).
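As a rough illustration of what a decision audit trail could mean in practice, the sketch below shows the kind of per-decision record such a system might append before any engagement decision takes effect. The field names, identifiers, and schema are assumptions made up for illustration, not any real system's logging format.

```python
# A minimal sketch of a per-decision audit record, so that each autonomous
# engagement decision could be reconstructed and reviewed later.
# All field names and example values are hypothetical.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    timestamp_utc: str          # when the decision was made
    sensor_inputs: list[str]    # identifiers of the sensor feeds consulted
    model_version: str          # exact model build that produced the decision
    classification: str         # what the system believed it was seeing
    confidence: float           # model confidence in that classification
    recommended_action: str     # what the system proposed to do
    human_operator: str | None  # who approved or overrode, if anyone
    human_decision: str         # "approved", "overridden", or "not consulted"


def log_decision(record: DecisionRecord, path: str = "audit_trail.jsonl") -> None:
    """Append one audit line; in practice this would go to tamper-evident
    storage rather than a local file."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    log_decision(DecisionRecord(
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        sensor_inputs=["tower-12", "drone-07"],
        model_version="target-classifier-2.3.1",
        classification="civilian vehicle",
        confidence=0.62,
        recommended_action="do not engage",
        human_operator="op-114",
        human_decision="approved",
    ))
```

A record like this is only useful if it is written before the action executes and cannot be altered afterward; that is the substance behind the phrase "decision audit trail."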
