Increased Use of AI in Military Operations and the Need for Legislative Guardrails
- Campbele Moon

Image: U.S. Department of Defense
From Ukraine to Gaza to Iran, 21st-century warfare has undeniably evolved. The rise of artificial intelligence (AI) is visible not only in civilian life but also on the front lines of military operations. While this technology has altered how wars are fought today and how they will be fought in the future, the integration of AI into modern warfare risks destabilizing global security unless NATO nations work together to create concrete regulations.
AI systems, when used for military strategy, can make decisions that directly affect entire populations. This poses an important question: how can the United States and its allies ensure AI systems are not built or operated by individuals with inherent bias? While bias is human nature, in wartime it can result in the autonomous targeting of a single group of people, with drastic consequences for certain populations. In wartime, regulations are not always followed; rules are broken and treaties are violated. The same can occur in the AI era: opposing nations can build their software with inherent bias. The United Nations (UN) must work to develop standards for the use of AI in wartime, similar to the World Health Organization's (WHO) guidelines on the use of AI in health.
In Ukraine, as in other war-torn countries, AI is commonly found in a nation's drones. Using AI, drones react to changing patterns and threats, making decisions autonomously. Warfare is undeniably evolving, making it more important than ever to regulate this rapid growth. Austrian Foreign Minister Alexander Schallenberg spoke on this, observing that "technology is moving ahead with racing speed, while politics are lagging behind." Where policy is lacking, legal gray areas emerge. This absence of policy risks decisions being made without accountability, and unnecessary deaths, akin to what transpired during a recent U.S. Air Force simulation.
Taking a similar stance, the International Committee of the Red Cross (ICRC) urged attendees of a UN Security Council meeting to enact policies against automated killings at the hands of AI. Cordula Droege, Chief Legal Officer of the ICRC, warned attendees of the dangers of handing decisions over to AI, especially decisions about human lives. She argued that a lack of human control violates international law, since such decisions become indiscriminate.
Organizations like the UN play a pivotal role in setting guidelines and initiating conversations about the use of AI in modern warfare. Several guiding questions need imminent answers: who is held accountable when war crimes are committed by AI-powered weapons? How can it be confirmed that opposing nations are not developing software that violates international law without consequence? What is the next step in creating concrete legislation to prevent catastrophe? These ethical and legal questions are being asked by organizations across the globe, but it is vital that more large and powerful institutions answer them. The lack of global regulation surrounding AI is astonishing and must be addressed, especially since this constantly improving technology is undeniably useful, but harmful when unregulated.
AI brings the blessing of reducing danger to human soldiers, enabling more accurate targeting, and allowing for more efficient communication and strategy. However, it also brings the curse of misuse and the risk of starting a new arms race. The dangers of AI are potent, which warrants regulation. If AI warfare capabilities continue to go unchecked, a new stage of warfare will begin, one that no nation may have the ability to win.