From: Project Liberty <[email protected]>
Subject: What are the ethics of AI at war?
Date: April 21, 2026 4:11 PM
  Links have been removed from this email. Learn more in the FAQ.

Will AI technology and autonomous weapons mean fewer deaths? Or will AI, programmed with its own ethics and optimized for speed, lead to more casualties in war?
As explored in part one of our series on AI and war in last week’s newsletter [ [link removed] ], AI is permeating every aspect of military operations—from identifying strike targets in Iran to mass surveillance in Gaza.
But AI is not a weapon itself. Instead, it operates as underlying technological infrastructure across all aspects of military operations. This makes regulating and governing AI in war more challenging, and it raises ethical questions about the specific circumstances of its use. Of course, this is true of AI outside military use cases as well: AI is moving from the application layer to the infrastructure layer, powering every aspect of the internet as we know it.
In this week’s newsletter, we examine the legal and ethical implications of AI’s use in war.
Subscribe for free to receive a weekly newsletter that keeps you at the frontier of responsible tech.
The ethics of AI in war
An AI system is not neutral. It comes programmed with its own biases, values, and peculiarities [ [link removed] ]. When it’s trained to classify risks that lead to an assassination via drone, the “utility maximization” logic of AI can undermine other ethical and humanitarian principles. For example, according to a 2025 paper [ [link removed] ] on AI governance in war zones, an AI system might calculate that eliminating a single militant justifies destroying every building within a 500-meter radius—killing dozens of civilians in the process.
Meaningful human oversight can identify those biases before they become body counts. But AI’s defining advantages—speed, efficiency, and automation—are precisely what eliminate the window for that oversight. The technology compresses the very space that would allow humans to correct and align it.
It’s conceivable that AI-enhanced weapons could reduce civilian casualties and save lives. But a RAND Corporation simulation [ [link removed] ] found that a swarm of 1,000 drones could increase civilian casualty rates to 38%. The speed, efficiency, and accuracy that make AI-enabled drones and other weapons valuable tools in war could lead to more strikes.
Then there’s the question of who’s responsible. When an AI generates a target, a human authorizes a strike, and a drone executes it, who is legally and ethically responsible for a wrongful death? The officer who made a decision in less than 20 seconds? The programmer who trained the model? The commander who approved the system’s deployment? The government itself?
Michael Groen [ [link removed] ], retired Lieutenant General and current Senior Advisor on Defense Policy at the American Security Fund (and a member of the Project Liberty Alliance [ [link removed] ]), told us:
“Meaningful human oversight of AI in military operations is grounded in the same ethical framework that has always governed warfare: jus ad bellum and jus in bello—justice in the decision to go to war, and just conduct within it. The core standards of military necessity, distinction, proportionality, and prevention of unnecessary suffering apply equally to autonomous systems as they do to human combatants.”
The ethical quandaries remain difficult even when lives are not on the line. Consider mass surveillance. Wartime surveillance technology can become peacetime infrastructure that the government rolls out elsewhere. Lavender-style population scoring, pattern-of-life analysis, and mass biometric tracking tend to migrate [ [link removed] ] into domestic law enforcement and border control. Earlier this year, the ACLU documented [ [link removed] ] how the government uses Palantir technology for ICE raids and border enforcement.
Groen added, “Military ethics offer no exemption for non-human systems employed outside these bounds. Any company operating in the defense space must operate within the international legal environment governing armed conflict—and if they seek to change those constraints, the path runs through legitimate legal and national authorities, not around them.”
What can be done
The Chemical Weapons Convention took 80 years [ [link removed] ] to move from first use to a comprehensive ban; 193 states [ [link removed] ] are now parties. AI presents a harder problem—it is not a weapon class but an infrastructure, spanning too many use cases for prohibition to be a viable frame. The more urgent question is whether the international community can establish meaningful constraints on use and accountability before the gap between capability and governance becomes irreversible.
Some of that work is already underway:
International treaties. As of 2025, more than 120 countries support negotiating a binding international treaty on lethal autonomous weapons [ [link removed] ]. UN General Assembly resolutions have passed with overwhelming majorities, and the Campaign to Stop Killer Robots coalition [ [link removed] ] spans 300 organizations across 70+ countries. Yet the nations developing these weapons fastest (the U.S., Russia, and Israel) are the ones refusing binding constraints.
Enforceable human control standards that comply with international law. Under International Humanitarian Law [ [link removed] ] (IHL), the body of law that governs conduct during war, the parties to armed conflict—the people involved, not the systems—are responsible for complying with it and can be held accountable for proven violations. This might create some measure of accountability for the individuals who oversee such systems.
Design changes to the technology itself. AI tools should allow human users to comprehend and question their logic and conclusions. An article in Humanitarian Law & Policy [ [link removed] ] by Wen Zhou, a legal advisor to the International Committee of the Red Cross, recommended specific design changes and restrictions (such as barring the use of AI in association with nuclear warheads) to limit the technology’s harmful effects.
Independent oversight and auditing of military AI. Oversight is extremely limited (or nonexistent), but, as we’ve explored in past newsletters [ [link removed] ], the ability of researchers and auditors to evaluate tools and platforms is necessary for accountability—particularly when the stakes are high.
None of this is sufficient. But the 80-year arc of chemical weapons governance is a reminder that the shape of what’s possible tends to shift faster than we expect, once enough people decide the current situation is unacceptable.
Visibility → Accountability
We’ve seen this pattern before: a powerful technology deployed faster than institutions can adapt, with the hardest questions deferred until the damage has been done. AI in war is just the sharpest edge of a larger problem: systems optimized for speed and scale, with no one clearly accountable for the outcomes.
But where there’s visibility, there’s a chance for accountability. Organizations like Bellingcat are shining a spotlight [ [link removed] ] on possible war crimes. And as AI companies like Anthropic draw hard lines about what their technology can and cannot be used for, there might be a model for what accountability looks like when governments won’t impose it on themselves.
Other notable headlines
// 📱 According to an article in The Markup [ [link removed] ], background checks to curb dating app violence are advancing in the California legislature. (Free).
// 🏛️ Anthropic’s new cybersecurity model could get it back in the government’s good graces, according to an article in The Verge [ [link removed] ]. (Paywall).
// 📄 The CIA has created the first intelligence report written without humans, according to an article in Semafor [ [link removed] ]. (Free).
// 👟 Sneaker company Allbirds is pivoting to AI. An article in The New York Times [ [link removed] ] reported that after selling its business for $39 million last month, the company said it planned to buy computer chips and rebrand itself as NewBird AI. (Paywall).
// 🤖 New research confirms that LLMs often perform better when you encourage them. An article in Platformer [ [link removed] ] makes the case for being nice to your chatbot. (Free).
// 🚛 AI is about to make the global e-waste crisis much worse, according to an article in Rest of World [ [link removed] ]. As demand for AI hardware surges, much of the resulting waste will end up in non-Western countries. (Free).
// 🤔 Silicon Valley is spending millions to stop one of its own. Alex Bores, a former Palantir employee, helped pass one of the country’s toughest AI laws. Now Silicon Valley’s biggest names are trying to stop his rise to Congress, according to an article in WIRED [ [link removed] ]. (Paywall).
// 👵 Where does our free time go in retirement? Too often, it’s social media, according to an article in The Wall Street Journal [ [link removed] ]. (Paywall).
Partner news
// Centre for International Governance Innovation’s Digital Policy Hub
April 28 | Waterloo, Ontario (Hybrid)
The Centre for International Governance Innovation [ [link removed] ] is hosting its Digital Policy Hub April 2026 Research Conference [ [link removed] ] at the CIGI campus in Waterloo, bringing together Digital Policy Hub fellows, alumni, and the wider community to present and discuss policy research on disruptive technologies shaping Canada and the world. Register here [ [link removed] ].
// Who gets a voice in shaping AI?
April 30 at 1pm ET | Virtual
All Tech Is Human [ [link removed] ] is teaming up with the Collective Intelligence Project [ [link removed] ] for a livestream conversation on democratic participation in AI governance. The discussion tackles a pressing question: Who has a say in how artificial intelligence is developed and governed? Register here [ [link removed] ].
// Apply for the Open Social Awards
Deadline: May 1
New_Public [ [link removed] ]’s Open Social Awards are accepting applications through May 1, 2026, recognizing teams building on open protocols with a €10,000 grand prize and two €5,000 excellence awards. Winners will be celebrated at the PublicSpaces Conference 2026 in Amsterdam, June 4–6. Learn more and apply here [ [link removed] ].
Thanks for reading! This post is public, so feel free to share it.
