AI Ethics is a hot topic in the artificial intelligence world. It features in keynote speeches at major conferences and spawns entire dedicated safety teams at large companies—all the while, government, industry and academic leaders make a point of how hard they’re working to make sure AI proceeds in an ethical way. Ostensibly, this is a response to well-founded fears about the technology’s possible (and proven) downsides, like its threat to the job market, or potential for harm in mental health settings.
But these conversations omit an honest consideration of the scariest application of all: the use of artificial intelligence to build weapons. AI’s use in the military doesn’t receive as much media attention as other industries, but that does not mean it isn’t happening. And if AI is really so capable that it threatens to replace our livelihoods, how much more threatening is it to imagine that same capability directed deliberately at causing death and destruction?
Such capability is, as of right now, in its nascent stages. The vast majority of Earth’s existing weaponry has nothing to do with artificial intelligence. But the US and, insofar as one can tell, China, are betting that the next generation of weapons will. AI could soon reshape military power in several key areas, including intelligence analysis, vehicle guidance and control, and target acquisition, all areas where machine learning has improved significantly in recent years.
Drones are an obvious application for AI guidance and control, but before it can be deployed there, AI still needs to clear some technical hurdles. First, it needs to become more functional on constrained hardware, meaning devices with limited battery life and processing power. (A typical drone has only enough energy to stay in flight for about 30 minutes, and so can spare only a small fraction of its power for data processing.) Second, it needs to be integrated into low-latency systems, those with near-instant reaction times. Third, it needs to handle noisy and incomplete data, because that is the only sort of data available in real time on a battlefield. Show a large, state-of-the-art computer vision model a clean video of a busy city street and it can easily locate the head of every human; try the same task with a model small enough to fit on drone hardware, using low-quality footage, and the results probably wouldn’t be usable. This may soon change, though. All three hurdles are active areas of research, and automatic target acquisition systems are already starting to be tested on the battlefield, which is the only real way to find out whether a new military technology works.
Ukraine’s Operation Spiderweb, in June 2025, constituted one of these tests. Ukraine smuggled over 100 drones across the Russian border and launched them at long-range bomber aircraft parked at Russian air bases. Zelensky claimed the attack hit 41 aircraft worth $7 billion. Putin has not commented on numbers, and has characterized the operation as a terrorist attack against civilians, but the magnitude of his response suggests the operation was a military success: it inflicted serious damage.
Normally, drones are piloted by streaming video feeds and sensor data back to a human pilot at base, who moves a joystick that sends its own signal back, telling the drone where to move and what to fire at. Wireless signals are notoriously unreliable, though, especially from a moving source, and can be jammed by the enemy. Operation Spiderweb still relied mostly on remote human pilots, but its drones made some decisions autonomously, such as navigation and small adjustments in position. And if the connection was lost entirely, the drones had a fallback in the form of an on-board targeting system.
Exactly how much of the operation was automated isn’t clear; Ukraine has an incentive to overstate the role of AI to make its military seem more advanced. But the operation is part of a series of developments toward greater drone autonomy. Drones have been used extensively by both Ukraine and Russia, and they are clearly going to be a key part of future conflicts. At present, “jamming” is one of the best defenses against them, because it can take out multiple drones at a time: broadcast radio-frequency interference over the signals between drone and pilot, and everything using that frequency band in the area stops working. If drones could operate fully autonomously (fly to the target area, find the target, aim accurately, and fire), they would need almost no external signal at all. The only defense would be to shoot them down physically, one at a time (difficult against a large swarm), using missiles that cost more than the drones themselves. This could be the sort of game-changing technology that big military powers have their eyes on.
Meanwhile, on the site of the world’s second-highest human-inflicted death rate, the Israel Defense Forces (IDF) is spearheading a different area of AI militarization. Unlike the war in Ukraine, fought between two advanced state militaries, the war in Gaza pits a powerful state military against a small paramilitary group and a large unarmed civilian population. This means the IDF needs AI not for front-line combat but for intense surveillance, so that it can track and kill anyone suspected of being a threat. (The IDF may have killed more than 60,000 Palestinians in Gaza, but over 2 million people are still alive there.)
Keeping track of the movements of even 10 percent of Gaza’s population means processing a lot of data: cell phone locations, video footage, tapped phone lines, etc. It is only possible with a lot of automation, and that is precisely what the IDF has used. One automated system, Lavender, identifies potential Hamas operatives from a database; its output can then be given a cursory human approval before an attack is ordered. The system is trained on known examples of Gazans with some involvement in Hamas or Palestinian Islamic Jihad and learns general patterns in their behaviour. When it finds new people exhibiting the same rough patterns, it marks them as targets. A senior IDF officer told the Israeli-Palestinian publication +972 that he has “much more trust” in Lavender than in human soldiers, because the technology takes emotion out of the equation: “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
This technology can then be coupled with other automated systems, like the grotesquely named “Where’s Daddy?”, which flags the moment a target enters a given location, normally their family home, where they can be easily killed. As the IDF source put it: “It’s much easier to bomb a family home.” It’s possible that “Where’s Daddy?” also uses machine learning to predict things like future visits or how long they will last. Whatever the exact techniques behind Lavender and related systems, they are nothing like ChatGPT (the underlying methods are much older), but they still enable military action at an unprecedented scale. In the past, it was impossible for a state to monitor the movements of even a fraction of a small country’s population, which made it very difficult to defeat a population broadly united against it. But as machines get better at recommending targets, the balance shifts toward a wealthy, technologically advanced minority and away from a much larger, less well-equipped public.
So AI is starting to be used at various stages of the “kill chain.” Lavender and Where’s Daddy? serve the first two steps, identify and find, while the drone warfare seen in Ukraine concerns the latter two, dispatch and destroy. You might wonder what the big deal is, amid the omnipresent atrocities of war. Whether someone is identified by a machine or a human, shot by a drone or a soldier, they die the same way; and while the machine can make mistakes, so can the human. But even if you think in such utilitarian terms, there are a few reasons to be concerned about the current weaponization of artificial intelligence.
Firstly, adoption could outstrip frameworks for responsible use. After the invention of the atomic bomb, it took some 20 years before negotiations even began on the Non-Proliferation Treaty, and the Comprehensive Test Ban Treaty, adopted decades later still, has yet to enter into force. We’re unlikely to see a single AI weapon as significant as the nuclear bomb, but cumulatively AI may keep changing what weapons are capable of, and that process may happen faster than we can agree on new laws or treaties.
Secondly, it could make everyone both more powerful and more vulnerable, a situation known in military theory as the capability-vulnerability paradox. Oil, for example, made World War II militaries far more capable: they had no choice but to adopt it, in tanks, ships, aircraft and bomb production, or be left behind. But it also meant that if they ran out of oil, everything fell apart. For a military to use AI, it needs a steady supply of computing infrastructure, well-trained workers who understand the technology, and lots of data. Lose even one of these and the entire system could break. When both sides trade vulnerability for increased capability, the game-theoretic calculation shifts: a first strike looks more attractive, because you can cripple your opponent before they respond.
Finally, the sheer uncertainty of how AI might develop creates a general instability in geopolitical relations. It makes everyone more suspicious and more prone to short-term thinking, both of which push toward escalation.
War and geopolitics are extremely complicated, and the inclusion of artificial intelligence in the equation makes them even harder to understand. We should be devoting a lot of effort to figuring out the right principles to operate by and trying to get some idea of how we want this to play out. This is why it’s so disappointing that the movement billing itself ‘AI Ethics’ makes essentially no mention of the topic. Among the problems it does fixate on is the idea that AI systems might learn to deceive or disobey humans, and that if we don’t work hard to ‘align’ them, they may start to intentionally harm us; the plausibility of this threat is still a matter of debate. Other AI Ethics subfields address clear problems, but ones of comparatively minor importance, like the way chatbots can reflect societal biases, for example by assuming a doctor is a man or perpetuating stereotypes. Still other focuses aren’t ethical problems at all, in the sense that they don’t involve questioning what’s right or wrong: making sure AI medical diagnoses are accurate, say, or that driverless cars don’t crash.
The movement does sometimes talk about preventing misuse by “bad actors,” which at least acknowledges that AI can enable humans to do bad things to each other. But this doesn’t come close to the heart of the matter. Firstly, it focuses almost entirely on chatbots, constructing semi-plausible scenarios in which they allow an individual to do terrible damage (by helping them build a bioweapon, say), even though autonomous vehicle control and object detection have much more direct uses in weaponry. Secondly, these so-called bad actors, while never specified in detail, clearly do not include, for example, the US Department of Defense; the world’s single biggest spender on AI weaponry is simply ignored.
A look at DoD spending provides a partial explanation for this glaring hole in AI ethics discourse. As it turns out, plenty of AI researchers and practitioners receive military funding; depending on how you count, it may be that many do. The cultural divide between Silicon Valley and the military-industrial complex doesn’t stop the former from accepting tens, if not hundreds, of billions of dollars from the latter. Some of this comes in the form of DoD contracts, the kind traditionally gobbled up by the likes of Lockheed Martin. Giants such as Amazon and Microsoft receive DoD contracts, as do Palantir and a host of military AI startups.
Money also arrives as research grants to universities and other institutes. Carnegie Mellon University has over 20 contracts with the DoD, several of which fund AI research: over $3 million from the US Army’s Artificial Intelligence Task Force for reconnaissance tools to increase soldiers’ “lethality and survivability,” about $5 million pertaining vaguely to “AI fusion,” and about $13.4 million to “fortify the nation’s security and defense.” In total, the DoD spent over $3bn on basic research in 2022, and its budget has continued to grow since. The exact topics of interest are classified, to prevent other countries from inferring US strategy, but given the department’s prioritization of the area, AI research is likely a substantial chunk. And that is just basic research; a separate $80bn goes to developing and deploying military applications. Much of the work presented at the big AI conferences could plausibly be funded under the basic-research heading. China spends a third as much as the US and keeps an even tighter lid on the details, but again one would assume a significant portion goes to AI research.
Where does this money end up? It’s certainly possible to find AI research papers, from both universities and industry labs, with explicit support from DARPA (an agency under the DoD), and there may be many more that downplay the connection. Most conferences now require authors of new AI research to comment on the potential harms of what they’re building. If the technology may be used in a weapon, the honest answers are “could be used to kill people,” “could exacerbate violent state surveillance and ethnic cleansing,” or “could contribute to the AI arms race and the destabilization of geopolitical relations.” But you’re hardly going to write anything so challenging to your funders. And if your research was specifically funded by the military, you’re certainly not going to list mass surveillance, death, and destruction as potential harms; those outcomes are the whole point.
Another key element in understanding why AI ethics focuses on what it does is the broader beliefs of the small number of people who drive the movement. Much of the AI world is led by Silicon Valley, especially OpenAI and Anthropic, and so AI Ethics has been infused with their particular worldview. That worldview rests on a few core dogmas: that science and technological progress can solve nearly all of today’s problems; that AI, and specifically LLMs, will radically transform most aspects of life in the next few years by rapidly accelerating science and technology; and that it is essential for the US to lead this revolution, because it is the only country that will use this untold power benevolently.
Any problems that don’t resonate with this framework are sidelined—especially if they challenge the idea that the United States is a unilateral force for good. Under this worldview, it’s palatable to discuss a misdiagnosed medical scan, but it’s much more uncomfortable to consider the IDF using machine learning and US-provided computing services to plan bombing family homes. While one can make the former out to be very concerning, it doesn’t challenge any of the core dogmas, whereas the latter most certainly does.
The favorite dystopian fantasy is that of a rogue superintelligence, one that pursues its objective so effectively that it starts to kill humans. (For example, an AI instructed to eradicate cancer achieves this by exterminating the human race.) These scenarios play neatly into the belief that LLM-based products are immensely powerful, and they cast their creators as the noble guardians of this great power. Thank God it is us, the good guys, who are the best in the world at building LLMs, because we will take care to do it justly, unlike those other guys. Of course, ‘other guys’ here essentially means China: the only other country able to produce world-leading LLMs.
When the Chinese company DeepSeek released its R1 model in January 2025, many in Silicon Valley reacted as if it were an act of aggression by the Chinese Communist Party. There was a slew of calls for the US to up its game in the AI arms race, and Meta even assembled emergency “war rooms” to analyze where it had been outdone. Normal market competition was reframed as a defense of democracy itself, with Silicon Valley engineers on the front line.
This gives another perspective on the “bad actors” mentioned above. Restricting GPU exports or banning DeepSeek can now actually be sold as AI ethics: a way to ensure freedom triumphs.
This comes from the belief that whoever controls LLMs will be positioned to control everything else. The same logic is used by those hawkish enough to argue directly for building AI weapons: China is doing it, they say, and China will defeat us unless we do too. Anduril CEO Palmer Luckey gave a TED talk in April 2025 titled “The AI Arsenal That Could Stop WWIII,” which described a doomsday scenario of China conducting a bolt-from-the-blue, full-scale invasion of Taiwan. Luckey argued that this could soon be a reality unless the US fields a vast AI-driven arsenal to repel the attack. On the face of it, he makes a convincing case. Opinions differ on how likely such a scenario is, but there is broad agreement that it is possible. The Chinese naval expansion under Xi Jinping, coupled with his consistent position on the need for reunification, is genuinely frightening, and it can seem both comforting and sensible to beef up militarily in response. But where does this path lead? Palantir CEO Alex Karp recently told CNBC, regarding the AI arms race: “Either we will win, or China will win.” Karp was expressing the same all-in commitment to a zero-sum game, but also implying that the race has a finish line: that one day the US will win and the whole problem will be resolved.
Anthropic CEO Dario Amodei seems to believe something similar, saying that democracies “may be able to parlay their AI superiority into a durable advantage,” so that “our worst adversaries… give up competing with democracies in order to receive all the benefits and not fight a superior foe.” This is a very optimistic view of how arms races work, and it betrays a lack of understanding of China’s perspective. Central to Chinese identity and values is the historical ‘Century of Humiliation’, beginning with the Opium Wars in the 1800s, when Western naval superiority forcibly imposed unfair and harmful trade agreements. China would never accept anything resembling a repeat.
Unfortunately, this reality is lost on current policy makers. When the Biden administration restricted GPU exports to China in 2022, it primarily cited national security reasons—implying China might use them for game-changing weapons. But the restrictions targeted chips used for LLMs, which have little near-term military utility. The effect was to antagonize China, trigger memories of the Opium Wars, and tilt the market toward U.S. firms. Eventually, China can simply produce its own chips or design around weaker ones, as DeepSeek already did. The strategy buys at most a few years, while ratcheting up tensions.
That short-termism is the failure of all these perspectives. Each accepts a future of arms races and antagonism, and ignores the one outcome that isn’t terrible: a peaceful resolution. Through compromise and careful diplomatic maneuvering, it remains possible for Taiwan and the PRC to resume dialogue and cool off, and for China–West relations to stabilize. This is what we should be devoting our energy to achieving. Instead, the AI ethics community sits comfortably inside the worldview of US supremacy; without getting its hands dirty by actually mentioning the military, it largely reflects the techno-hawkism of Luckey and Karp.
In the end, then, the silence of the AI Ethics movement on AI’s burgeoning military use is unsurprising. The movement says nothing controversial to Washington (including the military-industrial complex), because Washington is a source of money as well as an invaluable stamp of importance. It’s fine, even encouraged, to make veiled digs at China, Russia or North Korea, the “bad actors” it sometimes refers to, but otherwise the industry avoids anything “political.” It also frames the issues mostly around LLMs, because it wants to paint its leaders’ products as pivotally important in every respect. That makes it awkward to bring in military applications, because it’s fairly obvious that LLMs have little current military value.
I personally came to AI research nearly ten years ago, from a deep curiosity about the nature of the mind and the self. At that time it was still a somewhat fringe subject, and as the field exploded into public awareness, I’ve been horrified to watch it intertwine with the most powerful and destructive systems on the planet, including the military-industrial complex, and, potentially, the outbreak of the next major global conflicts. To find the right way forward, we need to think much more deeply about where we’re going and what our values are. We need an authentic AI Ethics movement that questions the forces and assumptions shaping current development, rather than imbibing the views passed down from a few, often misguided, leaders.
There are many new difficult questions facing the AI world, with likely many more on the way, and so far we have barely found the courage to even ask them.