
From Sceptics to True Believers

How Project Maven converted the Pentagon and reshaped the future of warfare

Eleanor Whitfield

Senior Technology Correspondent · 23 March 2026 · 5 min read

Photo: Clay Banks

The photograph that circulated through Pentagon corridors last autumn showed something that would have been unimaginable a decade ago: a four-star general, sleeves rolled to the elbow, hunched over a laptop running a machine learning model during a live operational briefing. The image was never meant to be public, but its existence tells us more about the current state of American military thinking than any number of official strategy documents. Project Maven, once the Pentagon's most controversial artificial intelligence programme, has become its most consequential.

When Google engineers revolted against Project Maven in 2018, prompting the company to announce it would not renew the contract, the episode was widely interpreted as a victory for ethical technology. The engineers argued, with considerable moral clarity, that building AI tools for military targeting crossed a line that no amount of corporate revenue could justify. What followed was a period of soul-searching across Silicon Valley that briefly made it fashionable to question whether defence contracts were compatible with the technology industry's self-image as a force for human betterment.

That conversation, such as it was, is now emphatically over. The sceptics who once populated the Pentagon's own corridors — officers who doubted that machine learning could meaningfully improve the fog-of-war decision-making that has defined military operations since Clausewitz — have become, in the words of one senior defence official, “true believers.” The conversion has been driven not by ideology but by operational results that even the most technologically conservative military planners have found difficult to dismiss.

The trajectory of Project Maven offers a case study in how institutions absorb transformative technologies. In its earliest incarnation, the programme was narrowly focused on using computer vision to analyse drone surveillance footage — a task that human analysts were performing with increasing difficulty as the volume of data outstripped their capacity. The Pentagon was drowning in imagery, collecting far more than its analysts could process, and Maven was conceived as a triage tool: a way to sort the overwhelming from the merely important.

What happened next followed a pattern familiar to historians of military technology. A tool designed for one purpose proved adaptable to many others. Maven's capabilities expanded from image analysis to signals intelligence fusion, from pattern recognition to predictive modelling, from surveillance support to something approaching tactical recommendation. Each expansion was incremental, each justified by operational necessity, and each moved the programme further from its original, relatively modest ambitions.

The current generation of military AI applications — and Maven is now merely the most prominent among several dozen programmes across the United States, China, Israel, and the United Kingdom — raises questions that the 2018 Google protest never adequately addressed. The engineers who protested were responding to a specific concern about autonomous targeting. But the more profound transformation is not about whether AI pulls the trigger; it is about whether AI increasingly determines what constitutes a target in the first place.

This distinction matters enormously, and it is one that the public discourse has largely failed to grasp. An AI system that identifies a building as a likely weapons storage facility, based on patterns derived from thousands of previous intelligence assessments, is not making a kill decision. But it is shaping the informational environment in which kill decisions are made. It is, in effect, constructing the reality that human commanders then act upon. The human remains in the loop, but the loop itself has been redesigned.

China's approach to military AI, as documented in publications from the People's Liberation Army's Academy of Military Science, proceeds from a fundamentally different philosophical framework. Where American military AI development has been shaped — and in some ways constrained — by public debate about ethics and autonomy, China's programme operates under what PLA strategists call “intelligentised warfare,” a doctrine that treats AI integration not as a supplement to human decision-making but as its logical successor. The implications of this divergence will likely define the military balance of the coming decades.

Israel's experience in Gaza has added another dimension to this global picture, one that has generated fierce controversy. The Israel Defence Forces' use of AI-assisted targeting systems — including a programme reportedly known as “Gospel” — has drawn scrutiny from human rights organisations, which argue that algorithmic targeting in densely populated urban environments creates unacceptable risks of civilian casualties. The IDF maintains that AI has improved targeting precision; its critics contend that precision without proportionality is a distinction without moral meaning.

The deeper question, and the one that Project Maven's journey from controversial experiment to institutional orthodoxy forces us to confront, is whether the integration of artificial intelligence into military operations represents a genuine revolution or merely the latest iteration of a pattern as old as warfare itself. Every significant military technology — from the longbow to the atomic bomb — has been accompanied by claims that it would fundamentally alter the nature of conflict. Some did. Most merely altered its instruments.

What distinguishes the current moment is not the technology itself but the speed of its integration and the breadth of its application. Previous military technologies were typically developed over decades and adopted over generations. AI is being developed in months and deployed in years, across domains that span intelligence, logistics, communications, and — increasingly — combat operations. The institutional capacity to absorb and govern this transformation has not kept pace with the transformation itself.

The Pentagon officials who once doubted Project Maven now speak of artificial intelligence with the fervour of the recently converted. History suggests that the fervour of converts is precisely what should concern us most. The question is not whether AI will transform warfare — it already has. The question is whether the institutions responsible for waging war have developed the wisdom to govern a technology that moves faster than their capacity for reflection.
