

AI Goes to War: Lessons from Operation Absolute Resolve

By Moussa Rahmouni · 10 April 2026 · 45 min read

At 02:14 Eastern Standard Time on January 3, 2026, the first wave of American aircraft crossed into Venezuelan airspace. Seventy-two hours later, Nicolás Maduro was in a restraint chair aboard a C-17 inbound to a jurisdiction that has not yet been officially disclosed, and the Pentagon was releasing a terse statement declaring Operation Absolute Resolve's objectives "complete." Those are the public facts. They are, in most respects, unremarkable by the standards of post-2001 American expeditionary warfare. What was new — what will be studied in war colleges for a decade — was not the ordnance, the tanker bridge, or even the HUMINT network that located the safehouse outside Caracas. What was new was the compression.

Absolute Resolve is the first publicly acknowledged large-scale joint operation in which generative artificial intelligence tools — specifically, large language models, computer vision pipelines derived from Project Maven, and Palantir's intelligence-fusion stack — were integrated into the targeting cycle at multiple levels simultaneously. AI did not fly the F-35Bs off the USS Wasp. AI did not designate the final target. AI did not, by any credible account, release a weapon. But AI was in the room, at the console, and in the analyst's ear, for nearly every decision that mattered. And the cumulative effect of that embedding was to shrink a planning cycle that would have taken seven to ten days under the doctrine of 2015 into a window that closed in under seventy-two hours.

This is an article about that compression, and what it means. It is not a triumphalist account. It is not an indictment. The question of whether Absolute Resolve was a good operation — strategically wise, legally defensible, morally tolerable — is one we will return to near the end, and we will not try to settle it. Our subject is narrower and, for the defense industry and the AI sector that now overlaps with it, more urgent: what does the operational record of Absolute Resolve, as it can be reconstructed from public sources and informed analysis, tell us about how AI is actually being used in American warfighting, who the suppliers are, what the internal politics of those suppliers look like in the spring of 2026, and what the structural consequences are for civilian AI labs, European sovereign defense, and the broader acquisition architecture of the Pentagon?

The thesis, stated plainly up front: AI did not win Operation Absolute Resolve. What AI did was alter the OODA loop — the Observe, Orient, Decide, Act cycle that John Boyd made canonical — in ways that changed the calculus of feasibility for the planners at SOUTHCOM, the Joint Staff, and CENTCOM's observing cell. It also changed the supplier landscape of American defense in ways that will outlast any specific administration. The vendors who were in the tent for Absolute Resolve — Palantir, Anthropic, Scale AI, Anduril, and a shorter list of less-public contractors — now occupy a position analogous to the one Lockheed, Northrop, and Raytheon consolidated during the Reagan build-up. That position is not a product line. It is not even a platform. It is a layer of the kill chain. And whoever owns that layer for the next decade will own the most strategic position in defense technology since the microchip itself.

That is a significant claim, and the sections that follow are an attempt to justify it carefully.

Operational Reconstruction: What We Know, What We Can Infer

Let us begin with the facts as they are currently available in the public record, acknowledging upfront that the after-action report is classified and likely to remain so for at least a decade. What follows is drawn from SOUTHCOM and Department of Defense press releases, open-source intelligence aggregators, reporting by Reuters, the Associated Press, Defense One, Breaking Defense, and Aviation Week, as well as statements made on the record by former officials in the weeks since the operation concluded.

The operation's opening strike package, launched in the final hours of January 2 and the first hours of January 3, involved more than 150 aircraft. The composition, as best as can be reconstructed, included:

  • A flight of F-35B Lightning IIs off the USS Wasp Amphibious Ready Group, which had been quietly repositioned into the southern Caribbean over the Christmas period under the cover of a routine exercise rotation.
  • F-22 Raptors flown out of Eglin Air Force Base in Florida, providing air superiority and, in at least one reported engagement, intercepting a pair of Venezuelan Su-30s over the Gulf of Venezuela.
  • F-15E Strike Eagles from Seymour Johnson, tasked with standoff precision strikes against integrated air defense nodes.
  • B-1B Lancers and, in a rarer tasking, two B-2 Spirits flown from Whiteman, which appear to have been assigned to hardened command-and-control targets in the Caracas basin.
  • MQ-9 Reaper drones flying persistent ISR and, in the later phases, kinetic support.
  • RC-135 Rivet Joint SIGINT platforms, an E-3 Sentry AWACS, and a constellation of KC-46 and KC-135 tankers sustaining the orbit.

The operational window was declared at seventy-two hours. The strategic objective, as articulated by the administration in its after-action briefing on January 6, was "the removal of the Maduro regime as a coherent command authority and the establishment of conditions for a legitimate political transition." The tactical objective, which preceded and enabled the strategic one, was the neutralization of Venezuelan integrated air defenses, the destruction or isolation of key regime command nodes, and — ultimately, at 19:40 EST on January 4, 2026 — the capture of Nicolás Maduro himself at a safehouse approximately forty kilometers outside Caracas.

Casualty figures remain classified. Independent Venezuelan sources have offered estimates ranging from the low hundreds to more than a thousand, figures which cannot be verified at the time of writing. American casualties, per Pentagon statements, were "minimal" — a word that in recent usage has meant fewer than ten KIA, though SOUTHCOM has declined to confirm.

What is striking about the reconstruction is not the aircraft mix, which is conventional for a high-end American strike operation, nor the tempo, which is aggressive but not unprecedented — Operation El Dorado Canyon in 1986 and the opening night of Desert Storm in 1991 were comparably intense within their specific windows. What is striking is the compression of the planning cycle. Multiple former officials have noted, both on and off the record, that the planning window for an operation of this complexity under the pre-AI doctrine would have been measured in weeks — the target folder build alone, for a country with Venezuela's topography, air defense density, and regime dispersal patterns, would conventionally take a full joint targeting cycle of seven to ten days. Absolute Resolve compressed that into something closer to forty-eight hours of active planning, with a preparatory ISR and intelligence-fusion posture that had been running in parallel since the previous autumn.

That compression is the story. And the compression was made possible by AI.

The AI Stack, Vendor by Vendor

Four vendors and one government program office are at the center of the AI stack that the Department of Defense has publicly acknowledged using in Absolute Resolve. Each occupies a distinct position in the targeting cycle, and each carries distinct contractual, political, and ethical baggage. Let us take them in turn.

Project Maven and the Algorithmic Warfare Cross-Functional Team

Project Maven — formally, the Algorithmic Warfare Cross-Functional Team, housed since the 2022 reorganization under the DoD's Chief Digital and Artificial Intelligence Office (CDAO) — is the oldest and, in many ways, the most mature element of the American military AI stack. It was stood up in 2017 under then-Deputy Secretary Robert Work with a deliberately narrow mandate: apply computer vision techniques to full-motion video feeds from ISR platforms in order to automate the labor-intensive task of object detection, classification, and tracking.

The problem Maven was created to solve was a straightforward one of scale. By the mid-2010s, the amount of full-motion video being generated by MQ-1, MQ-9, Gorgon Stare, and other ISR systems had massively outpaced the capacity of human analysts to watch it. Analysts were drowning in footage; the bottleneck was not sensor coverage but the human eyeball hours required to turn pixels into intelligence. Maven's original objective was to use convolutional neural networks to pre-process that footage, flagging vehicles, persons, and activity patterns of interest and presenting them to analysts in a triaged queue rather than as raw hours of low-contrast desert.

The program became politically famous in 2018 when Google, which had been a significant technical contractor on the early Maven pipeline, publicly withdrew from the contract after an internal employee revolt. We will return to that episode in the Anthropic section below, because the 2018 Google walkout is one of the most important data points for understanding the industry posture of 2026. For now, the technical point: Google's withdrawal did not kill Maven. It accelerated a diversification of Maven's technical base across multiple vendors, including Palantir, Clarifai, ECS Federal, and a long tail of smaller CV shops. By 2022, Maven was no longer a single contract; it was a program line, and the underlying CV stack had been hardened, re-engineered, and made more modular.

By 2026, Maven's role in Absolute Resolve was, in the dry phrasing of the CDAO's public materials, "the primary computer-vision support layer for ISR-derived targeting workflows." Translated: when an MQ-9 orbit over the Caracas basin streamed video to a fusion cell, Maven's pipeline was doing the first pass of object detection — identifying vehicles, classifying them by type, tracking them across the frame, and flagging behavioral patterns (rapid departure from a compound, convoy formation, approach to a pre-designated avenue) that analysts had configured as tipping events. The analyst did not watch raw video. The analyst watched a Maven-processed feed with annotations and triaged alerts.

This is not science fiction. It is not autonomous targeting. It is, to be precise, a very efficient assistive layer that sits between the sensor and the human decision-maker. But the operational leverage it provides in a high-tempo seventy-two-hour operation is enormous. Without Maven, a fusion cell watching four simultaneous ISR feeds would need at least four analysts doing nothing but staring at screens, and the effective coverage rate — the percentage of event-of-interest frames actually seen by a human — would degrade sharply after the first few hours of sustained operations. With Maven, the same cell can run eight or twelve feeds through an automated first pass and reserve human attention for the frames that matter.
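To make the pattern concrete, here is a deliberately minimal sketch of an assistive first pass of this kind. The rule names, thresholds, and data structures are invented for illustration and bear no relationship to the actual Maven pipeline; the point is the shape of the workflow, in which only rule-tripping frames reach a human.

```python
# Hypothetical sketch of an assistive CV first pass: detections from an
# upstream object detector are filtered against analyst-configured
# "tipping event" rules, and only rule-tripping frames reach a human.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    feed_id: str
    obj_class: str        # e.g. "vehicle", "person"
    confidence: float     # detector confidence, 0..1
    speed_mps: float = 0.0

@dataclass
class TippingRule:
    name: str
    obj_class: str
    min_confidence: float
    min_speed_mps: float = 0.0

    def matches(self, d: Detection) -> bool:
        return (d.obj_class == self.obj_class
                and d.confidence >= self.min_confidence
                and d.speed_mps >= self.min_speed_mps)

def triage(detections, rules):
    """Return (rule_name, detection) pairs that trip a rule, highest
    confidence first, as the queue presented to the analyst."""
    alerts = [(r.name, d) for d in detections for r in rules if r.matches(d)]
    return sorted(alerts, key=lambda a: a[1].confidence, reverse=True)

rules = [TippingRule("rapid_departure", "vehicle", min_confidence=0.8, min_speed_mps=15.0)]
frames = [
    Detection(101, "feed-A", "vehicle", 0.93, speed_mps=22.0),  # trips the rule
    Detection(102, "feed-A", "vehicle", 0.91, speed_mps=3.0),   # parked: filtered out
    Detection(103, "feed-B", "person", 0.88),                   # no matching rule
]
for name, det in triage(frames, rules):
    print(f"ALERT {name}: frame {det.frame_id} on {det.feed_id} ({det.confidence:.2f})")
```

The leverage is entirely in the filter: the analyst's attention is spent on the one frame in a thousand that trips a rule, not on the raw stream.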

"Maven doesn't make the decision. Maven makes sure the decision gets made. The difference between a targeting cell with Maven and one without it, at tempo, is the difference between a controlled burn and a forest fire. Same amount of heat, very different blast radius." — Former CDAO official, speaking on background to Defense One, January 2026

Palantir Gotham and MetaConstellation

If Maven is the eyes, Palantir's Gotham platform is the nervous system. Palantir has been a Pentagon vendor for more than a decade, and its Gotham product — originally built for intelligence analyst workflows in the 2000s — has been the incumbent intelligence-fusion layer across multiple combatant commands for years. In 2021, Palantir introduced a complementary product it branded MetaConstellation, aimed specifically at the problem of aggregating multi-source feeds (commercial satellite, SIGINT, HUMINT reports, Maven-processed ISR, OSINT) into what Palantir's marketing material has called "a single pane of glass" for operational decision-makers.

The single-pane-of-glass claim has been controversial, partly because it is a metaphor that flatters the software more than the underlying reality of intelligence fusion, and partly because Palantir has made it in several different forms over several different product generations. When one speaks to analysts who have actually used Gotham in an operational setting, the reviews are more mixed than the marketing. Gotham is powerful. Its link-analysis capabilities — the ability to assemble entity graphs from disparate data sources and surface relationships that would take a human analyst days to discover — are genuinely best-in-class. Its data-ingestion pipes are robust. Its user interface, while not beloved, is mature.

What Gotham has historically been less good at is velocity. The kind of deliberate, weeks-long target development for which CENTCOM targeting cells used it in the mid-2010s is not the same problem as supporting a high-tempo seventy-two-hour kinetic operation. Palantir's engineers have known this for years, and a significant portion of the MetaConstellation product development effort has gone into reducing the latency between data ingestion and analyst visibility. By the time Absolute Resolve was launched, MetaConstellation was being used at SOUTHCOM in something close to a live-operations configuration — not just a deliberate-planning tool but an active-tempo fusion layer.

The specific function that MetaConstellation appears to have performed during Absolute Resolve, based on briefings to members of Congress and subsequent reporting, was target deconfliction. That is, as multiple targeting streams (SIGINT-derived, IMINT-derived, HUMINT-derived) fed into the joint targeting cell, MetaConstellation was the layer that ensured a single physical object in the real world — a particular building, a particular vehicle, a particular individual — was not being tracked, targeted, or struck by two separate processes with inconsistent data. This is an unglamorous function. It is also the kind of function whose failure causes catastrophes: friendly fire, double-tapping, collateral damage from redundant strikes on targets already serviced. In a seventy-two-hour window with 150+ aircraft and multiple ground elements, the coordination load is precisely the kind of problem that a well-instrumented fusion platform can handle better than a room full of humans with whiteboards.
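The deconfliction function is easier to appreciate with a toy example. The sketch below is illustrative only, with no relationship to Palantir's implementation: tracks arriving from separate intelligence streams are resolved to a crude entity key, and any entity claimed by more than one targeting process is flagged before tasking.

```python
# Illustrative-only sketch of cross-stream target deconfliction. Real
# entity resolution is probabilistic and far richer than this rounding
# trick; the point is the invariant, one physical object per tasking.
from collections import defaultdict

def resolve_key(track):
    # Bucket by rounded location and object kind (~100 m grid at this rounding).
    return (round(track["lat"], 3), round(track["lon"], 3), track["kind"])

def deconflict(tracks):
    """Return entities claimed by more than one stream, with claimants."""
    claims = defaultdict(set)
    for t in tracks:
        claims[resolve_key(t)].add(t["stream"])
    return {k: sorted(v) for k, v in claims.items() if len(v) > 1}

tracks = [
    {"stream": "SIGINT", "lat": 10.4806, "lon": -66.9036, "kind": "vehicle"},
    {"stream": "IMINT",  "lat": 10.4807, "lon": -66.9036, "kind": "vehicle"},  # same object
    {"stream": "HUMINT", "lat": 10.6500, "lon": -71.6100, "kind": "building"},
]
for entity, streams in deconflict(tracks).items():
    print(f"DECONFLICT {entity}: claimed by {streams}")
```

In the toy case above, the SIGINT and IMINT tracks collapse to one entity and are flagged; at operational scale, that collapse is what prevents two processes from independently servicing the same building.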

Anthropic Claude: A Limited Deployment, A Loud Controversy

The third element of the publicly acknowledged AI stack is the one that has generated by far the most public controversy in the three months since the operation: the limited deployment of Anthropic's Claude large language model for a set of analyst-assist tasks within the Department of Defense's intelligence architecture.

The contractual details matter enormously here, and they have been obscured in most of the popular coverage. Anthropic's arrangement with the DoD, which was first disclosed in 2024 and expanded in late 2025, is not a blanket military-use license. It is a specific, carved-out deployment under terms that explicitly prohibit use in what Anthropic's Acceptable Use Policy calls "targeting or engagement decisions" — that is, any process that directly generates or authorizes the application of kinetic force to a specific object or person. The permitted use cases, under the public version of the contract, are:

  1. Open-source intelligence (OSINT) synthesis — summarizing large volumes of foreign-language social media, press, and public documents.
  2. Document triage — assisting analysts in prioritizing unclassified and controlled-unclassified documents within a queue.
  3. Analyst support — drafting, summarizing, and reformulating analyst products for intelligence consumers.
  4. Red-teaming and counter-deception exercises in training environments.

What Claude was not contractually permitted to do, during Absolute Resolve or at any time, was touch a targeting workflow, recommend a strike, or generate any output that flowed into a weapons-release decision. The contractual carve-outs are genuine, they are technically enforced at the deployment layer (via specific API surface restrictions and a classified version of Claude that is deployed in a Department of Defense enclave with auditing), and Anthropic's leadership has stated publicly that violations of those terms would trigger termination of the contract.
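What "enforced at the deployment layer" plausibly means, in the simplest terms, is a gateway that exposes only the permitted task types and audits every call. The sketch below is a hypothetical illustration of that pattern, not a description of the actual DoD enclave architecture; the task names simply mirror the four permitted use cases listed above.

```python
# Hypothetical illustration of enforcement at the deployment layer: a
# gateway that exposes only the contractually permitted task types and
# writes an audit record for every call, allowed or not. Nothing here
# reflects the actual enclave architecture.
import datetime
import json

PERMITTED_TASKS = {"osint_synthesis", "document_triage", "analyst_drafting", "red_team_training"}

def gated_call(task: str, payload: str, audit_log: list) -> str:
    record = {"ts": datetime.datetime.now(datetime.timezone.utc).isoformat(), "task": task}
    if task not in PERMITTED_TASKS:
        record["result"] = "REJECTED"
        audit_log.append(json.dumps(record))
        raise PermissionError(f"task '{task}' is outside the contracted API surface")
    record["result"] = "FORWARDED"
    audit_log.append(json.dumps(record))
    return f"[model response for task: {task}]"  # stand-in for the actual model call

log = []
print(gated_call("osint_synthesis", "summarize these posts", log))
try:
    gated_call("target_nomination", "rank these aimpoints", log)  # prohibited category
except PermissionError as err:
    print("blocked:", err)
```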

None of which has prevented the controversy.

The reason is that, as multiple reporters including those at the Washington Post and the New York Times have pointed out, the distinction between "not in the kill chain" and "in the kill chain" becomes philosophically fuzzy in a fused intelligence environment. If Claude is summarizing foreign-language intercepts, and those summaries are read by an analyst, and that analyst is in a fusion cell that is also working on target development, is Claude in the kill chain? Anthropic's answer is that the system is auditable and the decisions are traceable, and that the question is not whether Claude's output ever reaches a human who is working on targeting but whether Claude's output is a material input to a targeting decision. That is a defensible position. It is also one that a substantial fraction of Anthropic's own engineering workforce has declined to accept.

In the first week of February 2026, an internal letter was circulated within Anthropic protesting the use of Claude "in any combat-adjacent context." The letter, which was eventually signed by approximately 250 employees — a meaningful fraction of Anthropic's technical staff, though not a majority — cited the company's published responsible-use policy and argued that the carve-outs in the DoD contract were insufficient to prevent "the normalization of AI models as infrastructure for organized violence." The letter was leaked to a reporter at The Information within a week of its internal circulation.

"We did not join this company to build tools that are deployed, under any configuration, in support of military operations that result in the deaths of foreign nationals. The technical distinction between 'targeting' and 'analyst support in a targeting environment' is, in practice, a distinction without a difference." — Excerpt from the Anthropic employee letter, as reported by The Information, February 2026

The response from Anthropic's leadership was delivered in two forms. First, an internal all-hands meeting, the transcript of which has not been leaked but whose tenor has been reported. Second, a public statement by CEO Dario Amodei, issued several days after the letter became public.

"We believe that the responsible course for a frontier AI company is neither to refuse all engagement with the defense establishment of a democratic state nor to accept any use the government may wish to make of our systems. The middle path is difficult. It requires active, contractual, technical, and moral negotiation over specific use cases. We are committed to that negotiation. We also believe — and we understand that reasonable people within our company and outside of it disagree — that withdrawal from this conversation would leave the field to actors whose commitment to the responsible use of AI is significantly less stringent than ours." — Dario Amodei, public statement, February 2026

It is worth noting the contrast with the public response from OpenAI's Sam Altman, whose company has pursued a different but parallel path of military engagement since the quiet removal of the blanket "no military use" language from OpenAI's usage policies in early 2024. Altman's statement, delivered in a podcast interview rather than a press release, was notably more bullish in framing:

"The question isn't whether AI is going to be used in defense. That question was settled a long time ago, probably by the second week of ChatGPT's existence. The question is which companies are going to be in the room when the rules get written, and whether those rules are going to be shaped by people who care about the long-run trajectory of this technology or by people who don't. I know which room I want to be in." — Sam Altman, podcast interview, February 2026

Scale AI, Anduril, and the Tail of the Stack

Beyond the three named vendors, there is a longer tail of companies whose involvement in Absolute Resolve is either publicly acknowledged in more limited form or inferred from industry structure. Scale AI has been the dominant data-labeling pipeline vendor for Project Maven for several years, and while Scale does not itself operate in the kill chain in any direct sense, the Maven computer vision models that processed ISR video during Absolute Resolve were almost certainly trained — at least in part — on data that Scale's human labeling workforce had curated. Anduril, the defense-native hardware and software company that Palmer Luckey founded after his departure from Oculus and that has grown into one of the fastest-expanding Pentagon vendors of the 2020s, is a significant player in the autonomous systems space and is rumored to have provided tactical edge inference capabilities to ISR platforms during the operation, though this has not been confirmed. Shield AI and Helsing — the former American, the latter European — occupy adjacent territory, though only Shield AI has the kind of existing American contractual relationships that would make its involvement in Absolute Resolve plausible.

The question of edge inference — the ability to run AI models on the aircraft, the drone, the forward-deployed node, rather than streaming data back to a data center — is one of the most technically interesting questions about the future of the stack, and one of the least well-documented in the public record of Absolute Resolve. What is clear is that the latency budget of a seventy-two-hour operation does not tolerate a CONUS round-trip for every inference call. What is less clear is precisely how much of the Maven processing, the Gotham queries, or the Claude analyst-assist was happening in-theater versus at reach-back facilities in Tampa, Langley, and Maryland. The likely answer is "most of it was reach-back, a growing fraction was edge, and the balance is shifting."
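The latency arithmetic is worth making explicit, because it explains why the balance shifts. The figures below are assumptions chosen for illustration, roughly 3,000 km of one-way fiber from the southern Caribbean to a CONUS reach-back site, a flat routing overhead, and notional per-call inference times; none of them are drawn from the operational record.

```python
# Back-of-envelope latency budget, under loudly labeled assumptions.
FIBER_KM = 3_000                 # assumed one-way route distance
LIGHT_IN_FIBER_KM_S = 200_000    # light in fiber, ~2/3 of c
OVERHEAD_MS = 40                 # assumed routing/queuing overhead per round trip
INFERENCE_MS = {"maven_cv_frame": 50, "fusion_query": 300, "llm_summary": 4_000}

rtt_ms = 2 * FIBER_KM / LIGHT_IN_FIBER_KM_S * 1_000 + OVERHEAD_MS  # ~70 ms
for task, infer in INFERENCE_MS.items():
    print(f"{task:15s} reach-back {rtt_ms + infer:6.0f} ms | edge {infer:6.0f} ms")

# A ~70 ms round trip is trivial for one LLM query and ruinous for a
# per-frame CV pipeline making thousands of calls per minute, which is
# why the first pass migrates to the edge before the analyst layer does.
```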

What "AI in the Kill Chain" Actually Means

One of the most common errors in public discussion of military AI is to collapse the targeting cycle into a single black box. "AI in the kill chain" is sometimes used to mean "the computer decides who dies." That is not what it means in any publicly documented American military application as of the spring of 2026, and it was not what it meant in Absolute Resolve. Let us be precise.

The targeting cycle — in American joint doctrine, the "kill chain," sometimes articulated as Find, Fix, Track, Target, Engage, Assess (F2T2EA), and in other articulations as the OODA loop — is a layered process. Each layer is a distinct technical and organizational problem. AI tools sit at specific layers, and their authority and autonomy varies by layer.

| Kill-chain layer | Function | Role of AI in 2026 | Human authority |
| --- | --- | --- | --- |
| Sensing | Collecting raw data from sensors (EO/IR, SIGINT, radar, OSINT) | Sensor scheduling, automated collection tasking | Human confirms collection priorities |
| Processing | Converting raw sensor data into structured signals | Heavy (Maven CV, automated SIGINT triage, LLM OSINT synthesis) | Human reviews outputs |
| Fusion | Correlating signals across sources to build a common operating picture | Heavy (Palantir Gotham/MetaConstellation entity resolution) | Human interprets fused picture |
| Decision support | Generating options, courses of action, target nominations | Moderate, growing (automated target nomination, COA generation) | Human selects options |
| Authorization | Approving a specific engagement against a specific target | None permitted under current policy | Human alone, at appropriate command level |
| Weapon release | Physical application of kinetic effect | None permitted; weapons release requires human action | Human alone |
| Assessment | Determining whether the intended effect was achieved | Growing (automated BDA from post-strike imagery) | Human confirms assessment |

The policy that governs this distribution of authority, most recently restated and updated in the DoD's Directive 3000.09 on Autonomy in Weapon Systems and in the CDAO's responsible AI guidelines, is that a human remains responsible for every weapon release decision. This is sometimes described as "a human in the loop" or, more precisely in contemporary doctrinal language, "appropriate levels of human judgment over the use of force." The directive does not require that AI be absent from every layer. It requires that for any specific engagement, an accountable human authority — at the appropriate level of the command hierarchy — makes and can be held responsible for the decision to engage.
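The distribution of authority in the table above, and the constraint the directive imposes, can be stated compactly in code. The sketch below is schematic, not a description of any real system: the point is that the engagement step cannot execute without an explicit authorization object naming an accountable human at a stated command level.

```python
# Schematic encoding of the layered-authority table and the 3000.09
# constraint. A sketch of the policy's logic only.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AIRole(Enum):
    NONE = "none permitted"
    MODERATE = "moderate, growing"
    HEAVY = "heavy"

KILL_CHAIN_AI = {
    "sensing": AIRole.MODERATE, "processing": AIRole.HEAVY,
    "fusion": AIRole.HEAVY, "decision_support": AIRole.MODERATE,
    "authorization": AIRole.NONE, "weapon_release": AIRole.NONE,
    "assessment": AIRole.MODERATE,
}

@dataclass(frozen=True)
class HumanAuthorization:
    authority: str       # the accountable human, by billet
    target_id: str
    command_level: str

def engage(target_id: str, auth: Optional[HumanAuthorization]) -> str:
    """The gate the directive requires: no human authorization, no release."""
    assert KILL_CHAIN_AI["weapon_release"] is AIRole.NONE
    if auth is None or auth.target_id != target_id:
        raise PermissionError("no accountable human authorization for this target")
    return f"release approved for {target_id} by {auth.authority} ({auth.command_level})"

print(engage("TGT-0042", HumanAuthorization("TST cell chief", "TGT-0042", "JTF")))
```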

That policy held during Absolute Resolve. That is the DoD's position and it is, as far as the public record allows us to assess, accurate. No weapon was released on the basis of an AI-generated decision in the absence of a human authorization at the appropriate command level. The F-35B pilot over the Caracas basin was the one who pulled the trigger. The JTAC on the ground was the one who called the strike. The TST (time-sensitive targeting) cell officer was the one who approved the nomination. AI did not decide. AI shaped the space within which the decision was made.

Whether that distinction is adequate — whether the shaping of the decision space by automated tools is a meaningfully different thing from the automation of the decision itself — is the ethical question that the Anthropic employee letter, and a great deal of serious academic work in the last several years, is genuinely and not rhetorically asking. We will not resolve it here. But we can be clear about what the technical situation actually is, because most of the public conversation is not.

"The question of whether the human is 'in the loop' gets easier to answer and harder to mean anything by, the faster the loop goes. If the loop is a week long, the human is thinking. If the loop is seven minutes long, the human is ratifying a recommendation that the system has already effectively made. We have not crossed that line in U.S. operations. We are close enough to it that we should be talking about where exactly the line is." — Paul Scharre, Center for a New American Security, speaking to the War on the Rocks podcast, February 2026

The DoD–Anthropic Conflict in Historical Context

The Anthropic employee letter of February 2026 is not an isolated event. It is the most recent entry in a roughly decade-long series of internal revolts within civilian technology companies over defense contracts. Understanding where it fits in that sequence is essential to assessing what it does and does not signal about the future of AI lab–Pentagon relations.

The canonical prior event is the 2018 Google Maven walkout. In April 2018, approximately 3,000 Google employees signed a letter protesting the company's participation in Project Maven. Within two months, senior leadership announced that Google would not renew the Maven contract when it expired in 2019. Sundar Pichai subsequently published a set of AI principles that included an explicit commitment not to develop "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."

The Google walkout was, at the time, interpreted in two very different ways. The first interpretation, popular in Silicon Valley, was that it marked a new norm: civilian tech companies could and would draw bright lines against military work, and the Pentagon would have to either adapt to those lines or build its own parallel technology base. The second interpretation, popular in Washington and within the defense industrial base, was more cynical: Google had lost a contract, the Pentagon had found other vendors within months, and the norm would not generalize because the economic logic did not favor it.

The second interpretation has proven, over the intervening eight years, to be more accurate. Between 2018 and 2026, several things happened that changed the industry posture:

First, the amount of money flowing from the Pentagon into AI-relevant contracting increased by roughly three orders of magnitude. The 2018 Maven contract was valued at roughly $9 million per year; the 2026 Pentagon AI-related portfolio, as we will detail in the next section, is approximately $13.4 billion. That comparison sets a single contract against an entire portfolio, but the direction of travel is unambiguous. At that scale, the question is no longer whether to participate but on what terms.
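The arithmetic behind that claim, using the two figures as given:

```python
# The scale shift, using the two figures as given in the text.
maven_2018_per_year = 9e6     # 2018 Maven contract, dollars per year
fy26_ai_portfolio = 13.4e9    # FY26 AI-related portfolio, dollars
print(f"ratio: {fy26_ai_portfolio / maven_2018_per_year:,.0f}x")  # ~1,489x
```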

Second, the geopolitical environment changed. The Russian invasion of Ukraine in 2022 made the argument for Western technological advantage in defense harder to dismiss as militaristic. The subsequent proliferation of Chinese AI capabilities, particularly in the wake of the 2023–2024 frontier model releases from Chinese labs, made the argument for American technological advantage, specifically, harder to dismiss as jingoistic. The 2024 Israeli use of AI-assisted targeting in Gaza operations changed the moral conversation again, this time in a direction that was uncomfortable for everyone.

Third, the labor market for AI talent tightened, shifted, and then tightened again. The frontier AI labs — OpenAI, Anthropic, Google DeepMind, xAI — became the most sought-after employers of elite machine learning talent in history, and the cost structure of retaining that talent made every revenue stream consequential. Defense revenue, which had once been a rounding error for these companies, became a material contributor to gross margins.

Fourth, and perhaps most importantly, the policy frameworks evolved. In January 2024, OpenAI quietly removed from its usage policies the blanket prohibition on "military and warfare" uses and replaced it with more carefully crafted language prohibiting the use of its models to "develop or use weapons" or to "injure others or destroy property." The change was widely noted. The substantive significance was that it opened the door to the kind of non-kinetic military uses — cybersecurity, logistics, analysis, training — that the Pentagon was most actively procuring.

Anthropic's posture has been more cautious and more explicit. The company has maintained, publicly, that it does not allow its models to be used in weapons-targeting or autonomous weapons contexts, and its contractual deployments with the Department of Defense have been structured around exactly those carve-outs. This caution has not, however, prevented Anthropic from becoming a Pentagon vendor. It has merely structured the relationship along specific dimensions.

The 2026 employee letter is therefore best understood not as a replay of the 2018 Google walkout but as something different and, in some ways, more consequential. The 2018 letter was written by employees who believed that a bright line was possible and that their company could credibly stand behind it. The 2026 letter is written by employees who believe that their company has crossed a line that it had promised not to cross, and the question is not whether to participate in defense work at all but whether the specific carve-outs of the specific contract are adequate to the specific ethical commitments the company has made.

This is a significant shift. It acknowledges, implicitly, that the 2018 posture is no longer available. The question is not "will frontier AI labs work with the Pentagon?" The question is "on what terms?" And the answer to that question is now being negotiated, contract by contract, use case by use case, in a conversation that has moved from Silicon Valley op-eds to classified conference rooms in Arlington.

"The 2018 Google moment told the Pentagon it had a problem. The 2026 Anthropic moment tells us the problem has been solved, in the sense that the solution has been defined. The solution is not 'don't do defense work.' The solution is 'do defense work with carve-outs that your engineering workforce will accept and that your general counsel can defend.' Everyone now knows that's the solution. The hard part is that those carve-outs are a negotiation, and the negotiation never ends." — Senior defense industry analyst, speaking on background to Breaking Defense, March 2026

The $13.4 Billion Question: Pentagon FY26 AI Budget

The Department of Defense's fiscal year 2026 budget request, as submitted to Congress in March 2025 and subsequently enacted with modifications, includes approximately $13.4 billion in AI-related funding lines. The precise figure varies depending on how one counts — whether one includes only the explicit AI program lines or whether one includes the AI-enabled portions of larger programs like JADC2 (Joint All-Domain Command and Control) and Replicator. The figures below use the broader counting, which is consistent with how the CDAO itself reports the portfolio.

The distribution of that budget across program lines tells a story about where the Pentagon thinks AI capability is going to come from and what it expects the money to buy.

| Program line | FY26 allocation (approximate) | Function | Primary vendors |
| --- | --- | --- | --- |
| CDAO operating budget and central AI services | $1.8 billion | AI infrastructure, policy, cross-cutting services, JAIC successor | Multiple; in-house |
| Project Maven / AWCFT sustainment and expansion | $1.2 billion | Full-motion video CV, ISR processing | Palantir, Scale AI, ECS Federal, Clarifai, others |
| Replicator (autonomous systems at scale) | $1.0 billion | Mass production of attritable autonomous platforms | Anduril, Shield AI, Saildrone, multiple |
| JADC2 AI components | $2.4 billion | Cross-domain command and control, fusion | Palantir, Lockheed, Northrop, Anduril |
| AIP-equivalent and LLM procurement | $900 million | Analyst assist, document triage, planning support | Palantir (AIP), Anthropic, OpenAI, Scale AI |
| Autonomy in weapons systems (loitering munitions, etc.) | $1.6 billion | Long-range kill chain autonomy | Anduril, AeroVironment, others |
| AI-enabled logistics and sustainment | $1.1 billion | Predictive maintenance, supply chain optimization | Palantir, IBM, others |
| Cyber AI / defensive AI | $1.4 billion | Autonomous cyber defense, threat hunting | Multiple |
| Research and S&T (DARPA, service labs) | $1.3 billion | Long-horizon capability development | Academic, startups, primes |
| Workforce, training, and miscellaneous | $700 million | Talent, education, integration | Multiple |
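One small but useful check: the ten lines tabulated above sum exactly to the headline figure, and the LLM procurement line that matters most to the frontier labs is a modest share of the whole.

```python
# The ten program lines above, in billions of dollars.
lines = {
    "CDAO": 1.8, "Maven": 1.2, "Replicator": 1.0, "JADC2 AI": 2.4,
    "LLM procurement": 0.9, "weapons autonomy": 1.6, "logistics": 1.1,
    "cyber AI": 1.4, "R&D/S&T": 1.3, "workforce": 0.7,
}
total = sum(lines.values())
print(f"total: ${total:.1f}B")                                # $13.4B
print(f"LLM share: {lines['LLM procurement'] / total:.1%}")   # ~6.7%
```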

Several observations are worth drawing from this distribution.

First, the money is concentrated. While the explicit vendor list runs into the hundreds, the effective allocation is dominated by a small number of companies. Palantir is a material participant in at least five of the nine substantive lines — CDAO services, Maven, JADC2, LLM procurement, and logistics. Anduril is a material participant in three. Anthropic and OpenAI are in one each (LLM procurement). Scale AI is in at least two. This concentration is not because the Pentagon has chosen winners in the antitrust sense. It is because the set of companies capable of operating at the required combination of technical sophistication, security clearance posture, contracting compliance, and delivery velocity is small, and it is smaller than the list of companies capable of just one of those things.

Second, the acquisition pathways are unconventional. A large fraction of the AI portfolio is being executed through Other Transaction Authorities (OTAs), Commercial Solutions Openings (CSOs), and the various rapid-procurement pathways that Congress has opened up over the past decade specifically to accelerate defense technology adoption. These pathways bypass much of the traditional Federal Acquisition Regulation apparatus and allow the Pentagon to contract with commercial companies on something closer to commercial timelines. This has been, and continues to be, both a strategic advantage and a political vulnerability — strategic because it allows the Pentagon to actually move at the speed of technology, and politically vulnerable because it concentrates spending authority in program offices that are lightly staffed, heavily pressured, and not always well-audited.

Third, the budget structure implies a theory of where capability comes from. The large allocations to JADC2 and Maven reflect a bet that the highest-leverage AI investments are not in building new foundation models — those are being built for free, from the Pentagon's perspective, by the commercial AI sector — but in integrating those models into command and control architectures that can actually execute the multi-domain operations that American doctrine now calls for. The Pentagon does not need to build its own GPT-5 equivalent. It needs to be able to plug GPT-5, Claude, or whatever comes next into a command architecture that is itself the long-pole capability.

Fourth, and perhaps most importantly for the question of civilian AI labs, the LLM procurement line of approximately $900 million is significantly smaller than the total AI portfolio but represents the single largest concentrated opportunity for the frontier labs. That $900 million is split across Palantir's AIP product, Anthropic's DoD deployment, OpenAI's government offerings, and a few smaller players. For Anthropic, in particular, that line is large enough to be strategically material — plausibly in the range of 5 to 10 percent of the company's projected 2026 revenue, depending on how one counts — and small enough that the company cannot treat defense revenue as the core of its business. That combination of materiality and marginality is precisely the position that makes the internal politics of the Anthropic employee letter so difficult. The contract is big enough to matter and small enough that reasonable people can argue it is not worth the cost.

Civilian Dual-Use Companies: Four Strategic Paths

For any civilian AI lab or data infrastructure company of meaningful scale, the question of how to relate to defense spending is no longer theoretical. It is a strategic decision that will define the company's brand, its talent pool, its capital structure, and in some cases its long-run viability. Four distinct paths have emerged in the 2024–2026 period, each with its own economic logic, political posture, and institutional costs.

| Path | Archetype | Defense posture | Commercial brand | Talent implication |
| --- | --- | --- | --- | --- |
| Defense-native | Anduril, Shield AI | Unapologetic primary mission | Not relevant; no consumer exposure | Self-selected talent pool; no friction |
| Commercial + defense, integrated | Palantir | Dual track, integrated brand | "We work with the West, loudly" | Some friction; mostly filtered at hiring |
| Commercial-led with constrained defense | Anthropic, OpenAI (partial) | Structured carve-outs | "Responsible; engaged, not captured" | High friction; letters, internal debates |
| Sovereignty-first European | Mistral, Helsing (partial) | Ambiguous, nationally flavored | "European alternative" | Political sensitivity; depends on capital |

Let us take each of these in turn.

The Anduril Path

The defense-native path is the simplest in terms of internal politics and the most demanding in terms of external positioning. A company like Anduril, founded explicitly to be a defense technology firm, never had to negotiate with a commercial user base that might object to its primary mission. Its talent pool self-selects. Its brand is, in a meaningful sense, its mission. The cost of this path is that the total addressable market is constrained to what the defense customer, and a small set of adjacent government customers, will pay for. The reward is that the internal friction is close to zero. No one at Anduril writes a letter protesting the company's participation in defense work because the entire point of working at Anduril is to participate in defense work.

This path has become substantially more economically viable in the 2020s than it was in the 2000s, for two reasons. First, the amount of defense spending flowing to non-traditional vendors has grown enormously. Second, the willingness of high-quality technical talent to work on defense problems has also grown, partly because the political environment has shifted and partly because the absolute compensation available at defense-native startups has become competitive with frontier commercial compensation.

The Palantir Path

The commercial-plus-defense-integrated path is what Palantir has spent twenty years building. Palantir's distinctive feature is that it does not treat its defense work as a separate business unit that needs to be isolated from its commercial brand. It treats the defense work as constitutive of what the company is, and it makes its commercial customers — in finance, healthcare, manufacturing — aware of that. Palantir's leadership, particularly its CEO Alex Karp, has been notably willing to make loud public arguments for the moral legitimacy of Western defense technology work, and to frame Palantir's participation in that work as a feature rather than a liability.

The economic logic of this path is that it allows the company to harvest both revenue streams without having to structurally separate them, and it allows the defense work to serve as a proof-of-capability reference for the commercial work (if the company can handle classified intelligence fusion at SOUTHCOM, it can handle enterprise data fusion at a multinational bank). The cost of this path is that it filters both the talent pool and the commercial customer base. Some technical talent will not join Palantir because of the defense work. Some commercial customers will not buy from Palantir for the same reason. Palantir's bet is that the talent and customers who remain are a better fit for the company than the ones who are filtered out, and that the filtering itself is a source of internal coherence.

The Anthropic Path

The commercial-led path with constrained defense engagement is the hardest of the four to execute, and it is the path that Anthropic has chosen. The strategic bet underlying this path is that Anthropic can be a frontier commercial AI company — whose core revenue comes from enterprise and API customers, whose brand is oriented around safety and responsibility, and whose talent pool is drawn from the most sought-after segment of the AI research community — while also maintaining a structured, carved-out relationship with the Department of Defense that generates revenue, maintains institutional relationships, and allows the company to be "in the room" when policy is being shaped.

The difficulty of this path is that the carve-outs have to be real, they have to be technically enforceable, they have to be contractually durable, and they have to be explicable to an engineering workforce whose moral commitments are part of why they joined the company in the first place. When any of those conditions fails — when a carve-out turns out to be porous, or when a contract renegotiation pushes the terms in an uncomfortable direction, or when an operation like Absolute Resolve brings the underlying reality of military engagement into public view — the internal politics become very difficult very quickly.

The February 2026 employee letter is the most visible manifestation of that difficulty, but it is not the first and will not be the last. A company on the Anthropic path has to treat the negotiation between its commercial mission and its defense engagement as an ongoing, never-completed process rather than as a one-time policy decision. Every new contract, every new use case, every new operation is another iteration of the negotiation.

The Mistral / European Path

The sovereignty-first European path is the most politically interesting of the four and the least well-defined. Mistral, the French frontier AI lab that has positioned itself as the European alternative to the American labs, has been notably careful about its defense posture. It has not explicitly refused defense work — that would be incompatible with the French government's interest in European strategic autonomy — but it has also not publicly embraced defense contracts in the way that Palantir or Anthropic have.

Helsing, the Munich-based German defense AI firm, is closer to the Anduril archetype in its core positioning, but its European context changes its politics significantly. European talent pools are generally more skeptical of defense work than American ones, European public opinion is generally more skeptical of military applications of new technology, and the European regulatory environment — particularly the EU AI Act — imposes constraints that do not exist on the American side.

What all of the European players have in common is that they are operating in a defense spending environment that is, for the first time in a generation, genuinely growing. The post-Ukraine European commitment to increase defense spending to and beyond 2 percent of GDP has meaningfully expanded the addressable market for European defense technology firms. Whether the European AI labs can convert that expanded spending into a viable European defense AI stack is one of the open strategic questions of the second half of the 2020s.

Lessons for European Industry and Sovereignty

If Operation Absolute Resolve demonstrated anything about the structural position of European defense technology, it is that Europe is both structurally late and strategically essential. The aircraft over Caracas were American. The AI stack supporting them was American. The intelligence fusion layer was American. The data labeling pipelines were American. The LLM analyst assist was American. There was no European equivalent to Palantir in the tent, no European equivalent to Anthropic in the analyst workflow, no European equivalent to Anduril in the autonomous systems layer. Helsing, the most credible European candidate, was not — and would not have been — invited to an American combatant command's joint targeting cell for an operation of Absolute Resolve's sensitivity.

This is not a failure of European technology. It is a consequence of thirty years of European defense underinvestment, the absence of a unified European defense procurement architecture, and the historical reality that the American defense industrial base operates at a scale no individual European country can match. The question for Europe in 2026 is not whether it can compete with the American defense AI stack on American terms. It cannot. The question is whether it can build a European defense AI stack that is adequate to European strategic needs on European terms.

Several things make this more possible in 2026 than it was in 2016.

First, the political acceptance question has shifted. Before 2022, European publics and European technical talent pools were both significantly more hostile to defense work than their American counterparts. After Ukraine, and particularly after the sustained Russian pressure on Baltic and Nordic states that characterized 2023–2025, that hostility has softened. It has not disappeared, but it has softened enough that credible European defense AI firms can now hire at scales that would have been impossible a decade ago.

Second, the capital is available. The EU Defense Fund, the European Peace Facility, the national defense budgets of France, Germany, Poland, the United Kingdom (outside the EU but still part of the European industrial ecosystem), and the Nordic states have all expanded. The total pool of European defense spending is now substantial enough to support a meaningful European defense technology sector, provided the procurement mechanisms can actually direct that spending toward non-traditional vendors.

Third, the European Defense Agency and the European Commission have both, in the 2024–2026 period, made defense AI a stated priority. Whether the stated priority translates into actual procurement is a separate question, but the political architecture is at least nominally in place.

Fourth, the sovereignty argument has become more persuasive. The American AI stack is excellent. It is also American, and that carries with it a set of dependencies — political, technical, and sometimes contractual — that European governments are increasingly uncomfortable accepting for the most sensitive capabilities. A European government that wants to run an autonomous ISR operation over European territory is, in 2026, genuinely ambivalent about doing so on an American cloud running American models interpreted by American-designed fusion layers. The sovereignty argument does not mean Europe will build its own frontier LLM. It may mean that Europe will build its own sovereign fusion and targeting layer that can consume American models as commodities while keeping the decision-supporting architecture in European hands.

The most interesting European play of the next three years is probably not going to be a European attempt to match Anthropic or OpenAI at the frontier. It is going to be a European attempt to match Palantir at the fusion layer, with Helsing and one or two other firms as the leading candidates. Whether that attempt succeeds depends on whether Europe can do something it has historically struggled to do: make a large, concentrated, cross-border procurement decision and stick to it against the gravitational pull of national industrial champions.

Geopolitical Aftermath: Venezuela in April 2026

Three months after the operation, Venezuela is in a state that most observers are reluctant to characterize with any single term. There is an interim authority, headed by a figure whose political legitimacy is still being contested inside the country and outside it. There is a substantial American military and civil-affairs presence, concentrated around Caracas, Maracaibo, and the oil infrastructure of the eastern Orinoco basin. There is a functioning but stressed oil-export system that has begun to move crude at volumes higher than the immediately preceding years, with most of that crude flowing to the United States and to a short list of Asian buyers. There is a humanitarian situation that, depending on which agencies one believes, is either stabilizing slowly or deteriorating slowly.

The regional reaction has been more restrained than many observers expected in the immediate aftermath of the operation. Brazil, under its current government, issued a formal protest at the OAS and a more muted set of private communications to Washington. Colombia, whose government is in a complicated position given its own internal security dynamics and its long border with Venezuela, has cooperated at the operational level with American and interim-authority forces while publicly criticizing the unilateral character of the intervention. Cuba has responded with the sort of rhetoric one would expect and with a very careful set of practical steps designed to avoid provoking any American follow-on action against Cuban interests. Mexico has denounced the operation in strong terms at multiple fora and has refused to recognize the interim authority.

The Chinese and Russian responses have been principally informational. Both governments have invested significantly in narratives framing Absolute Resolve as a reckless American adventure, an illegal intervention, and a demonstration of American willingness to use force against Latin American sovereignty. These narratives have found meaningful traction in some parts of the Global South but have not, so far, produced any practical coalition against the American position. The Russian narrative has been particularly focused on the use of AI in the operation, attempting to frame it as evidence of an emerging "algorithmic imperialism." Whether this framing gains durable traction is one of the open information-warfare questions of 2026.

The OAS posture has been uncharacteristically active. A formal investigation into the conduct of the operation, particularly into civilian casualty figures, was authorized in late February. The investigation is proceeding slowly, is constrained by American refusal to release classified information, and is unlikely to produce conclusions that materially affect the political outcome. It may, however, produce documentation that becomes the basis for later legal proceedings in European or international forums.

For the purposes of this article, the most important geopolitical consequence of Absolute Resolve is the signal it sends about the feasibility of AI-enabled expeditionary operations. Before January 3, 2026, the prospect of a seventy-two-hour operation to remove a hostile regime in the Western Hemisphere would have been assessed by most serious analysts as politically conceivable but operationally extremely difficult. The operational difficulty would have been the planning tempo, the coordination load, and the targeting fidelity required to conduct the operation without unacceptable civilian casualties and without tipping into a prolonged counterinsurgency. After January 5, the assessment has shifted. The operation happened. The tempo was achieved. The coordination load was managed. The targeting fidelity was — within the limits of publicly available information — high enough that the operation did not produce the kind of catastrophic civilian casualty numbers that would have delegitimized it in the court of Western public opinion.

That changes the feasibility assessment for every similar operation in the queue. It does not mean that any such operation is imminent, or even that any such operation is under serious consideration at the policy level. But it means that when the policy level does consider such operations, the assumption will no longer be that the operational difficulty is prohibitive. The assumption will be that the operational difficulty is manageable, given the right AI-enabled planning and execution stack. That is a meaningfully different strategic environment.

Conclusion: The Compression Layer

Let us return to the thesis. AI did not win Operation Absolute Resolve. American tankers won it. The F-35Bs off the Wasp won it. The Marine Raiders and the SOCOM elements who entered the safehouse outside Caracas at 19:40 EST on January 4 won it. The HUMINT network, which by the nature of such operations will not be publicly acknowledged for a long time, won it. The weeks of quiet repositioning of forces that preceded the operation won it. The weather, the element of surprise, and a long list of tactical decisions by officers whose names we will mostly never know won it.

What AI did was compress the planning cycle by a factor of four or five. What would have taken seven to ten days of joint targeting cycle work took under seventy-two hours. What would have required a fusion cell of twenty-five analysts staring at screens required a fusion cell of six analysts supervising Maven-processed feeds on a MetaConstellation pane. What would have required weeks of deliberate target development was compressed into hours of assisted nomination, deconfliction, and approval. The compression did not come from a single magical AI tool. It came from the cumulative, integrated effect of a stack — Maven at the sensing and processing layer, Palantir at the fusion layer, Claude at the analyst-assist layer, Scale AI's pipelines invisibly in the background, and a long tail of less-public tools doing more specific work.

That compression is the supplier moat. Let us be precise about what that means.

In the history of defense technology, the most valuable position to occupy is not the one that builds the weapon. Lockheed Martin builds the F-35, and it is enormously profitable, but the F-35 program exists within a political and strategic envelope that is determined by forces outside Lockheed's control. The most valuable position is the one that the customer cannot do without, that cannot easily be replaced by another vendor, and that becomes structurally embedded in the way the customer operates. For most of the twentieth century, that position was occupied by a handful of prime contractors who had become structurally embedded in the American defense procurement system. For the first two decades of the twenty-first, it was occupied partially by Palantir, whose intelligence fusion work became embedded in the workflows of the intelligence community and could not easily be ripped out. In the late 2020s, the equivalent position is going to be occupied by whoever owns the compression layer of the kill chain.

The compression layer is not a product. It is not even a platform. It is the set of tools and integrations that allow a targeting cycle to run in hours instead of weeks. It includes computer vision. It includes intelligence fusion. It includes analyst-assist LLMs. It includes data labeling pipelines. It includes autonomous edge inference. And it includes the institutional knowledge of how to integrate those elements into the actual workflows of actual combatant commands, which is a form of tacit knowledge that cannot be replicated by a competitor on a six-month timeline.

The companies that currently occupy positions in that layer are Palantir, Anduril, Scale AI, and, in a more constrained way, Anthropic and OpenAI. The Pentagon's $13.4 billion AI budget is, in effect, the price the American government is paying to have that layer exist, evolve, and remain under American control. The internal conflicts at Anthropic, the walkouts and letters and resignations, are the price the companies in that layer are paying for the privilege of being there. Those conflicts are not going to resolve. They are the permanent internal politics of the commercial-led defense engagement path, and any AI lab that chooses that path should expect to live with them for as long as the path exists.

The moral question does not go away. The question of whether the compression of the OODA loop from weeks to hours is a good thing — whether it makes war more winnable in ways that make it more thinkable, whether it erodes the deliberative space in which a democracy is supposed to decide to use force, whether it concentrates decision authority in a way that undermines human judgment — is a question that will be with us for the foreseeable future. It is not a question that the engineers of Palantir or Anthropic or Scale AI can answer by themselves, and it is not a question that the Pentagon can answer by itself either. It is a question that is going to be fought out in employee letters, in classified review boards, in congressional hearings, in academic journals, in election campaigns, and in the court of international opinion for at least the next decade.

The honest version of the lesson from Absolute Resolve is this: the question of whether AI belongs in war is no longer a live question. AI is in war. The live questions are (1) which vendors will own the compression layer, (2) what internal mechanisms those vendors will use to manage the ethical and legal constraints they have accepted, (3) whether European and other allied industries can build a parallel stack that preserves meaningful sovereignty, (4) whether the policy architecture of human-on-the-loop targeting can be maintained as the tempo accelerates, and (5) whether the moral cost of the compression will be paid, as it has been in every prior generation of military technology, in the form of civilian deaths and political regret that accumulate slowly and become clear only in retrospect.

Operation Absolute Resolve ended three months ago. Its operational lessons are being studied. Its political aftermath is still unfolding. Its supplier consequences will be with us for at least a decade. Its moral consequences will be with us for longer.

Analysts will read this article and ask whether the thesis is correct. They will ask whether the compression really was the decisive factor, or whether the HUMINT and the tanker bridge and the element of surprise would have sufficed without it. That is a fair question, and the honest answer is that the counterfactual is not available for inspection. The operation happened with the AI stack. We do not know what it would have looked like without it, and we do not know whether it would have been attempted without it. We can say, with reasonable confidence, that the planners who approved it did so under the assumption that the AI stack would make the compression possible, and that without that assumption, the operational plan would have been different: longer, more cautious, more conventional, and possibly not attempted at all in the particular political window during which it was attempted.

That is the sense in which AI changed the calculus. Not by winning the operation, but by making the operation a plausible thing to try in the first place. And that kind of change — the change in what is thinkable, in what counts as feasible, in what the planning staff brings to the policy principal as a viable option — is the most consequential kind of change a technology can make in the conduct of war. It is not the kind of change that fits on a weapons system data sheet. It is the kind of change that fits in the agenda of a deputies' committee meeting, in the pre-brief slide deck, in the risk assessment memo that the chairman signs before the Secretary sees it. It is the kind of change that redefines the menu of options that a democracy's elected leaders can even see.

That is where we are in April of 2026. The menu has changed. It will not change back. The question for the next decade is what we do with a menu that now includes, as a routine item, the capability to compress the kill chain from weeks to hours. That question is not an AI question. It is a political question, a strategic question, and, in the end, a question of what kind of military and what kind of foreign policy a constitutional democracy can responsibly sustain when its tools no longer impose the friction that once served, incidentally but importantly, as a form of deliberation.

The friction is gone. The deliberation must be found somewhere else. Whether we find it — in law, in policy, in the internal culture of our suppliers, in the ethics of our officers, or in the judgment of our elected leaders — is the open question that Operation Absolute Resolve has placed, with some urgency, on the table.

Sources and References

The reconstruction and analysis in this article draw on the following open sources. Specific URLs are not provided; readers are directed to the publications' own archives.

  • Department of Defense and SOUTHCOM public affairs releases, January 3–6, 2026, and subsequent background briefings.
  • Reuters wire reporting on Operation Absolute Resolve, January 3, 2026 through March 2026.
  • Associated Press coverage of the operation and its aftermath.
  • Defense One reporting on the CDAO, Project Maven sustainment, and FY2026 AI budget lines, including background interviews with former officials.
  • Breaking Defense analysis of FY2026 Pentagon AI budget allocations and acquisition pathways.
  • Aviation Week coverage of the air component of the operation and the tanker bridge.
  • The Washington Post reporting on the Anthropic DoD contract terms and internal dissent.
  • The New York Times coverage of the Anthropic employee letter and broader AI industry reaction.
  • The Information on the leak of the Anthropic employee letter and subsequent internal communications.
  • The Wall Street Journal on Palantir's financial disclosures and defense revenue trajectory.
  • War on the Rocks podcast and essay archive, particularly contributions from Paul Scharre and other CNAS scholars on autonomy in weapons systems.
  • Center for a New American Security publications on the OODA loop, human-machine teaming, and DoDD 3000.09.
  • CSIS reporting on the FY2026 defense budget and Western Hemisphere policy.
  • RAND Corporation publications on algorithmic warfare and the targeting cycle.
  • Anthropic's published Acceptable Use Policy and Responsible Scaling Policy, versions current as of early 2026.
  • OpenAI's usage policies, particularly the January 2024 revisions.
  • Palantir public filings and investor communications.
  • European Defence Agency publications on EU defense AI posture.
  • Congressional Research Service reports on DoD AI policy and acquisition reform.
  • Academic literature on computer-vision applications in ISR, particularly the technical literature associated with Project Maven.
  • Statements made on the record by Dario Amodei, Sam Altman, Alex Karp, and Palmer Luckey in podcasts, interviews, and public appearances during February and March 2026.
  • Background interviews with current and former DoD officials, defense industry analysts, and AI lab employees, conducted for this article on the condition that specific attributions be restricted to role rather than name.