People in AI love talking timelines. Usually, they’re referring to the future: the number of months or years until models are able to do x or y; the predicted point at which they can improve themselves; the beginning of an irreversible “takeoff.” As a result, conversations about AI skip erratically between what’s happening now and what will be happening soon, or what might happen eventually, often at the end of a rapidly steepening curve.
The fact that ChatGPT is both a popular consumer-facing chatbot taking market share from Google Search and the product of a firm that believes in a near future “where intelligence is a utility, like electricity or water, and people buy it from us on a meter” can certainly confuse discussions about, for example, chatbot advertising, AI safety, and how AI will affect jobs. (DeepMind head Demis Hassabis recently poked a knife into the gap between pre-singularity OpenAI and Sam Altman’s description of what’s coming next: If “AGI’s around the corner,” Hassabis said, “why would you bother with ads?”) On social media, this makes for a weird discourse, in which incremental software updates are used to flavor visions of apocalypse. For executives trying to decide how to invest and hire over the next year, the elastic timeline between now and soon can complicate planning and disrupt lives; for regulators and lawmakers, the specter of rapid change can be paralyzing, even amid an abundance of present harms.
Then there’s war.
Last week, after long negotiations and against the backdrop of a sudden, AI-assisted American campaign against Iran, in which Anthropic software has reportedly been “central,” the relationship between the leading AI firm and the Trump administration fully broke down. The central points of contention are Anthropic’s prohibitions on using its software for lethal autonomous warfare — strikes without a human in the loop — or for the surveillance of Americans en masse. In recent months, the Pentagon has demanded that Anthropic abandon these rules and others, citing, more or less, its authority to do whatever it wants and accusing Anthropic of claiming authority it doesn’t have. So far, the standoff has resulted in the termination of federal contracts and the designation of Anthropic as a supply-chain risk, an extreme measure most recently used against state-connected Chinese tech companies. (A punitive executive order has been rumored as well.) Defense Secretary Pete Hegseth has reportedly dismissed Anthropic CEO Dario Amodei as arrogant and entitled, saying “No CEO is going to tell our war fighters what they can and cannot do” and suggesting that Amodei has a “god complex.” Amodei later told his employees the company was targeted for not giving “dictator-style praise to Trump” the way its competitors do.
Communication between characters as constitutionally different as Amodei and Hegseth was always going to be strained, but the breakdown was about more than “vibes and personalities.” It was about an AI company and the Pentagon trying to account for disparate and extreme visions of tomorrow in clauses and provisions written for today. It was a collision between two movements that believe, in different and inherently incompatible ways, that they might be on the cusp of achieving absolute power, either for themselves or for the systems they’re helping to build, and that concessions made now will compound into total failure later.
Anthropic is a company both motivated and haunted by speculative futures, and its emphasis on hypothetical uses and harms strained its relationship with the government from the start. In a lawsuit challenging the company’s designation as a supply-chain risk, Anthropic summarized the persistent beliefs that put it at odds with the Pentagon. First, it appeals to AI’s shortcomings now. “In our view, today’s AI systems — including Claude — are not capable of reliably carrying out lethal autonomous warfare; this is why we have insisted on meaningful human oversight,” the Anthropic team writes of its refusal to be used in autonomous weapons systems. “As anyone who has used a generative AI tool knows, Claude can make errors.” The company makes a similar argument about mass surveillance — “Anthropic has never tested Claude for those uses” — before outlining the overwhelming capabilities its researchers nonetheless believe will arrive soon: “Powerful AI could enable the government to aggregate and analyze millions of public surveillance camera feeds into real-time, population-scale tracking capabilities not contemplated or addressed by existing federal law,” the company writes. Based on its “distinctive understanding of what this technology can effectuate,” the company does “not believe it is safe or responsible for an AI developer to knowingly enable large-scale surveillance of Americans.” (Workers at Google and OpenAI have since signed a supportive amicus brief.)
For Anthropic, thinking along the timeline between deficient and runaway Claudes is intuitive; a clear path ahead to “strong AI,” which the company argues will be self-improving and will arrive within a few years, is what makes its simultaneous appeals to AI deficiency and AI power both coherent and necessary. In the Anthropic team’s view, they’re responsibly planning for a fast-approaching future that they can see more clearly than others, because they’re helping to bring it about. They believe that, even under conditions of bounded rationality, political actors will have an interest in at least hearing them out.
From the vantage of the Pentagon, circa 2026, this view of the world isn’t just disagreeable or wrong — it’s incoherent, foreign, and worthy of scorn. As the Pentagon sees it, Anthropic should understand itself first and foremost as an American contractor. Also: It sounds like a cult, and it’s run by a lib.
Tech analyst and Stratechery author Ben Thompson identifies a more serious way that Anthropic’s futurism can result in conflict with governments. If, rather than mocking your concerns and demands as silly, the government takes Amodei’s timelines and warnings seriously — if it agrees that self-improving AI is imminent and presents an existential threat or merely a geopolitical one — it’s got a pretty good argument for stepping in:
Nuclear weapons meaningfully tilt the balance of power; the extent that AI is of equivalent importance is the extent to which the United States has far more interest in not only what Anthropic lets it do with its models, but also what Anthropic is allowed to do period.
Of course, Hegseth isn’t really thinking about the singularity or about the ways in which today’s Claude remains unreliable (if AI supremacy is a core part of your national-security philosophy, you don’t hobble your country’s leading lab over a procurement dispute). He’s thinking about consolidating power, and his conception of AI’s potential is subordinate to that project, and to him, rather than to any future superintelligence. This is why the situation escalated the way it did, with the government not just walking away but attempting to punish the company for asserting itself at all. If not for this retaliation, the government’s narrow defense — that a private company contracting with the military shouldn’t expect to micromanage how its tools are deployed and should reasonably expect to be implicated, directly or by reputation, if, for example, the military were to then bomb a school — would make sense on its own terms. But the Trump administration’s belief in its right to total power, expressed by and in the belligerent figure of Hegseth, is central both to its “vision,” such as it is, and to understanding the way it conducts itself here and elsewhere.
In a way that’s distinct from and in many ways opposed to that of Anthropic’s leaders, Trump-administration officials are living and thinking inside a speculative future of their own. In theirs, they have more power than was previously conceivable to most people and can accomplish things beyond the political imaginations of their predecessors; “you can just build things,” which emerged as an AI-adjacent self-help mantra in Silicon Valley, has become a rallying cry for the MAGA movement, too. Like the AI labs, the new American right has made surprising progress toward its goals since 2024 and thinks it might be able to take things a lot further — or, in any case, that it absolutely has to try. While AI figures like Amodei worry that AI capabilities could extend beyond their control, the Trump administration seems to understand AI only in terms of how it might extend its own capacity for control. Before the AI labs even have a chance to lose control of their technology to the technology itself, they risk losing it to political actors who don’t trouble themselves with worries like that at all.
In contrast with anxious figures like Amodei, the Pete Hegseths of the world are unambivalent and accordingly see, from their current positions of power, every escalation as working in their favor. As Dean Ball, a former adviser to the Trump administration on AI, wrote in response to the Anthropic news, whether or not the government’s attempt to cripple the company stands up to legal scrutiny, its ambitions were clear: “The message sent to every investor and corporation in America: do business on our terms, or we will end your business.” It’s a message sent from MAGA’s speculative future — the one where it’s truly won.
Despite his confidence in certain aspects of AI development, Dario Amodei seems genuinely uncertain about what he thinks his company is manifesting and how it should be dealt with, which is one of the reasons he appeals so often to ethical frameworks and broadly formalized principles. He directs his company to think in terms of “constitutions” and “character.” Pete Hegseth’s vision of the future may prove to be mistaken in its own ways, and it’s certainly impoverished, but it has the benefit of being clear and pursuable. It suffers from no anxiety about alignment and externalities and, despite Hegseth’s place in the government, is burdened with far less allegiance to the structures and norms of democracy than a venture-funded start-up: It’s just militarized authoritarianism, and it, too, is characterized by recent feelings of acceleration. The overlap between the young AI boom and the rise of the MAGA movement may prove to be a disastrous coincidence of history. It has certainly heightened and accelerated the AI industry’s reckoning with its place in the military-industrial complex, a reckoning that came fast: It took 20 years for contracts with the Pentagon to trigger a crisis at Google (old motto: Don’t be evil). For Anthropic (unofficial motto: WE MUST NOT BE EVIL OR EVERYONE MIGHT DIE!), it took fewer than five.
For all its neurotic creativity, the speculative arm of Amodei’s project has failed to fully avert or adequately anticipate the first two encounters with serious AI risk: Anthropic’s conscription into an AI “arms race” against China (but also against other labs) that prevents it from slowing down, even if that might be a good idea, and the company’s impotence, in the present, against a power-hungry meathead backed by an elected administration it can only wish it weren’t stuck dealing with. Anthropic is explicitly worried about the ways that increasingly capable AI could empower autocratic governments, and its insistence on a contractual prohibition on domestic mass surveillance is intended as a form of thinking ahead, a way to stop a future regime from doing exactly that. The problem, as we’re finding out, is that even crude, nascent authoritarians worry obsessively about the future, too, and they don’t take kindly to attempts to limit, outmaneuver, or even criticize them with present-day rules or stipulations. They simply threaten you back into line, or they try to break you. They don’t wait. They do it today.
