On a Thursday in April, I headed to the north end of Yale’s campus and persuaded a passing fellow student to swipe me into a building. Fourteen floors up, the Palantir Foundation — a think tank run by several of the software company’s senior executives — was about to convene its third annual Atlantic and Pacific Forum, co-hosted with Yale’s Jackson School of Global Affairs. The event hadn’t been publicized online, and I saw no signage for it in the building’s lobby. Only select Yale students had been invited. (I RSVP’d using a plain-text digital poster that had been floating around but not posted in public.)
It was sensible enough to keep the conference somewhat hidden. Palantir has become a byword for America’s growing surveillance state. The company sells something seemingly mundane: software that helps clients analyze and sort data. But in the hands of the firm’s highest-profile clients, you can see the consequences of good organization. Palantir’s Maven Smart System helps armies decide who to kill; the firm’s technologies aid ICE in combing through diverse sources — from Medicaid data to seized smartphones — to select deportation targets.
And then there is Alex Karp, Palantir’s CEO, who has leaned into Palantir’s notoriety. “The only way” to save the West and stop war, he told Fast Company, is to “scare the daylights out of our adversaries.” During a talk last year, Karp fantasized about deploying a drone to spray “fentanyl-laced urine” on analysts who had criticized the firm’s valuation; the company recently sold T-shirts reading “There Are No Secrets.”
Palantir’s reputation is, accordingly, poor on campuses like Yale’s. The student government voted last month to demand that the university divest from the company. But the Atlantic and Pacific Forum — part intellectual salon, part job fair — offered a gathering of converts to Karp’s cause and a call for new adherents. When the elevator doors opened, I saw the room was rapidly filling.
The crowd was a mix of State Department staffers, Yale professors, Palantir employees (including Bill Rivers, a deputy on the Corporate Affairs team), and think-tankers. I spotted two people affiliated with the Council on Foreign Relations among them: Gordon M. Goldstein, who arrived carrying a CFR tote bag, and Alan Raul, who wore an American-flag lapel pin.
A small clutch of undergraduates stood near the windows. They were students, they told me, of Ted Wittenstein, a Yale lecturer who had organized the event. One said he thought they’d been invited because the organizers knew they were “not going to protest.” Some were angling for jobs at Palantir. Others were simply interested in the conference’s topic, “National Power and Purpose in the Age of AI.” I found my seat next to an employee of a large New York bank. Her boss, she said, had sent her.
Wittenstein called the room to order. “In this age of hyperpartisan politics,” he said, “we need more gatherings on campus and elsewhere that foster intellectual diversity.” It was a curious note to strike. The speakers that day were almost entirely Palantir executives, conservative intellectuals, and alumni of the intelligence community. It was a safe space, he seemed to be saying. (In a statement sent through Yale, Wittenstein said that invitations had gone to “several hundred students, scholars, and practitioners,” many of whom were on his Schmidt Program’s mailing list, and that “everyone who signed up was most welcome to attend.”)
The first talk was billed as “Work, Purpose, & Human Flourishing in an Age of AI,” featuring Roger Kimball and R.R. Reno, the long-tenured conservative editors of The New Criterion and First Things, interviewed by Maya Sulkin of Bari Weiss’s Free Press. Sulkin attempted to press her panelists into reckoning with the political backlash to AI. How should Gen Z handle the fear that AI will render them “obsolete,” and that no amount of work or study matters because “the machine will outperform you”? Reno brushed off these concerns. In his view, this was just “doomerism” and “nihilism” dressed differently. “It was the ‘climate crisis’ only just yesterday,” he said (as if that had been solved).
Sulkin tried again: She asked her panelists to consider the “permanent underclass” scenario, the possibility that AI concentrates all wealth among “like 19 people in Silicon Valley.” Neither editor seemed very troubled by this. “A rising tide lifts all boats,” Kimball responded. Reno reached for scripture — “I mean, Jesus said, ‘The poor you shall always have with you.’” He predicted that there might eventually be “a kind of aristocracy of the intellect”: “the people who wind up feeding the new thoughts to the large language models” at the top and then everyone else would be “just consumers.” Looking around the room, I saw little reason to worry about where the folks here would end up.
Sulkin asked whether colleges, in the face of AI-induced job loss, should abandon career prep and return to something older: “soul formation.” Kimball took the opening. Yale had been “distracted” from education by “performative concern with so-called social justice,” he said. The humanistic enterprise, he warned, was probably “moving outside of universities right now.” Princeton Classics graduates, he added, couldn’t even read Latin. The connection between these matters and Palantir might seem tenuous. But Karp and his director of corporate affairs, Nicholas Zamiska — another conference speaker — have insisted otherwise.
In their book, The Technological Republic, they contend that Silicon Valley lost its way after the Cold War as the technology sector retreated from the public interest and into “luxury beliefs” — opposition to using software to help law enforcement among them. The rot, in their telling, began in higher ed: Stanford dropped its History of Western Civilization requirement in 1968, and the generation that built the internet grew up constructing its identity “in opposition to the state.” It became squeamish about helping governments do government things, like deporting people.
Karp and Zamiska take particular offense at Google’s former motto, “Don’t be evil.” That old maxim reflects, they write, a mind-set that prizes moral clarity over “the more difficult and often messy task of navigating the world in all of its imperfection.” Palantir would not make the same mistake.
In the Trump era, that policy has been particularly profitable. The firm’s revenue grew 93 percent last year, and Palantir became one of the 20 most valuable American companies. It landed a string of major government contracts, including $30 million from ICE last April to build ImmigrationOS, a platform to select targets for deportation; $10 billion from the Army; $1.3 billion from the Pentagon to build Maven, a drone-imagery-labeling software; and nearly $448 million from the Navy. Today, its stock price stands at more than triple its level on the eve of the 2024 election. Earlier in April, when the stock dipped, the president went on Truth Social to praise Palantir’s “great war fighting capabilities and equipment” — and posted its ticker symbol.
Exactly how far does Palantir’s wish list — “a union of the state and the software industry,” as The Technological Republic puts it — go? The conference’s speakers ranged from highly skeptical to fully dismissive of AI regulation. During Zamiska’s talk, Wittenstein — his interviewer and his old classmate at Yale — asked Palantir’s director of corporate affairs whether there were any “red lines” where government regulation of AI might be warranted. Zamiska didn’t name any. Sure, he could “understand the anxiety that comes with this current moment.” But what he wanted instead of regulation was “a much deeper, richer, more integrated public-private partnership.”
The conference’s dedicated panel on AI regulation struck a similar tone. Dean Ball, a former Trump adviser and the lead author of the administration’s AI Action Plan, had little patience for most of the over 1,500 AI-related bills introduced in state legislatures. There was, he acknowledged, “a small subset of bills that grapple with things like catastrophic risk” that he supported. But rules against asking AI for legal advice, he said, were “stupid.” There “probably should be a national framework” for catastrophic AI-risk reporting, he said, but “the goal of AI governance should not be to solve every profound and interesting question.”
Ball, who is now a fellow at the Foundation for American Innovation, mused that AI might replace the Supreme Court, then the United States government itself. AI, he said, was “this giant acid vat” dissolving society’s mediating institutions. “Future institutions will be machinic,” he said. “It will not be AI in government. It’s going to be AI as governments.”
But was all this good? The Republic was “decaying,” Ball said, and we’re living in “very dangerous times.” He was certain, though, that the AI revolution was coming — and if America didn’t build superintelligence first, China would. He said he was a “techno-optimist and institutional pessimist.” The reason America currently leads China in AI development, he explained, had nothing to do with the innovation ecosystem or even the rule of law. “It is because we have more computers than they do, and they’re better,” he said. “That’s why.” The U.S. needed to keep it that way.
Elliot Gaiser, Ball’s co-panelist and the assistant attorney general of the Office of Legal Counsel, was more circumspect. Handing governance to machines, he said, would not strike “the sovereign people who are trying to govern ourselves in this Republic” as “particularly comforting.” But he didn’t categorically rule it out. The government official was also more practical. The Attorney General had already established a task force at the DOJ, he explained, whose mission was to find state laws “inconsistent with having a unified free-market regulatory approach to AI.” At the conference, he floated a legal theory that would give the president broad authority to build data centers wherever and whenever he liked. Using the Defense Production Act, passed by Congress in 1950, the president could allocate resources and override contracts when “necessary and appropriate for national defense.” Gaiser had already applied this logic to override state regulations blocking an oil pipeline in California, he said — establishing that a presidential executive order can preempt state law under the Supremacy Clause.
Environmental activists had worried the administration would eventually apply the same logic to force data centers into communities that had resisted them. Gaiser confirmed their suspicions. “That would apply to other forms of production,” he said, “in a certain circumstance.”
Even in this room, though, the administration’s war on Anthropic had not gone over well. Ball called the fight “counterproductive” and said he’d warned the administration not to pick it. “Anthropic has every right to set the terms on which it does business,” he added. General Timothy Haugh, former NSA director, agreed during a different panel: “It’s a step back for the department,” he said, and “the department has no mission to do surveillance in the United States.” Gaiser, Trump’s man in the room, offered nothing in the administration’s defense. He wouldn’t comment upon ongoing litigation, he said.
When we broke for a reception, I shuffled into the next room and mingled. The Yale students found the Palantirians; the Palantirians found the Trump people; everyone found the open bar. These are exactly the sorts of relationships that have made the Trump presidency so good for Palantir. I fell in with a circle of undergrads, and one asked the others what they thought of Palantir. They glanced at one another: thumbs down, mostly. A few pointed theirs sideways. “It’s not a morally great company,” one student, a freshman, told me. “I would not be comfortable with all my information being accessible to the U.S. military.”
But when the pay is good and the job is prestigious — and the alternative is unemployment — the Ivy League undergraduate’s moral calculus can shift. “A job is a job,” that freshman told me. She knew someone whose sister, a soon-to-be Ivy graduate, had struck out almost everywhere she’d applied. Palantir, in the end, was one of only two firms still returning her calls. “I’m catching on to the fact that people are struggling,” she said. And “who else is gonna help you if you can’t get a job?”
