Sam Altman is claiming the ethical high ground in a Pentagon deal that, by his own description, contains the very protections that cost Anthropic its own government agreement. The claim is either an accurate account of a genuinely principled deal or a masterful act of public relations, and the difference matters enormously for the future of AI ethics in government contexts.
Anthropic had been the AI industry’s most prominent advocate for constrained AI deployment in military contexts, insisting that its Claude system not be used for autonomous weapons or mass surveillance. These conditions were not last-minute demands but foundational principles that the company had articulated publicly and maintained through months of Pentagon pressure. When the administration lost patience, it acted with dramatic force, banning all federal use of Anthropic technology via presidential directive.
The public and explicit nature of the ban — announced with language that condemned Anthropic’s leadership as politically motivated — was clearly designed to shape behavior across the industry. By making the punishment visible and severe, the administration raised the cost of ethical conditions for every AI company operating in or seeking to enter the government market.
OpenAI’s Sam Altman navigated this environment by announcing a deal that he described as protecting the very values Anthropic had been punished for defending. He called mass surveillance and autonomous weapons OpenAI’s own “main red lines,” described these as long-standing company principles, and insisted the Pentagon had formally agreed to them in writing. He closed his announcement by calling for the same terms to be offered to all AI companies, a position that echoed Anthropic’s arguments more than it departed from them.
The hundreds of AI workers who signed solidarity letters with Anthropic before Altman’s announcement remain a telling gauge of industry sentiment. Their warning that the government was trying to divide the industry to extract compliance holds regardless of how the OpenAI deal is characterized. Anthropic’s own response was simple and firm: it has never compromised on autonomous weapons or mass surveillance, and it never will, whatever the competitive consequences.

