Sam Altman Caught in Fallout from Dario Amodei’s Pentagon Standoff

Sam Altman did secure an agreement between OpenAI and the US Department of Defense amid Anthropic's standoff with the agency. But in doing so, he may have lost something very important: public goodwill. OpenAI's CEO admitted as much on social media, acknowledging that the deal was rushed. "We shouldn't have rolled this out so quickly. We were honestly trying to defuse things and avoid a very bad outcome, but I think it looked opportunistic and sloppy," he wrote on X yesterday (March 2).
Altman and Dario Amodei, CEO of Anthropic, were once colleagues at OpenAI. In 2021, Amodei and a group of fellow former OpenAI employees left to launch Anthropic, positioning the startup as a safety-first alternative to its fiercest commercial competitor. That philosophical difference between the two most influential AI executives in Silicon Valley was on full display in recent weeks during talks with the Pentagon.
Amodei's emphasis on safety was put to the test when Anthropic announced that it would not allow its AI systems to be used for surveillance of US citizens or for strikes conducted without human oversight. After Amodei rejected the Pentagon's request to use Claude without restrictions, President Donald Trump ordered government agencies to phase out their use of the chatbot within six months, and Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk."
On the same day Anthropic was banned, Altman unveiled OpenAI's new deal with the Pentagon. The agreement took a less rigid stance, permitting the use of AI for all legal purposes while building technical safeguards into OpenAI's models.
Even with the contract in hand, Altman struggled to control the narrative. Silicon Valley rallied behind Anthropic in its dispute with Washington. Labor groups representing 700,000 workers across Amazon, Google and Microsoft issued a joint statement last week urging their employers to refuse to comply "if they or the frontier labs they invest in enter into additional contracts with the Pentagon." A separate open letter signed by nearly 950 Google and OpenAI employees called on their employers to "set aside their differences and unite" in resisting the agency's demands.
Consumer pushback also hit OpenAI's business. Over the weekend, large numbers of users switched from ChatGPT to Claude, pushing the Anthropic app to the top of the US App Store free app rankings, ahead of ChatGPT. While Anthropic's user base remains a fraction of OpenAI's 900 million active users, the company says free Claude usage has increased by more than 60 percent since January. The surge in demand even caused a temporary outage on March 2.
Facing mounting criticism, Altman has moved to contain the fallout. In addition to conceding that the Pentagon deal looked opportunistic, he announced amendments that expressly prohibit the use of OpenAI systems for domestic surveillance. He also clarified that the company's services will not be used by intelligence agencies such as the National Security Agency. Altman, who said he hopes Anthropic will reach similar terms, described the episode as "a great learning experience" as OpenAI faces "tough decisions going forward."
The companies' different approaches to commercial opportunities have caused public friction before. Earlier this year, Altman moved ahead with testing ads on ChatGPT, a move that contrasted with Amodei's decision to keep Claude ad-free and inspired a tongue-in-cheek Super Bowl ad from Anthropic.
Whether Altman's amendments will sway public opinion is uncertain. Anthropic, meanwhile, is capitalizing on the moment. As ChatGPT users migrate to Claude, the company has introduced a memory import tool designed to ease the transfer of data from competing chatbots, a clear bid to turn controversy into market share.