Sam Altman-led OpenAI recently signed an agreement with the US Department of War to deploy technology on the department's classified network. However, the move has sparked immediate backlash for the AI company, with thousands of ChatGPT users sharing posts on Reddit claiming to be cancelling their ChatGPT subscriptions. Posts titled “You're now training a war machine. Let's see proof of cancellation” and “Time to cancel ChatGPT Plus after three Years. Anthropic got nuked for having ethics, and Sam Altman instantly swooped in for the Pentagon bag” have received thousands of upvotes on the platform.
Aidan Gold, an X user, shared a post pointing out the irony of how OpenAI, which had backed Anthropic’s stand on AI safety, then submitted a bid to replace Anthropic in the Pentagon contract.
“Let me get this straight: Anthropic refused to work with DoW unless they could promise their tech wasn't used for surveillance or killing. DoW said that they need full capabilities. Anthropic declined to give full access. OpenAI stood by Anthropic for ensuring AI safety. Trump then cancelled all Anthropic usage across the government, including a $200m contract. OpenAI then submits a bid to replace Anthropic,” the user wrote.
Another user shared a post:
“11:59 We stand in solidarity ANthropic
12:00 Actually this contract looks very promising
12:01 hey investors you guys wanna hop in this train? we are making some killer bots
->100b investment
-> government contract
LMAO”
Sam Altman addresses criticism of OpenAI's Pentagon deal
Pushing back against the criticism of his company’s deal with the US Department of War, OpenAI CEO Sam Altman insisted that the agreement includes stronger safety guardrails than those Anthropic refused to accept before being blacklisted. In a blog post published on Saturday (February 28), OpenAI shared excerpts of its contract language, highlighting clauses that explicitly prohibit the use of its AI models for mass domestic surveillance, fully autonomous weapons, or high-stakes decision systems such as social credit scores.
“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s. We retain full discretion over our safety stack; we deploy via the cloud; cleared OpenAI personnel are in the loop; and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law," the post read.