Pentagon may designate Anthropic as 'Supply Chain Risk': What this means for the company, its customers and partners
The US Department of Defence may soon designate the Claude developer Anthropic a “supply chain risk”. This classification would require anyone doing business with the military to cut ties with the AI company, a senior Pentagon official told Axios. Defence Secretary Pete Hegseth is reportedly nearing a decision to sever business ties with Anthropic.
The designation is typically reserved as a penalty for foreign adversaries. “It will be an enormous pain in the a** to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” the senior official added.
Chief Pentagon spokesman Sean Parnell told Axios, “The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
The potential move carries significant implications. Anthropic's Claude is currently the only AI model available in the military's classified systems and was reportedly used during the US Army’s January raid on Venezuelan ex-president Nicolas Maduro. Pentagon officials have praised Claude's capabilities, making any disentanglement a complex undertaking for the military and its partners.
A supply chain risk designation would require companies doing business with the US Department of Defence to certify that they do not use Claude in their workflows.
What the Pentagon's 'Supply Chain Risk' designation would mean for Anthropic, its partners and customers
Given that Anthropic recently said eight of the ten largest US companies use Claude, the impact could extend well beyond the military.
The Pentagon contract under threat is valued at up to $200 million, a small portion of Anthropic's $14 billion in annual revenue. However, a senior administration official noted that competing models "are just behind" when it comes to specialised government applications, which could complicate any abrupt switch.
The move also sets the tone for the Pentagon's negotiations with OpenAI, Google, and xAI, all of which have agreed to remove safeguards for use in the military's unclassified systems but are not yet used for more sensitive classified work.
A senior administration official said the Pentagon is confident the three companies will agree to the "all lawful use" standard. However, a source familiar with those discussions said much remains undecided.
Why the Pentagon may hit Anthropic with the 'Supply Chain Risk' designation
Anthropic and the Pentagon have held months of contentious negotiations over the terms under which the military can use Claude. Anthropic is prepared to loosen its current terms of use but wants to ensure its tools are not used to conduct mass surveillance on Americans or to develop autonomous weapons with no human involvement.
The Pentagon has argued that those conditions are unduly restrictive and would be unworkable in practice, insisting that Anthropic and three other AI companies (OpenAI, Google, and xAI) allow military use of their tools for "all lawful purposes".
A source familiar with the situation said senior defence officials have been frustrated with Anthropic for some time and embraced the opportunity to make the dispute public.
Privacy advocates have raised concerns on the other side, noting that existing mass-surveillance laws do not account for AI. The Pentagon already collects large amounts of personal data, from social media posts to concealed carry permits, and there are concerns that AI could significantly expand its ability to use that data to target civilians.
Commenting on the situation, an Anthropic spokesperson said, “We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right.”
The spokesperson noted that Claude was the first AI model to be used on classified networks, reiterating the company's commitment to applying frontier AI for national security.