US draws up strict new AI guidelines amid Anthropic clash, FT reports - Reuters
All Coverage
The Pentagon has designated Anthropic a 'supply chain risk,' barring its technology from military use, after the company refused to allow its AI to be used for mass surveillance and fully autonomous weapons.
Despite the U.S. government's designation of Anthropic as a 'supply chain risk,' leading to a ban on its technology, Anthropic's AI assistant Claude is experiencing a surge in popularity, with over a million users signing up daily.
The Trump administration has developed stringent rules for civilian AI contracts, requiring companies to permit 'any lawful' use of their models, amid a standoff with Anthropic over AI safeguards.
The Boston Globe discusses the complexities of the dispute between the U.S. government and Anthropic over AI usage restrictions, highlighting differing perspectives on AI governance and national security.
The Trump administration has developed strict rules for civilian AI contracts, requiring companies to allow 'any lawful' use of their models amid a standoff between the Pentagon and Anthropic.
The Pentagon designated Anthropic a 'supply-chain risk' on March 5, barring government contractors from using the AI firm's technology in work for the US military.
Emil Michael, the Pentagon's chief technology officer, disclosed tensions with Anthropic over the company's ethical restrictions on AI use in fully autonomous weapons, which led the Pentagon to designate Anthropic a supply chain risk.
A draft of the new guidelines reviewed by the Financial Times says AI groups seeking to do business with the government must grant the U.S. an irrevocable license to use their systems for all legal purposes.
Similar Stories
Exclusive: Anthropic weighs building its own AI chips, sources say - Reuters (April 9, 2026 at 11:00 PM)
Anthropic talking to the Trump administration about its next AI model, co-founder says - Reuters (April 13, 2026 at 07:46 PM)
US court declines to block Pentagon's Anthropic blacklisting for now - Reuters (April 8, 2026 at 11:31 PM)
CoreWeave strikes AI cloud deal with Anthropic, shares rise - Reuters (April 10, 2026 at 01:40 PM)
Hormuz blockade, Supreme Court retirements and powering AI with coal - Reuters (April 12, 2026 at 03:07 PM)
EU weighing tighter regulation for OpenAI under Digital Services Act - Reuters (April 10, 2026 at 07:18 PM)