

Anthropic Sues Trump Administration Over Unprecedented Pentagon ‘Supply Chain Risk’ Label

Technology

AI Giant Challenges Government Blacklisting in High-Stakes Legal Battle Over Military AI Ethics

Anthropic, the AI safety company behind the Claude chatbot, has filed suit against the Trump administration, seeking to reverse an unprecedented Pentagon decision designating the company a "supply chain risk." The filings, lodged in both the Northern District of California and the federal appeals court in Washington, DC, allege that the government violated Anthropic's First Amendment rights and exceeded its legal authority.

The dispute centers on two red lines Anthropic drew in its Pentagon contract negotiations: that its AI technology would not be used for mass surveillance of US citizens, and that it would not be deployed in autonomous weapons systems. When the Defense Department, under Secretary Pete Hegseth, insisted on using the technology for "all lawful purposes," talks collapsed, and the Pentagon invoked a supply chain risk authority typically reserved for foreign adversaries.

It marks the first time the federal government is known to have used the supply chain risk designation against a domestic American company. Anthropic's CFO Krishna Rao warned the government's actions "could reduce Anthropic's 2026 revenue by multiple billions of dollars" as the designation effectively bars federal agencies from working with the company.

The case has drawn broad industry support. More than 30 leading AI developers from OpenAI and Google, including Google's chief scientist Jeff Dean, filed a legal brief backing Anthropic, arguing that "national security is not served by reckless designations" of American technology partners. Observers noted the irony that OpenAI itself signed a Pentagon deal just hours after the government punished its competitor.

In a twist that underscores the public reaction, Anthropic's Claude app surpassed OpenAI's ChatGPT in the iPhone App Store for the first time following the Pentagon announcement, and the company reported more than a million new daily signups. The legal battle is expected to set important precedents for how AI companies navigate government contracts and the ethical boundaries of military uses of artificial intelligence.

Key Takeaways:
– Anthropic sued to reverse an unprecedented "supply chain risk" designation by the Pentagon.
– The dispute arose over Anthropic's refusal to allow its AI for mass surveillance or autonomous weapons.
– It's the first time this designation has been used against a US company.
– 30+ AI developers from Google and OpenAI filed briefs supporting Anthropic.
– Claude app downloads surged past ChatGPT following the controversy.

Original source: CNBC


How this was produced: AI-assisted synthesis from cited source, filtered for duplication and low-value rewrites by TxtFeed quality rules.
