MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
Anthropic v the US military: what this public feud says about the use of AI in warfare

Last updated: February 26, 2026 10:40 pm
Published: 2 days ago

The very public feud between the US Department of Defense (also known these days as the Department of War) and its AI technology supplier Anthropic is unusual for pitting state might against corporate power. In the military space, at least, these are usually cosy bedfellows.

The origin of this disagreement dates back months, amid repeated criticisms from Donald Trump’s AI and crypto “czar”, David Sacks, about the company’s supposedly woke policy stances.

But tensions ramped up following media reports that Anthropic technology had been used in the violent abduction of former Venezuelan president Nicolás Maduro by the US military in January 2026. It was alleged this caused discontent inside the San Francisco-based company.

Anthropic has denied this, with company insiders suggesting it did not find or raise any violations of its policies in the wake of the Maduro operation.

Nonetheless, the US secretary of defense, Pete Hegseth, has issued Anthropic with an ultimatum. Unless the company relaxes its ethical limits policy by 5.01pm Washington time on Friday, February 27, the US government has suggested it could invoke the 1950 Defense Production Act. This would allow the Department of Defense (DoD) to appropriate the use of this technology as it wishes.

At the same time, Anthropic could be designated a supply chain risk, putting its government contracts in danger. These extraordinary measures may appear contradictory, but they are consistent with the current US administration’s approach, which favours big gestures and policy ambiguity.

At the heart of the dispute is the question of how Anthropic’s large language model (LLM) Claude is used in a military context. Across many sectors of industry, Claude does a range of automated tasks including writing, coding, reasoning and analysis.

In July 2024, US data analytics company Palantir announced it was partnering with Anthropic to “bring Claude AI models … into US Government intelligence and defense operations”. Anthropic then signed a US$200 million (£150 million) contract with the DoD in July 2025, stipulating certain terms via its “acceptable use policy”.

These would, for example, disallow the use of Claude in mass surveillance of US citizens or fully autonomous weapon systems which, once activated, can select and engage targets with no human involvement.

According to Anthropic, either would violate its definition of “responsible AI”. Hegseth and the DoD have pushed back, characterising such limits as unduly restrictive in a geopolitical environment marked by uncertainty, instability and blurred lines.

Responsible AI should, they insist, encompass “any lawful use” of AI models by the US military. A memorandum issued by Hegseth on January 9 2026 stated:

Diversity, Equity and Inclusion and social ideology have no place in the Department of War, so we must not employ AI models which incorporate ideological ‘tuning’ that interferes with their ability to provide objectively truthful responses to user prompts.

The memo instructed that the term “any lawful use” should be incorporated in future DoD contracts for AI services within 180 days.

Anthropic’s competitors are lining up

Anthropic’s red lines do not rule out the mass surveillance of human communities at large – only American citizens. And while it draws the line at fully autonomous weapons, the multitude of evolving uses of AI to inform, accelerate or scale up violence in ways that severely limit opportunities for moral restraint are not mentioned in its acceptable use policy.

At present, Anthropic has a competitive advantage. Its LLM model is integrated into US government interfaces with sufficient levels of clearance to offer a superior product. But Anthropic’s competitors are lining up.

Palantir has expanded its business with the Pentagon significantly in recent months, giving rise to more AI models.

Meanwhile, Google recently updated its ethical guidelines, dropping its pledge not to use AI for weapons development and surveillance. OpenAI has likewise modified its mission statement, removing “safety” as a core value, and Elon Musk’s xAI (creator of the Grok chatbot) has agreed to the Pentagon’s “any lawful use” standard.

A testing point for military AI

For C.S. Lewis, courage was the master virtue, since it represents “the form of every virtue at the testing point”. Anthropic now faces such a testing point.

On February 24, the company announced the latest update to its responsible scaling policy – “the voluntary framework we use to mitigate catastrophic risks from AI systems”. According to Time magazine, the changes include “scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance”.

Anthropic’s chief science officer, Jared Kaplan, told Time: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

Ethical language saturates the press releases of Silicon Valley companies eager to distinguish themselves from “bad actors” in Russia, China and elsewhere. But ethical words and actions are not the same, because the latter often entail a real-world cost.

That such a highly public spectacle is happening at this time is perhaps no accident. In early February, representatives of many countries – but not the US – came together for the third time to find ways to agree on “responsible AI” in the military domain. And on March 2-6, the UN will convene its latest conference discussing how best to limit the use of emerging technologies for lethal autonomous weapons systems.

Such legal and ethical debates about the role of AI technology in the future of warfare are critical, and overdue. Anthropic deserves credit for apparently resisting the US military’s efforts to undercut its ethical guidelines. But AI’s role is likely to be tested in many more conflict situations before agreement is reached.

This news is powered by The Conversation.

© Market Alert News. All Rights Reserved.