MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
© Market Alert News. All Rights Reserved.
Smart Contracts

Anthropic accuses Chinese AI labs of ‘data theft’ | ForkLog

Last updated: February 24, 2026 2:50 pm

Anthropic says Chinese labs used Claude en masse to distil capabilities into rival models.

Anthropic accused three Chinese AI startups — DeepSeek, Moonshot and MiniMax — of running a large-scale campaign to use Claude to improve their own models.

According to the company, the labs generated more than 16 million interactions with the chatbot through roughly 24,000 fraudulent accounts, violating its terms of use and regional restrictions.

“We have a high degree of confidence linking each campaign to a specific firm based on correlations of IP addresses, request metadata, infrastructure signals and confirmations from partners in the industry. They targeted the most unique capabilities of Claude: agentic reasoning, tool use and programming,” Anthropic said.

The firms used distillation — training a less powerful neural network on the outputs of a stronger one.

It is a widely used and legitimate method. Leading AI labs regularly distil their own models to create compact, cheaper versions for clients.
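The core of the technique can be sketched in a few lines. The toy NumPy example below is purely illustrative (the logits and function names are invented, and this is not Anthropic's or any lab's actual training code): the student model is trained to minimise the divergence between its output distribution and the teacher's temperature-softened one.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)  # teacher: the stronger model
    q = softmax(student_logits, temperature)  # student: the model being trained
    return float(np.sum(p * np.log(p / q)))

teacher = np.array([4.0, 1.0, 0.5])  # hypothetical teacher logits for one prompt
student = np.array([2.0, 1.5, 1.0])  # hypothetical student logits
loss = distillation_loss(teacher, student)
# Training minimises this loss over many prompts; a loss of zero means
# the student's distribution matches the teacher's exactly.
```

In practice the "teacher outputs" are gathered at scale, which is why the campaign Anthropic describes involved millions of chatbot interactions rather than a handful.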

“However, it can also be used illegally: competitors gain capabilities from another lab’s LLM in a fraction of the time and cost of developing them on their own,” the Anthropic blog says.

The company stressed that the window to respond to such “theft” is narrow, and the threat extends beyond a single firm or region. Addressing it will require swift, coordinated action by industry, regulators and the global AI community.

Anthropic also set out the risks: illegally distilled models do not retain the necessary safety mechanisms, which creates national-security concerns.

American firms are deploying systems to prevent the use of AI in developing biological weapons, conducting malicious cyberattacks and other dangerous acts. Models created through unlawful distillation do not receive such constraints.

Foreign labs can integrate unprotected capabilities into military and intelligence systems, enabling authoritarian governments to use advanced AI for cyberattacks, disinformation and mass surveillance, the company added.

Anthropic experts backed export controls to preserve US leadership in AI. In their view, distillation attacks undermine these measures, allowing overseas labs to narrow the technology gap.

“Without transparency into such attacks, the rapid progress of Chinese labs is mistakenly read as proof that export controls are ineffective. In practice, their achievements largely depend on extracting capabilities from American models, and scaling this approach requires access to cutting-edge chips,” the company blog says.

Anthropic CEO Dario Amodei will meet Defence Secretary Pete Hegseth at the Pentagon to discuss ways the company’s AI models could be used by the military.

The two sides have fallen out recently: Anthropic opposes using AI for mass surveillance of US citizens and for building autonomous weapons, while the Defence Department has made clear it intends to use LLMs “for all lawful scenarios” without restrictions.

The dispute has escalated to the point that the Pentagon said it may terminate its contract with Anthropic.

Shares of leading publicly listed cybersecurity firms fell after Anthropic launched Claude Code Security — an AI code vulnerability scanner.

The company says the new service “analyses the entire codebase for vulnerabilities, verifies each finding to minimise false positives and proposes fixes”.

Claude conducts analysis “like an experienced security researcher”: it understands context, traces data flows and detects vulnerabilities.

According to VentureBeat, Claude Opus 4.6 identified more than 500 critical vulnerabilities that had persisted for decades despite expert reviews.

The five largest US publicly traded cybersecurity firms by market value posted double-digit declines over the past five days amid the arrival of the new AI competitor.

Wedbush analysts said the sell-off reflects worries over the so-called AI Ghost Trade. In their view, the market’s reaction is mistaken, and Palo Alto, CrowdStrike and Zscaler will prove their effectiveness in 2026.

In February, OpenAI together with Paradigm introduced EVMbench — a benchmark to assess AI agents’ ability to identify, fix and exploit flaws in smart contracts.

Read more on ForkLog

This news is powered by ForkLog.
