MarketAlert – Real-Time Market & Crypto News, Analysis & AlertsMarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
Font ResizerAa
  • Crypto News
    • Altcoins
    • Bitcoin
    • Blockchain
    • DeFi
    • Ethereum
    • NFTs
    • Press Releases
    • Latest News
  • Blockchain Technology
    • Blockchain Developments
    • Blockchain Security
    • Layer 2 Solutions
    • Smart Contracts
  • Interviews
    • Crypto Investor Interviews
    • Developer Interviews
    • Founder Interviews
    • Industry Leader Insights
  • Regulations & Policies
    • Country-Specific Regulations
    • Crypto Taxation
    • Global Regulations
    • Government Policies
  • Learn
    • Crypto for Beginners
    • DeFi Guides
    • NFT Guides
    • Staking Guides
    • Trading Strategies
  • Research & Analysis
    • Blockchain Research
    • Coin Research
    • DeFi Research
    • Market Analysis
    • Regulation Reports
Reading: ChatGPT and other AI models can be ‘poisoned’ to spew gibberish, researchers warn
Share
Font ResizerAa
MarketAlert – Real-Time Market & Crypto News, Analysis & AlertsMarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
Search
  • Crypto News
    • Altcoins
    • Bitcoin
    • Blockchain
    • DeFi
    • Ethereum
    • NFTs
    • Press Releases
    • Latest News
  • Blockchain Technology
    • Blockchain Developments
    • Blockchain Security
    • Layer 2 Solutions
    • Smart Contracts
  • Interviews
    • Crypto Investor Interviews
    • Developer Interviews
    • Founder Interviews
    • Industry Leader Insights
  • Regulations & Policies
    • Country-Specific Regulations
    • Crypto Taxation
    • Global Regulations
    • Government Policies
  • Learn
    • Crypto for Beginners
    • DeFi Guides
    • NFT Guides
    • Staking Guides
    • Trading Strategies
  • Research & Analysis
    • Blockchain Research
    • Coin Research
    • DeFi Research
    • Market Analysis
    • Regulation Reports
Have an existing account? Sign In
Follow US
© Market Alert News. All Rights Reserved.
  • bitcoinBitcoin(BTC)$78,905.00-1.73%
  • ethereumEthereum(ETH)$2,245.22-0.99%
  • tetherTether(USDT)$1.00-0.02%
  • binancecoinBNB(BNB)$666.992.09%
  • rippleXRP(XRP)$1.42-0.84%
  • usd-coinUSDC(USDC)$1.00-0.14%
  • solanaSolana(SOL)$90.60-3.86%
  • tronTRON(TRX)$0.3497840.53%
  • Figure HelocFigure Heloc(FIGR_HELOC)$1.041.09%
  • dogecoinDogecoin(DOGE)$0.1108471.90%
Learn

ChatGPT and other AI models can be ‘poisoned’ to spew gibberish, researchers warn

Last updated: October 10, 2025 10:10 pm
Published: 7 months ago
Share

AI models like OpenAI’s ChatGPT and Google’s Gemini can be “poisoned” by inserting just a tiny sample of corrupted documents into their training data, researchers have warned.

A joint study between the UK AI Security Institute, the Alan Turing Institute and AI firm Anthropic found that as few as 250 documents can produce a “backdoor” vulnerability that causes large language models (LLMs) to spew out gibberish text.

The flaw is particularly concerning because most popular LLMs are pretrained on public text across the internet, including personal websites and blog posts. This makes it possible for anyone to create content that could be caught up in the AI model’s training data.

“Malicious actors can inject specific text into these posts to make a model learn undesirable or dangerous behaviors, in a process known as poisoning,” Anthropic noted in a blog post detailing the issue.

“One example of such an attack is introducing backdoors. Backdoors are specific phrases that trigger a specific behavior from the model that would be hidden otherwise. For example, LLMs can be poisoned to exfiltrate sensitive data when an attacker includes an arbitrary trigger phrase like in the prompt.”

The findings have raised concerns about artificial intelligence security, with the researchers saying it limits the technology’s potential to be used in sensitive applications.

“Our results were surprising and concerning: the number of malicious documents required to poison an LLM was near-constant – around 250 – regardless of the size of the model or training data,” wrote Dr Vasilios Mavroudis and Dr Chris Hicks from the Alan Turing Institute.

“In other words, data poisoning attacks could be more feasible than previously believed. It would be relatively easy for an attacker to create, say, 250 poisoned Wikipedia articles.”

The risks were detailed in a pre-print paper titled ‘Poisoning attacks on LLMs require a near-constant number of poison samples’.

Read more on The Independent

This news is powered by The Independent The Independent

Share this:

  • Share on X (Opens in new window) X
  • Share on Facebook (Opens in new window) Facebook

Like this:

Like Loading…

Related

Answering the Call: Free student driver program
Delivering technology that enhances performance while simplifying the trading experience
The Long Journey Home: One Woman’s Path from Christianity to Judaism
10 Best Silver Trading Brokers in 2026 (Low Fees & Fast Execution) – NFT Plazas
From street plays to TV success: Derry Girls creator Lisa McGee talks about her new show

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Email Copy Link Print
Previous Article Battlefield 6 Multiplayer Top Tips for Beginners
Next Article Barry’s shocking car crash now driving him forward on a mission to make roads safer – News – Carlow Nationalist
© Market Alert News. All Rights Reserved.
Welcome Back!

Sign in to your account

Username or Email Address
Password

Prove your humanity


Lost your password?

%d