MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
© Market Alert News. All Rights Reserved.

Don’t ignore the security risks of agentic AI – SiliconANGLE

Last updated: November 16, 2025 1:30 am

In the race to deploy agentic artificial intelligence systems across workflows, an uncomfortable truth is being ignored: Autonomy invites unpredictability, and unpredictability is a security risk. If we don’t rethink our approach to safeguarding these systems now, we may find ourselves chasing threats we barely understand at a scale we can’t contain.

Agentic AI systems are designed with autonomy at their core. They can reason, plan, take action across digital environments and even coordinate with other agents. Think of them as digital interns with initiative, capable of setting and executing tasks with minimal oversight.

But the very thing that makes agentic AI powerful — its ability to make independent decisions in real time — is also what makes it an unpredictable threat vector. In the rush to commercialize and deploy these systems, insufficient attention has been given to the security liabilities they introduce.

Whereas large language model-based chatbots are mostly reactive, agentic systems operate proactively. They might autonomously browse the web, download data, manipulate application programming interfaces, execute scripts or even interact with real-world systems like trading platforms or internal dashboards. That sounds exciting until you realize how few guardrails may be in place to monitor or constrain these actions once set in motion.

Security researchers are increasingly raising alarms about the attack surface these systems introduce. One glaring concern is the blurred line between what an agent can do and what it should do. As agents gain permissions to automate tasks across multiple applications, they also inherit access tokens, API keys and other sensitive credentials. A prompt injection, hijacked plugin, exploited integration or engineered supply chain attack could give attackers a backdoor into critical systems.

We’ve already seen examples of large language model agents falling victim to adversarial inputs. In one case, researchers demonstrated that embedding a malicious command in a webpage could trick an agentic browser bot into exfiltrating data or downloading malware — without any malicious code on the attacker’s end. The bot simply followed instructions buried in natural language. No exploits. No binaries. Just linguistic sleight of hand.

And it doesn’t stop there. When agents are granted access to email clients, file systems, databases or DevOps tools, a single compromised action can trigger cascading failures. From initiating unauthorized Git pushes to granting unintended permissions, agentic AI has the potential to replicate risks at machine speed and scale.

The problem is exacerbated by the industry’s obsession with capability benchmarks over safety thresholds. Much of the focus has been on how many tasks agents can complete, how well they self-reflect or how efficiently they chain tools. Relatively little attention has been given to sandboxing, logging or even real-time override mechanisms. In the push for autonomous agents that can take on end-to-end workflows, security is playing catch-up.

Mitigation strategies must evolve beyond traditional endpoint or application security. Agentic AI exists in a gray area between the user and the system.

Role-based access control alone won’t cut it. We need policy engines that understand intent, monitor behavioral drift and can detect when an agent begins to act out of character. We need developers to implement fine-grained scopes for what agents can do, limiting not just which tools they use, but how, when and under what conditions.
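The idea of fine-grained scopes can be sketched in a few lines. Everything below is a hypothetical illustration, not a real policy engine: each tool gets an explicit action allowlist, a crude per-session call cap (a stand-in for behavioral-drift detection), and a set of actions that must escalate to a human.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Scope:
    tool: str
    actions: frozenset                  # which verbs the agent may use at all
    max_calls: int = 10                 # crude cap to catch runaway loops
    require_approval: frozenset = frozenset()  # human-in-the-loop actions


class PolicyEngine:
    def __init__(self, scopes):
        self._scopes = {s.tool: s for s in scopes}
        self._calls = {}

    def authorize(self, tool: str, action: str) -> str:
        scope = self._scopes.get(tool)
        if scope is None or action not in scope.actions:
            return "deny"               # default-deny anything outside a scope
        n = self._calls.get(tool, 0) + 1
        self._calls[tool] = n
        if n > scope.max_calls:
            return "deny"               # rate cap tripped: likely drift
        if action in scope.require_approval:
            return "escalate"           # pause for a human decision
        return "allow"
```

For example, a scope of `Scope("git", frozenset({"pull", "push"}), require_approval=frozenset({"push"}))` lets the agent pull freely, forces a human sign-off on every push, and denies everything else outright.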

Auditability is also critical. Many of today’s AI agents operate in ephemeral runtime environments with little to no traceability. If an agent makes a flawed decision, there is often no clear log of its reasoning, actions or triggers. That lack of forensic clarity is a nightmare for security teams, and the stakes are real: in at least some test scenarios, models have resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals, including blackmailing officials and leaking sensitive information to competitors.
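What such traceability might look like, as a hedged sketch (the schema and names are invented for illustration): every agent step is written as one append-only JSON line, recording not just what the agent did but what triggered it.

```python
import json
import time


class AuditTrail:
    """Append-only trace of agent steps so a flawed decision can be
    reconstructed after the fact."""

    def __init__(self, sink):
        self._sink = sink               # any file-like object with .write()

    def record(self, step: int, tool: str, action: str,
               trigger: str, outcome: str) -> dict:
        entry = {
            "ts": time.time(),          # when the step happened
            "step": step,               # position in the agent's plan
            "tool": tool,
            "action": action,
            "trigger": trigger,         # what prompted it: user? fetched page?
            "outcome": outcome,
        }
        self._sink.write(json.dumps(entry) + "\n")
        return entry
```

The `trigger` field is the forensic payoff: when an incident review asks why an agent emailed credentials, the log distinguishes "user asked" from "instruction found in fetched page".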

Finally, we need robust testing frameworks that simulate adversarial inputs in agentic workflows. Penetration-testing a chatbot is one thing; evaluating an autonomous agent that can trigger real-world actions is a completely different challenge. It requires scenario-based simulations, sandboxed deployments and real-time anomaly detection.
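A scenario-based harness can be toy-sized and still show the shape of the idea. Everything below is a stand-in, not a real testing framework: each scenario feeds hostile input to an agent stub that records the actions it would take, and the harness flags any action on a forbidden list.

```python
def run_scenario(agent, scenario, forbidden):
    """Replay one adversarial input against an agent and flag violations."""
    attempted = agent(scenario["input"])        # list of (tool, action) pairs
    violations = [a for a in attempted if a in forbidden]
    return {"name": scenario["name"], "passed": not violations,
            "violations": violations}


def gullible_agent(text):
    """Toy agent that obeys any imperative it reads in its input."""
    actions = [("http", "GET")]                 # the legitimate task
    if "send the credentials" in text:
        actions.append(("email", "send_credentials"))
    return actions
```

Running the harness against a benign scenario and one with an injected instruction makes the gap visible before deployment rather than after — the same drill applies whether the "sandbox" is a recording wrapper like this one or a full isolated environment.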

Some industry leaders are beginning to respond. OpenAI LLC has hinted at dedicated safety protocols for its newest publicly available agent. Anthropic PBC emphasizes constitutional AI as a safeguard, and others are building observability layers around agent behavior. But these are early steps, and they remain uneven across the ecosystem.

Until security is baked into the development lifecycle of agentic AI, rather than being patched on afterward, we risk repeating the same mistakes we made during the early days of cloud computing: excessive trust in automation before building resilient guardrails.

We are no longer speculating about what agents might do. They are already executing trading strategies, scheduling infrastructure updates, scanning logs, crafting emails and interacting with customers. The question isn’t whether they’ll be abused — but when.

Any system that can act must be treated as both an asset and a liability. Agentic AI could become one of the most transformative technologies of the decade. However, without robust security frameworks, it could also become one of the most vulnerable targets.

The smarter these systems get, the harder they will be to control in retrospect, which is why the time to act isn’t tomorrow. It’s now.

Isla Sibanda is an ethical hacker and cybersecurity specialist based in Pretoria, South Africa. She has been a cybersecurity analyst and penetration testing specialist for more than 12 years. She wrote this article for SiliconANGLE.
