
Volume of nonhuman transactions ballooning, but Gartner says many firms are faking it
The thing about a trend is, sometimes it’s over even before it’s begun. Two dueling headlines illustrate the point: the first, from a press release, declares that “AI Agents Are the New Insider Threat.” The second reads, “Gartner Predicts Over 40 Percent of Agentic AI Projects Will Be Canceled by End of 2027.”
Agentic AI is hot, but for how long is anyone’s guess. The tech world builds bubbles it can live inside of, but what it cooks up in there isn’t always a long-term success in the mundane everyday that grinds on outside.
For the best recent example, read up on NFTs. As is the case there, the underlying technology (decentralized ledgers, or blockchain) remains potent, but the language and context used to frame it have had their moment in the sun – and, at least in the form of Bored Apes, failed.
Which is to say, the LLMs and deep learning algorithms we call AI may be here to stay, but their final form as a societal staple is still very much in fluid evolution. To quote a recent interview with billionaire Silicon Valley AI investor Peter Thiel, it will probably be “more than a nothing burger,” but “less than the total transformation of our society.”
For now, at least, the threat looms. New research from BeyondID warns that “AI agents may be emerging as the next major insider threat to enterprise security.” Part of the hazard is in the gap between AI deployment and cybersecurity preparedness; a release notes that, “while 85 percent of organizations say they are ‘ready for AI in security,’ fewer than half monitor access or behavior for the AI systems they deploy.”
The issue is that AI agents increasingly operate like digital employees, with more (yes) agency. So while some organizations may be tackling the AI threat with deep learning detection systems, they aren’t yet attuned to an emerging truth: the would-be good AI they’re using to fight would-be bad AI could get out of line. AI agents don’t need to be malicious to be dangerous, BeyondID says; “left unchecked, they can become shadow users with far-reaching access and no accountability.”
“AI is no longer just a tool; it’s acting like a user,” says Arun Shrestha, CEO of BeyondID. “AI agents are logging in, accessing sensitive systems, and making decisions just like human employees, but most security teams are still treating them like static infrastructure. This disconnect is creating a massive security vulnerability that’s hiding in plain sight.”
The findings show that only 30 percent of organizations regularly map these agents to critical assets, and even fewer apply access controls or behavioral monitoring. In the specific context of healthcare, “the data shows worrisome risks, as the industry rapidly adopts AI for diagnostics, scheduling, and patient engagement.” Forty-two percent of healthcare companies failed an identity-related compliance audit, and only 23 percent of healthcare organizations offer passwordless authentication.
The problem is likely to get worse before it gets better. Analysis from VentureBeat says that “traditional identity access management (IAM) architectures can’t scale to secure the proliferation of agentic AI.” They were built to manage the identities of thousands of human users, not “millions of autonomous agents operating at machine speed with human-level permissions.”
The piece suggests that “proximity-based authentication” is augmenting hardware tokens, using technologies like Bluetooth Low Energy (BLE). It points to Cisco’s Duo as a product that demonstrates this innovation at scale: “their proximity verification delivers phishing-resistant authentication using BLE-based proximity in conjunction with biometric verification. This capability, unveiled at Cisco Live 2025, represents a fundamental shift in authentication architecture.”
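The core idea – grant access only when the registered device is physically nearby and its owner has passed a biometric check – can be sketched in a few lines. This is an illustrative toy, not Duo’s implementation; the RSSI threshold and field names are invented for the example, and real products calibrate proximity per device and environment.

```python
from dataclasses import dataclass

# Hypothetical threshold: a stronger (less negative) BLE signal means closer.
RSSI_NEAR_THRESHOLD_DBM = -60

@dataclass
class AuthSignal:
    ble_rssi_dbm: int        # measured BLE signal strength from the user's phone
    biometric_verified: bool  # result of an on-device biometric check

def proximity_auth(signal: AuthSignal) -> bool:
    """Grant access only when the device is near AND the biometric check passed."""
    near = signal.ble_rssi_dbm >= RSSI_NEAR_THRESHOLD_DBM
    return near and signal.biometric_verified

# Phone at arm's length with a passed face/fingerprint check: allowed.
print(proximity_auth(AuthSignal(ble_rssi_dbm=-45, biometric_verified=True)))   # True
# Phone across the building, even with a valid biometric: denied.
print(proximity_auth(AuthSignal(ble_rssi_dbm=-80, biometric_verified=True)))   # False
```

The point of combining the two factors is phishing resistance: a stolen credential or replayed biometric assertion fails unless the attacker also holds the enrolled device within radio range.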
Other firms have pursued different options for future-proofing their tech against a deluge of AI agents. Ping Identity’s DaVinci orchestration platform processes more than 1 billion authentication events daily, with AI agents accounting for 60 percent of the traffic. It says each verification is completed in under 200 milliseconds. CrowdStrike’s Falcon platform uses behavioral analytics to establish baselines for each agent within 24 hours; deviations trigger automated containment.
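Behavioral baselining of the kind CrowdStrike describes can be sketched simply: track each agent’s recent activity, and flag samples that deviate sharply from its own history. This is a toy statistical sketch under invented parameters (window size, deviation multiplier), not Falcon’s actual algorithm.

```python
import statistics
from collections import defaultdict, deque

class AgentBaseline:
    """Per-agent activity baselines; deviations beyond k standard deviations
    are flagged for containment. Illustrative only."""

    def __init__(self, window: int = 100, k: float = 3.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.k = k

    def observe(self, agent_id: str, requests_per_min: float) -> bool:
        """Record a sample; return True if it should trigger containment."""
        hist = self.history[agent_id]
        anomalous = False
        if len(hist) >= 10:  # require a minimal baseline before judging
            mean = statistics.fmean(hist)
            stdev = statistics.pstdev(hist) or 1e-9  # avoid zero-division on flat history
            anomalous = abs(requests_per_min - mean) > self.k * stdev
        hist.append(requests_per_min)
        return anomalous

baseline = AgentBaseline()
for _ in range(20):
    baseline.observe("report-agent", 50.0)      # normal cadence: ~50 calls/min
print(baseline.observe("report-agent", 500.0))  # sudden 10x burst -> True
```

In a real deployment the “containment” reaction would revoke the agent’s tokens or quarantine its session rather than just returning a flag.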
For Okta, “identity is security.” So says CEO Todd McKinnon. The digital identity company’s Advanced Server Access “implements redundancy, load balancing and automated failover across identity providers. When primary authentication fails, secondary systems activate within 50 milliseconds.”
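The failover pattern described – try the primary identity provider, and activate a secondary within a tight time budget when it fails – looks roughly like the sketch below. The function and provider names are invented for illustration; Okta’s product does this transparently, with health checks and load balancing rather than sequential retries.

```python
import time

def authenticate_with_failover(providers, credentials, timeout_s=0.05):
    """Try identity providers in priority order; fall back when one errors,
    returns nothing, or exceeds the time budget. Illustrative sketch only."""
    for provider in providers:
        start = time.monotonic()
        try:
            result = provider(credentials)
        except Exception:
            continue  # provider unreachable -> activate the next one
        if result is not None and time.monotonic() - start <= timeout_s:
            return result
    raise RuntimeError("all identity providers failed")

def primary_idp(creds):    # simulated outage
    raise ConnectionError("primary IdP unreachable")

def secondary_idp(creds):  # healthy backup
    return {"subject": creds["user"], "issuer": "secondary"}

token = authenticate_with_failover([primary_idp, secondary_idp], {"user": "svc-agent-7"})
print(token["issuer"])  # -> secondary
```

The 50-millisecond figure Okta cites corresponds to the `timeout_s` budget here: the caller never waits on a dead provider longer than the budget allows.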
Okta has also weighed in on customer identity and access management (CIAM) in its extensive Auth0 Customer Identity Trends Report 2025. CIAM is “in the crosshairs of threat actors, who regard the login box as a path to the information, privileges, and benefits reserved for account holders.”
That said, Okta sees not a reason to quail, but “a new opportunity for CIAM to deliver unique value – as the means to control and help secure the access afforded to AI agents. Customers need to know they can trust AI agents with their personal data – otherwise, the transformative potential of these agents won’t be realized.”
“At best, customers seem hesitant to embrace AI agents – an attitude largely shaped by concerns about privacy and a perceived lack of options if the agent does something wrong or unexpected,” the report says. “When introducing such functionality, it’s important to prioritize security from the start. Pay special attention to the IAM aspects, as AI agents use new and unfamiliar interaction and authentication patterns, and tell your users about how your AI agents are built with security in mind.”
Okta itself has a solution. A release announces the launch of Cross App Access, “a new protocol to secure AI agents, bringing visibility, control, and governance to both agent-driven and app-to-app interactions.”
The company says more AI tools are using protocols like Model Context Protocol (MCP) to connect their AI models to relevant data and apps. But these cross-app interactions happen with no oversight, creating a blind spot in enterprise security.
Cross App Access allows AI tools to request access to the internal communication app from Okta, which “evaluates the request against enterprise policies and determines whether the tool is authorized to access that specific user’s internal communication app data. If permitted, Okta issues a token to the AI tool, which it presents to the internal communication app for validation.”
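The flow Okta describes – request, policy evaluation, token issuance, validation at the target app – can be sketched as a minimal token broker. This is a toy model of the described flow, not the Cross App Access protocol itself; the policy store, function names, and app identifiers are all invented for illustration.

```python
import secrets

# Toy enterprise policy: which (tool, app, user) combinations are authorized.
POLICIES = {("ai-assistant", "chat-app", "alice"): True}
ISSUED_TOKENS = {}  # token -> (tool_id, target_app, user)

def request_cross_app_access(tool_id, target_app, user):
    """Tool asks the identity provider for access; the IdP evaluates policy
    and, if permitted, issues a scoped token."""
    if not POLICIES.get((tool_id, target_app, user), False):
        return None  # denied by enterprise policy
    token = secrets.token_urlsafe(16)
    ISSUED_TOKENS[token] = (tool_id, target_app, user)
    return token

def validate_at_app(token, app_id):
    """The target app presents the token back for validation, checking
    it was issued for this specific app."""
    claim = ISSUED_TOKENS.get(token)
    return claim is not None and claim[1] == app_id

tok = request_cross_app_access("ai-assistant", "chat-app", "alice")
print(validate_at_app(tok, "chat-app"))    # True: token scoped to this app
print(validate_at_app(tok, "files-app"))   # False: token is not transferable
```

The design point is that the AI tool never holds the user’s credentials for the target app; it holds only a token scoped to one app and one policy decision, which the identity layer can audit and revoke centrally.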
Arnab Bose, chief product officer for the Okta Platform, says that “while we’re actively working with the MCP and A2A communities to improve AI agents’ functionality, their increased access to data and the explosion of app-to-app connections will create new identity security challenges. With Cross App Access, Okta is excited to bring oversight and control to how agents interact across the enterprise.”
Cross App Access will be available to select Okta Platform customers in Q3 of this year.
For the time being, AI agents continue to grab headlines, gain users, and instill fear in the hearts of cybersecurity teams.
In StateTech Magazine, Pierre Mouallem, CISO of Delinea, says that “compromised nonhuman identities may pose a greater threat to government systems than hacked human identities,” and the company’s latest cybersecurity report “estimates the total number of operating nonhuman identities may exceed 45 billion by the end of 2025.”
A report from APIContext says “autonomous agents can orchestrate dozens of parallel API calls in seconds, adapt in real time, and lack the contextual intuition of human developers. This shift exposes gaps in API documentation, drift in specifications, and insufficient safety guardrails, all of which can lead to unpredictable failures, security risks, and degraded system reliability.”
The Harvard Business Review has an editorial arguing that “organizations aren’t ready for the risks of agentic AI,” and that it “introduces compounding risks that, if not managed, can create business and brand-defining disasters.”
Nonetheless, a post from Ory says “AI agents are becoming core to business operations,” and notes the launch of Skyfire’s Know Your Agent (KYA) identity framework tailored for AI – among a new wave of tools emerging to address the challenges of a world overrun with bots.
But we will give the last word to Gartner, which puts much of the chatter in perspective.
“Most agentic AI projects right now are early stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” says Anushree Verma, Senior Director Analyst, Gartner. “This can blind organizations to the real cost and complexity of deploying AI agents at scale, stalling projects from moving into production. They need to cut through the hype to make careful, strategic decisions about where and how they apply this emerging technology.”
Gartner estimates only about 130 of the thousands of agentic AI vendors are real, rather than “agent washers” rebranding non-agentic products to keep up with the hype.
“Most agentic AI propositions lack significant value or return on investment (ROI), as current models don’t have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time. Many use cases positioned as agentic today don’t require agentic implementations.”
Gartner still predicts at least 15 percent of day-to-day work decisions will be made autonomously through agentic AI by 2028, and 33 percent of enterprise software applications will include agentic AI by the same year.
But, “in this early stage, Gartner recommends agentic AI only be pursued where it delivers clear value or ROI.” Meaning that, for now, all the nuggets being thrown around about AI agents are best taken with a large grain of salt.

