Enterprises have lofty aspirations for AI agents, but the risks are high, too. To mitigate potential dangers, CIOs have retooled their governance playbooks.
“Traditional AI was static — you trained it, deployed it, monitored it,” Bryan McGowan, trusted AI leader at KPMG U.S., told CIO Dive. “But agentic AI systems can perceive, reason, plan and even act autonomously.”
For now, AI agents remain mostly in the IT, risk and operations functions, such as quality assurance and fraud prevention, KPMG found. Walmart, for example, is using AI agents in its technology department to identify accessibility gaps in code and to accelerate software development more broadly.
In addition to human-in-the-loop oversight and limiting access to sensitive data, most leaders are hedging risk by accessing the technology through trusted providers, a practice reported by 74% of respondents in KPMG’s Q3 survey.
Enterprises often pursue agentic AI adoption with help from existing partners. Athina Kanioura, EVP and chief strategy and transformation officer at PepsiCo, emphasized the importance of enterprises’ relationships with technology vendors in interviews with CIO Dive earlier this year.
“Organizations are bolstering their foundational controls like zero trust, immutable logging, chain-of-thought transparency and fail-safe protocols — not because the technology demands it, but because the stakes do,” McGowan said in an email.
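Immutable logging, one of the controls McGowan cites, generally means an append-only record of agent actions in which each entry is chained to the previous one by a cryptographic hash, so any retroactive edit becomes detectable. A minimal illustrative sketch in Python (the `AuditLog` class and its field names are hypothetical, not drawn from any specific vendor or framework):

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log: each entry embeds the hash of the previous entry,
    so altering any past record breaks the chain and fails verification."""

    def __init__(self):
        self.entries = []

    def append(self, agent_id, action):
        # Link this entry to the previous one via its hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "agent_id": agent_id,
            "action": action,
            "ts": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash the record's contents (the "hash" key is added afterward).
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Return True only if no entry has been altered since it was written."""
        prev_hash = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["hash"] != expected:
                return False
            prev_hash = e["hash"]
        return True


log = AuditLog()
log.append("agent-7", "queried customer database")
log.append("agent-7", "flagged transaction for review")
assert log.verify()            # chain is intact
log.entries[0]["action"] = "tampered"  # retroactive edit
assert not log.verify()        # tampering is detected
```

Production systems would typically delegate this to write-once storage or a managed audit service rather than an in-process structure, but the hash-chaining idea is the same.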
Analyst firm Gartner expects AI agents to weaken organizations’ cybersecurity posture, predicting that the technology will cut the time attackers need to exploit authentication channels by 50% over the next two years.