
Every year, I’m interested to hear where Gartner sees technology heading for CIOs and other IT leaders. Typically, the company’s IT Symposium/Xpo conference features a session on current trends, reflecting the state of the industry today, and another on predictions for the future.
Gartner Fellow Daryl Plummer set the tone for this year’s discussion by saying we’ve been living in “a world of shattered norms, where we can’t expect anything to be the same as it was last year or the year before.” Here are his predictions, along with my personal observations.
Plummer said employers will want to hire people who are good with AI, which means individuals “should be playing with AI all the time,” including prompting. “AI provides a productivity and creativity boost to us all, but only if we ask it the right questions in the right way,” he said.
However, he said, it’s also important to understand your business processes, as that makes you more effective when you use AI. “You don’t have to worry about losing your job to AI,” he said. “You have to worry about losing your job to someone who uses AI better than you do.”
That last statement rings true to me for most people. Still, I think the percentage of people who will need certifications is way too high.
Plummer said that we are losing critical thinking skills, noting that our kids do not remember things as well as we did at their age because they have become dependent on technology. But the same thing is happening to all of us. “When was the last time you had to read a map?” Or write in cursive? Or drive a manual-shift car?
Atrophy happens faster than you think, he said, and we have to decide which things to let go, and which ones to keep. Individuals with specialized skills will become increasingly rare and valuable.
I pretty much agree with this. And “AI-free” assessments will make sense for some roles in many organizations, but probably not most roles.
Plummer, and indeed many other Gartner analysts at the show, discussed “AI sovereignty” — the goals of different nations or groups of nations to compel organizations to localize solutions through regulations and other means. China, the US, and the European Union are the three primary centers of gravity, he said, with other countries striving to maintain their independence.
He noted that if China has manufacturing data that no one else has, it will have the best manufacturing data for AI models. Contextual data, he said, will be the lure used to lock people into or out of specific models, and both governments and “digital nation state platforms” are working on this issue.
Gartner suggests employing model distillation as a tactic to mitigate the impact of regional lock-in, as well as examining open-source models. It certainly looks like this is happening.
There was a lot of conversation about multi-agent systems at the show, and Plummer said that such systems, which understand context, will help win customers. He noted that customers dislike tedious service tasks, and improving them helps everyone. Many tasks will be delegated to AI agents, even if the customer doesn’t know it, and with such systems, you have lower effort but higher satisfaction and retention, he argued.
However, in talking to attendees, I found that most aren’t ready for multi-agent systems in general. But lots of organizations are working on or rolling out agents for customer service.
It’s easy to imagine agents handling procurement for business-to-business products, as they can communicate more efficiently with other agents. This will have broad impact, such as AI agent optimization supplanting traditional search engine optimization (SEO), along with new AI-native machine-to-machine (M2M) products and services. Therefore, Plummer said, we should design our processes for AI agents, not humans. He also introduced the concepts of agent intermediation and agent exchanges, although he noted that there was disagreement within Gartner about when agent exchanges would emerge.
I think this comes down to how you count a purchase. I expect there will be agents — even if they are just what used to be called robotic process automation (RPA) — involved in many buying decisions.
“I don’t want AI acting as a therapist, but it is,” Plummer said. He noted that there weren’t enough guardrails in place, so we all need to be careful.
He noted that “black box agentic AI” risks going astray, so he suggested we need to balance information with safety, including proper guardrails against bad behavior. One way to do this is to install “guardian agents” that oversee other agents. It’s also important to have good quality data, explainable AI, and ethical deployment, along with other “quality assurance plans for the agents.” Unfortunately, I agree with this prediction.
Plummer described a situation where AI capabilities are embedded directly into money, so that if you pay someone and they don’t meet the terms and conditions, the money will automatically be returned to you. In this situation, value exchanges will evolve according to the business context, and money will be more akin to in-world video game gear than cash.
Count me as skeptical. I’ve been hearing about smart contracts for years — see Gartner’s predictions for 2020 — and they still aren’t mainstream. They will grow, but I don’t see this happening nearly as much by 2030.
The idea here is that business-to-business services are based on labor arbitrage — how much it costs to hire the people who work on the project versus how much value the organization gets out of it. AI agents will change all of this, so service companies will be able to charge less but also keep a larger share of the profits, Plummer said.
He noted that AI agents will be able to discover tacit knowledge from within organizations, and this will lead to new value. He expects to see continuous innovation-based pricing not limited by labor costs. He suggests that organizations hiring service companies be cautious in protecting their internal knowledge with non-disclosure agreements and similar measures.
I think this is possible, but probably not as quickly as Gartner is predicting. It just takes longer for organizations to adapt.
Plummer said that every jurisdiction is looking at AI regulation as part of the push toward AI sovereignty. Over 1,000 AI laws were proposed last year, and it’s nearly impossible to keep up, especially since, he said, no two governing bodies have a consistent definition of AI. He noted that AI governance can become either an enabler or a barrier.
“I believe governments will start taxing the use of AI because they need to limit how much capacity they’re using and what people are doing with it,” he said. Ultimately, he said, businesses will find ways of making money through different government mechanisms ostensibly meant to protect us. AI literacy is crucial for understanding this, and to stay safe, he recommends creating a “mind map” of potential laws and regulations.
Increased regulation is almost a given, and companies will certainly have to invest in compliance. My guess is that taxing AI specifically will take longer.
At both the keynote and in this session, Plummer said he wanted a PowerPoint assistant that would create his slides based on his describing what he wants. But he doesn’t have that.
“Why the heck are we doing that to ourselves?” he asked, saying we need to reinvent the experience where AI knows what you’re doing, knows the context, knows who you are talking to, what files you use, and what information you need.
“Gen AI and AI agent use is going to create the first true challenge to mainstream productivity tools in 30 years,” he said. Microsoft is the company he wants to make the change, but they will have to “step up” and give us a Gen AI experience in productivity. “They need to be the ones to set the new dominant design that we all want to follow,” with new UI, plug-ins, document types, and formats, he said. “If there are new vendors, it’s because the ones that should have done it didn’t do it. I’m willing to bet they’re smarter than I am, and they can figure this out.”
Should an agent be able to build PowerPoint slides in a smart way? Do I want Microsoft to make Copilot in the Office (M365) suite better? Of course. I agree that others will try to reinvent the basic productivity apps to make better use of AI, and if Microsoft doesn’t, there will be disruption. Copilot has a long way to go, but I’m expecting Microsoft will continue to make it better.
Read more on PC Mag Middle East

