
Let’s get one thing out of the way: deepfake regulation is table stakes now. The government’s proposed draft rules on labeling AI-generated content? Smart. Necessary. Yawn.
The TAKE IT DOWN Act. The NO FAKES Act. EU AI Act transparency requirements. Denmark is treating your face as intellectual property. China mandates labels. These aren’t trivial; they’re foundational. But here’s what nobody’s talking about: while legislators are frantically slapping warning labels on synthetic videos, they’re utterly unprepared for what’s already here.
Agentic AI is coming. And our regulatory frameworks have no idea what to do with it.
You want to know what should top the regulatory list? Autonomous AI systems that can act independently, make financial decisions, negotiate contracts, and execute transactions without meaningful human intervention. Not next year. Now.
The difference is existential. A deepfake is a representation problem: someone’s face on someone else’s body. It’s deceptive and harmful, but ultimately passive content. An agentic AI system is an agency problem: a digital actor that can operate your crypto wallet, execute trades, deploy capital, and negotiate binding agreements. When it fails or goes rogue, there’s no kill switch. There’s no undo.
Consider the scenario that keeps AI safety researchers up at night: give an AI agent access to a cryptocurrency wallet and a single instruction, “grow the portfolio.” In traditional banking, transactions can be frozen or reversed. Crypto runs on immutable smart contracts: once an AI deploys a fraudulent contract or initiates a harmful transaction, nobody can stop it. Not the government. Not the platform. Nobody.
This isn’t speculation. These systems exist today, operating within limited parameters but operating nonetheless. Firms are deploying agentic AI to manage workflows, execute transactions, and make decisions at scale. And the regulatory gap? Massive.
Here’s the hard truth: when an agentic AI system causes harm, and it will, who’s responsible? The developer? The company deploying it? The person who prompted it? Our legal system defaults to human agency. We don’t have adequate frameworks for non-human agents making material decisions.
Traditional AI regulation focuses on bias detection and transparency, which are necessary for hiring systems and medical diagnostics. But those don’t capture the core risk of autonomous systems: uncontrolled action. You can label a deepfake and mitigate its harm. You cannot label away a rogue transaction or an AI negotiating a binding contract against your interests. What would regulation built for agency rather than content look like? Four things.
First: Mandatory kill switches. Any autonomous AI system operating in financial, healthcare, or infrastructure domains needs hardware-level, irrevocable human override capabilities. Not soft limits. Not “pause” features. Hard stops that the system itself cannot bypass.
Second: Accountability frameworks for autonomous action. We need clear legal liability structures that define who bears responsibility when an AI agent acts outside its defined parameters or causes unintended harm. This isn’t about blame; it’s about incentive alignment. If companies don’t face meaningful consequences for deploying unsafe autonomous systems, they won’t prioritize safety.
Third: Real-time monitoring and explainability requirements in high-risk domains. Financial services, healthcare, infrastructure, and employment decisions aren’t the place for black-box AI. We don’t need to understand every parameter, but we need to know why a system took an action and what it can do before it does it.
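“Know why a system took an action, before it does it” has a simple software shape. A hypothetical sketch (all names invented): a decorator that forces every agent capability to record its declared capability, the action, and a stated rationale in an append-only log before the action runs.

```python
import time
from functools import wraps

AUDIT_LOG = []  # in practice: an append-only store auditors can inspect

def audited(capability: str):
    """Log what the agent is about to do, and why, *before* it acts."""
    def wrap(fn):
        @wraps(fn)
        def inner(*args, rationale: str, **kwargs):
            AUDIT_LOG.append({
                "ts": time.time(),
                "capability": capability,
                "action": fn.__name__,
                "rationale": rationale,  # the system's stated reason
                "args": repr(args),
            })
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("payments")
def transfer(amount: float, to: str) -> str:
    return f"sent {amount} to {to}"
```

Because `rationale` is a required keyword argument, an action with no stated reason simply cannot run, which is the regulatory point.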
Fourth: Human-in-the-loop requirements for material decisions. Autonomous doesn’t mean unsupervised. There’s a difference between an AI drafting a contract and an AI signing one. Between AI suggesting a treatment and AI administering one. Certain types of decisions still require human judgment, review, and sign-off.
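The draft-versus-sign distinction can also be made concrete. In this hypothetical sketch, the agent may prepare a contract, but the only code path to execution runs through a recorded human approval.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contract:
    terms: str
    approved_by: Optional[str] = None  # set only via the human review step
    signed: bool = False

def agent_draft(terms: str) -> Contract:
    """The autonomous agent is allowed to go this far: drafting."""
    return Contract(terms=terms)

def human_approve(contract: Contract, reviewer: str) -> Contract:
    """The material decision: a recorded human sign-off."""
    contract.approved_by = reviewer
    return contract

def sign(contract: Contract) -> Contract:
    """Execution refuses to proceed without human approval."""
    if contract.approved_by is None:
        raise PermissionError("material decision requires human sign-off")
    contract.signed = True
    return contract
```

Drafting is cheap and automatable; signing carries consequences, so it is gated, and the gate records who opened it.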
The deepfake rules get all the attention because they’re easy to understand and generate cultural anxiety. Your face is stolen. Nonconsensual imagery. Clear villain. Clear victim. People get it.
But here’s the game: regulators tackle the obvious problems first because they’re politically expedient. They generate headlines. They let lawmakers claim they “did something” about AI harms. Meanwhile, the infrastructure for genuine systemic risk is being built in the background by companies with better resources and faster deployment timelines than government oversight.
The real move is this: deepfake regulation is the warm-up act. You regulate labeling and consent. Good. Necessary baseline. But it’s not where the leverage is.
The leverage is in autonomous AI systems that make decisions, execute transactions, and operate in environments where there’s no human veto waiting in the wings. The leverage is in accountability structures that force builders and deployers to internalize the costs of failure rather than externalizing them.
If regulators want to get ahead of this, really ahead, they need to shift focus now. Build the frameworks for autonomous AI accountability before those systems become embedded in critical infrastructure. Establish kill-switch requirements before they become impossible to retrofit. Define liability before courts create a patchwork of contradictory rulings.
Because, unlike deepfakes, which cause social harm, agentic AI failures pose systemic risk. One rogue autonomous system in the financial sector could cascade. One miscalibrated AI making autonomous hiring decisions could entrench discrimination at scale. One AI agent with access to critical infrastructure could cause physical harm.
The government is getting deepfakes right, or at least moving in the right direction. But it’s solving yesterday’s problem while tomorrow’s is being shipped to production.
That’s what should be next on the regulatory list: not better labels on synthetic media, but real guardrails on real agency.
The question is whether regulators will move fast enough to get ahead of it.

