
Vitalik Buterin, co-founder of Ethereum, has proposed a possible solution to one of the biggest problems decentralized autonomous organizations (DAOs) face: too few people paying attention to governance.
In a recent X post, Buterin said that DAOs commonly suffer from low voter turnout (roughly 15-25%), which concentrates power in a few delegates and leaves these organizations vulnerable to governance attacks.
In a typical DAO, token holders vote on proposals covering everything from treasury spending to protocol changes, but most are overwhelmed by the sheer number of decisions spanning many areas of expertise. Traditional fixes such as delegation concentrate power in a small group, leaving the broader community with less say. Buterin argues that personal AI assistants built on large language models (LLMs) could help token holders make better decisions without constant involvement.
Buterin suggests using separate AI agents trained on a user's own data, such as their past writing, conversation history, and explicit statements of preference. These agents would vote automatically in line with the user's values.
“If a governance mechanism depends on you to make a lot of decisions, a personal agent can do all the necessary voting for you based on preferences that it infers from your writing, conversation history, and direct statements,” Buterin said. When the AI is unsure about a vital decision, it would ask the user directly, providing the context they need to confirm the choice.
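As a toy illustration of this flow (this is not Buterin's design: the class, the keyword scoring, and the confidence threshold are all assumptions standing in for a real LLM-based preference model), an agent might vote when its inferred preference signal is strong and defer to the user otherwise:

```python
from dataclasses import dataclass

@dataclass
class VotingAgent:
    """Hypothetical personal voting agent.

    A real agent would query an LLM over the user's writing and
    conversation history; here a simple keyword-weight lookup stands
    in for the inferred preference model.
    """
    stated_preferences: dict  # keyword -> weight (+ favors, - opposes)
    confidence_threshold: float = 0.5  # below this, escalate to the user

    def score(self, proposal_text: str) -> float:
        # Sum the weights of preference keywords found in the proposal.
        words = proposal_text.lower().split()
        return sum(w for kw, w in self.stated_preferences.items() if kw in words)

    def decide(self, proposal_text: str) -> str:
        s = self.score(proposal_text)
        if abs(s) < self.confidence_threshold:
            return "ask_user"  # weak signal: defer to the human directly
        return "yes" if s > 0 else "no"

agent = VotingAgent({"grants": 0.8, "inflation": -0.9})
print(agent.decide("fund community grants program"))  # yes
print(agent.decide("increase token inflation rate"))  # no
print(agent.decide("rename the forum"))               # ask_user
```

The key design point mirrored here is the fallback path: instead of guessing on low-confidence proposals, the agent returns control to its owner.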
Handling this much personal information raises serious privacy concerns. Buterin stresses that users should run their personal LLM inside a secure "black box" environment, so that the model outputs only its final judgment while the underlying data stays private.
“All of these approaches involve each participant making use of much more information about themselves and possibly submitting much larger inputs,” he said. “This makes it even more important to protect privacy.”
Zero-knowledge proofs, secure multi-party computation (MPC), and trusted execution environments (TEEs) are among the techniques that could preserve decentralization while preventing coercion, bribery, and data leaks.
Buterin's proposal goes beyond personal agents. He also describes public conversation agents that summarize debates, AI-enhanced prediction markets that help forecast outcomes, and privacy-preserving computation for decisions involving sensitive information.
These tools are meant to scale governance without sacrificing its decentralized nature. By easing the attention bottleneck, AI could boost participation, reduce centralization risk, and make DAOs more durable and effective.
Debate continues over how DAOs should evolve, given their history of low engagement and past governance exploits. If put into practice, AI-assisted frameworks could be a major step toward truly inclusive decentralized decision-making.

