Ethereum Foundation AI lead Davide Crapis and Ethereum co-founder Vitalik Buterin have outlined a proposal to use zero-knowledge proofs and related cryptographic techniques to make interactions with large language models private, while still preventing spam and abuse.
In a blog post published Wednesday, the pair addressed the privacy, security and efficiency challenges surrounding API calls — the requests made each time a user sends a message to an application such as an AI chatbot.
They argued for a system that would allow users to fund an account once and then make thousands of API calls anonymously and securely.
“We need a system where a user can deposit funds once and make thousands of API calls anonymously, securely, and efficiently,” they wrote.
At the same time, the proposal aims to protect service providers by ensuring guaranteed payment and safeguards against spam, while preventing user requests from being linked to their identity or correlated with one another.
“The provider must be guaranteed payment and protection against spam, while the user must be guaranteed that their requests cannot be linked to their identity or to each other,” they added.

As the use of AI chatbots continues to surge, concerns over data leaks from large language models (LLMs) have intensified. These systems frequently process highly sensitive information, and linking user activity to real-world identities can pose significant privacy, legal and security risks. In some cases, usage logs may even be subpoenaed and used in court proceedings.
Crapis and Buterin’s proposed solution
Crapis and Buterin argue that AI service providers are currently stuck between two “suboptimal” options: identity-based access, which requires users to share personal information such as email addresses or credit card details — increasing privacy risks — or per-request on-chain payments, which can be slow, expensive and publicly traceable.
To address this, they propose a system in which users deposit funds into a smart contract and then make API calls without revealing their identity or linking individual requests. The framework would rely on zero-knowledge proofs and rate-limit nullifiers to enable secure payments while preventing spam.
Under the model, a user could deposit 100 USDC into a smart contract and make 500 queries to a hosted LLM. The provider would receive 500 verified, paid requests, but would be unable to connect them to the same user — or to each other — while the user’s prompts would remain unlinkable to their identity, Crapis and Buterin explained.
“The model enforces solvency by requiring the user to prove that their cumulative spending — represented by their current ticket index — remains strictly within the bounds of their initial deposit and their verified refund history,” they wrote.
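The solvency invariant described in that quote can be sketched in a few lines. This is an illustrative toy only: in the actual proposal the user proves this bound in zero knowledge, so the provider never sees the raw numbers. All names (`deposit`, `price_per_call`, `ticket_index`, `verified_refunds`) and the assumed per-call price are our own labels for the quantities described in the post, not the authors' API.

```python
def is_solvent(deposit: int, price_per_call: int,
               ticket_index: int, verified_refunds: int) -> bool:
    """Check the invariant the zero-knowledge proof would attest to:
    cumulative spending (current ticket index times the per-call price)
    must stay within the initial deposit plus verified refund history."""
    cumulative_spending = ticket_index * price_per_call
    return cumulative_spending <= deposit + verified_refunds

# The article's example: a 100 USDC deposit covering 500 queries,
# which implies an assumed price of 0.2 USDC per call (6-decimal units).
deposit = 100_000_000   # 100 USDC
price = 200_000         # 0.2 USDC per call (assumed)
assert is_solvent(deposit, price, ticket_index=500, verified_refunds=0)
assert not is_solvent(deposit, price, ticket_index=501, verified_refunds=0)
```

The point of the ticket index is that it is the only spending signal the provider needs: the proof shows the index is within bounds without revealing which deposit it draws from.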
Violations could result in deposit slashing
To discourage fraud, illegal content generation, jailbreaking attempts and other terms-of-service breaches, Crapis and Buterin propose a dual-staking mechanism.
Under the system, users who attempt to double-spend could have their entire deposit claimed by anyone — including the service provider. In cases where users violate a platform’s usage policies, their deposit would instead be sent to a burn address, with the slashing event permanently recorded on-chain.
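The double-spend case above hinges on nullifier reuse being publicly detectable. The toy sketch below illustrates the idea behind a rate-limit-nullifier-style check; in the real scheme the nullifier would be derived inside a zero-knowledge circuit, and here a plain hash of a user secret and ticket index stands in for it. All names are illustrative, not drawn from the Crapis–Buterin post.

```python
import hashlib

def nullifier(user_secret: bytes, ticket_index: int) -> str:
    """Deterministic tag for one ticket: the same (secret, index) pair
    always yields the same value, so reuse is visible, but the value
    alone reveals neither the secret nor the index."""
    return hashlib.sha256(
        user_secret + ticket_index.to_bytes(8, "big")
    ).hexdigest()

class NullifierRegistry:
    """Set of nullifiers already spent, as a provider might track them."""
    def __init__(self) -> None:
        self.seen: set[str] = set()

    def submit(self, n: str) -> bool:
        """Accept a fresh nullifier; reject a reused one. A rejection is
        the double-spend evidence that, under the proposed rule, would
        let anyone claim the offender's deposit."""
        if n in self.seen:
            return False
        self.seen.add(n)
        return True

reg = NullifierRegistry()
secret = b"user-secret"
assert reg.submit(nullifier(secret, 1))      # first use of ticket 1: ok
assert not reg.submit(nullifier(secret, 1))  # reuse: flagged for slashing
assert reg.submit(nullifier(secret, 2))      # next ticket: ok
```

Because every submitted request carries a distinct nullifier, the provider can reject double-spends without ever learning which requests came from the same depositor.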
“For example, a user might submit a prompt asking the model to generate instructions for building a weapon or to help them bypass security controls — requests that would violate many providers’ usage policies,” Crapis and Buterin wrote.
“While the user’s identity remains hidden, the community can audit the rate at which the Server burns stakes and the posted evidence for these burns.”