At the Web3 Summit 2025 in Berlin on July 17, Polkadot founder Gavin Wood outlined his vision and roadmap for implementing a decentralized identity system aimed at tackling the growing challenges of digital identity in the age of artificial intelligence.
Speaking during his closing keynote on day one of the summit, Wood introduced Proof of Personhood (PoP)—a personalized, on-chain solution designed to enable decentralized human verification. The initiative is part of Polkadot’s broader Individuality framework and will roll out through two key components: DIM1 (Proof of Individuality) and DIM2 (Proof of Verified Individuality).
While Wood did not confirm an official launch date, he revealed that PoP would be supported by a $3 million treasury proposal. The launch will also feature what he described as “the fairest airdrop ever,” signaling a major incentive push to onboard real human users to the network.
Wood emphasized that PoP is a critical web3 primitive designed to bolster Sybil resistance and reduce network security costs by ensuring that only verified humans can interact with certain parts of the protocol. In an era where AI-generated content and bot activity are becoming harder to distinguish from real users, Wood sees PoP as a much-needed evolution beyond outdated identity tools like CAPTCHAs, SMS verification, and KYC processes.
The initiative marks a significant step in rethinking digital identity, with Polkadot positioning itself at the forefront of decentralized and AI-resilient verification systems.
Trust and Decentralized Identity in an AI-Driven World
At the Web3 Summit panel titled simply “Trust,” Ian Grigg, the financial cryptographer who invented Ricardian Contracts, emphasized that true trust extends far beyond technological assurances.
Grigg argued that trust is fundamentally human—rooted in emotion, uncertainty, and context—and cannot be fully replicated by machines. While technology can facilitate secure interactions, it cannot emulate the emotional and relational aspects that define genuine trust.
He warned against the idea of “automating trust” through artificial intelligence, stating that such attempts often lead to systems that are fragile and insecure. According to Grigg, embedding trust into AI misses the essence of what trust truly is and risks undermining its integrity.
Grigg also highlighted the intrinsic link between trust and identity. To trust someone or something, he explained, we must first clearly understand who or what we’re placing that trust in. This recognition must be grounded in human understanding—not merely in code or protocol.