
Generative AI redefines trust; blockchain records reality
– Paul Hsu, Founder and CEO, and Justin Patel, Venture Investor at Decasonic
AI is redefining truth. Truth is no longer black and white. Generative models now produce text, images, audio, and video that often feel indistinguishable from what we once believed to be “real.” At the same time, blockchains have matured into credible infrastructure for recording provenance, coordinating rights, validating identities, and settling value between counterparties. Together, these technologies are forcing a reset in how we think about trust on the internet, how licensing and ownership will evolve, and where, as investors, durable value will be created.
We all feel this shift in technology. It’s the split-second hesitation before deciding whether a viral video is actually real. It’s the “CEO” of a company calling a finance director with a request for a wire transfer, in a voice indistinguishable from the real one. We are moving from the “seeing is believing” era to one where “verifying is surviving.” This is becoming the default state of the consumer internet. When your eyes and ears can be deceived by a single sentence prompt, trust ceases to be a feeling and becomes a necessary utility, one that we must build into the web itself.
For years, the dominant narrative has framed this world as “deepfake versus real,” as if trust could be reduced to a single switch. In practice, that framing breaks down almost immediately. As investors and operators, we’ve seen situations where content looked real, sounded real, and even behaved as if it were real, yet still failed under scrutiny. The real problem is understanding where something comes from, how it has been transformed, and whether it can be relied on in high-stakes decisions. None of these questions are binary, and in a world saturated with AI-generated content, the old binary collapses. What we need instead is a spectrum of trust that can become shared infrastructure for platforms, creators, regulators, AI agents, and investors.
The numbers confirm that the old binary has already been broken. Deepfake fraud incidents in North America surged over 1,700% between 2022 and 2023. By 2027, the losses from generative AI-driven fraud are projected to hit $40 billion a year. We are currently seeing a flood of synthetic content across all platforms (projected to reach millions of deepfake files shared annually by 2025). This exponential rise in synthetic volume doesn’t just create noise; it destroys the signal for businesses, insurers, and markets, creating a massive opening for builders looking to restore it.
A more productive way to anchor this spectrum is along a primary axis of origin: AI-composed to reality-captured. This distinction matters because most failures of trust we see today don’t come from obvious fabrications, but from ambiguity around where something came from and how far it has drifted from its source. On one end, AI-composed content is created when models assemble words, pixels, or sounds from learned patterns rather than from a specific real-world event. On the other end, reality-captured content originates from cameras, microphones, sensors, system logs, physical supply chains, or human eyewitness accounts at a particular point in time and space. Most content in the future will sit somewhere between these poles. Even what we currently call “raw” will increasingly be adjusted, enhanced, summarized, or translated by models. The spectrum of trust starts with recognizing this gradient rather than pretending that everything must neatly fit into “deepfake” or “real.”
The AI-composed to reality-captured axis describes origin, not morality. This distinction is often missed. We frequently see debates collapse into whether AI-generated content is “good” or “bad,” when the more important question is whether it faithfully represents reality, respects rights, and can be verified. Two pieces of AI-composed content can sit at opposite ends of the trust spectrum: one might be a malicious deepfake designed to impersonate a public figure, while another is a clearly labeled fictional scene in an entertainment product. Similarly, two reality-captured clips can differ meaningfully in trustworthiness depending on whether their metadata and provenance are intact, whether they have been selectively edited to mislead, and who is presenting them. To make the spectrum useful, we need additional layers.
Once these layers are applied, the spectrum becomes much richer. At the far AI-composed end, we find counterfeit fabrications and unlicensed clones: the classic “deepfake” style media that is AI-composed, counterfeit in intent, often unlicensed in its source material, and unverified in provenance. Moving inward, we encounter synthetic simulations that are grounded in real data but do not correspond to specific events, or reconstructions that recreate past moments from transcripts and partial records. In the middle, we see AI-assisted hybrids: summaries, translations, stylistic rewrites, accessibility transformations, and restorative enhancements where reality-captured inputs are transformed by models but retain their underlying truth. At the far reality-captured end, we find provenance-verified, rights-cleared media and materials whose origin in the physical world can be proven and whose edits or transformations are documented.
In this framing, “deepfake” is no longer the spectrum; it is a particular corner of it. It describes content that is AI-composed, counterfeit in its claims, often unlicensed in its use of likenesses or brands, and unverified in origin. “Real” also stops being synonymous with “I saw it on video.” It becomes a stricter concept: reality-captured, verified, authentic, and appropriately licensed, often tied back to human relationships grounded in the physical world. Those human relationships (who you have worked with, who you have seen deliver in reality, who you would trust with capital or your reputation) will matter even more as the internet becomes more synthetic.
To make this framework operational, we break the spectrum into twenty reference points. These are not rigid categories, but practical markers we use to evaluate risk, opportunity, and monetization. They provide a shared language for builders, regulators, and investors navigating an AI-native internet.
For investors and operators, the value of this spectrum is that it turns a noisy conversation about “deepfakes” into a concrete underwriting tool. Different segments will monetize in different ways, and where a company sits on the spectrum shapes the questions we ask when we evaluate it. That turns a philosophical discussion about “truth” into a concrete thesis about revenue, margins, and market structure.
Against this backdrop, Web3 becomes structurally important. Blockchains are not truth oracles; they cannot independently confirm that an event happened in the physical world. What they do provide is a shared, append-only ledger that multiple parties can write to and verify against without trusting one another. This is exactly what is needed to harden the verification, licensing, identity, and transaction layers of the spectrum.
On verification, blockchains can anchor provenance for both digital content and real-world materials. Capture devices, industrial sensors, and even supply-chain scanners can sign outputs at the point of creation and anchor hashes on-chain. Editors, platforms, and AI systems that later transform the content or the materials can add their own signatures and references, producing an evolving chain of attestations. Over time, this creates an open provenance graph linking media, datasets, and physical goods back to their origins and documenting each transformation: who touched it, what was done, and under what license. For investors, this is a substrate for entirely new trust-driven markets in both digital and physical goods.
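The chain of attestations described above can be sketched in a few lines. This is a hypothetical illustration, not any specific protocol: each actor that touches a piece of content appends a signed record referencing the previous one, and only the head hash would be anchored on-chain. Real systems would use asymmetric signatures (e.g. Ed25519); HMAC over per-actor secrets stands in for them here.

```python
# Hypothetical provenance chain: every capture or transformation appends a
# signed attestation that links back to the previous record.
import hashlib
import hmac
import json


def attest(prev_hash: str, actor: str, action: str, content: bytes, key: bytes) -> dict:
    """Build one link in the provenance chain for a state of `content`."""
    record = {
        "prev": prev_hash,                              # link to prior attestation
        "actor": actor,                                 # who touched the content
        "action": action,                               # what was done
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record


def chain_head(chain: list[dict]) -> str:
    """Hash of the latest record: the value a device or platform would anchor on-chain."""
    return hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()


# A camera captures a frame, then an editor crops it; each step is attested.
camera_key, editor_key = b"camera-secret", b"editor-secret"
chain = [attest("GENESIS", "camera-01", "capture", b"raw sensor frame", camera_key)]
chain.append(attest(chain_head(chain), "editor-7", "crop", b"cropped frame", editor_key))
```

Any verifier who holds the chain can recompute each link and confirm that the edit history is intact and signed, without trusting the platform that hosts the file.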
On licensing, blockchains are the substrate for the next generation of rights. Ownership and usage rights become on-chain primitives that can be queried directly by AI agents and applications. Licenses can evolve from static legal text into composable, machine-readable contracts: specifying whether a work or dataset can be used to train models, whether derivatives are allowed, how revenue should be split between upstream and downstream contributors, and what happens when rights are revoked. As norms around “dupes,” remixes, and “inspired by” content evolve, these licensing structures can flex: distinguishing between acceptable homage and economically meaningful copying in a way that both markets and courts can understand.
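A machine-readable license of the kind described above might look like the following sketch. The field names are illustrative assumptions, not drawn from any existing on-chain standard; the point is that an AI agent can answer "may I use this?" with a query rather than a legal review.

```python
# Hypothetical shape of a composable, machine-readable license.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class License:
    training_allowed: bool            # may the work be used to train models?
    derivatives_allowed: bool         # may derivative works be generated?
    revenue_split: dict = field(default_factory=dict)  # recipient -> share of downstream revenue
    revoked: bool = False             # rights holders can revoke going forward


def can_use(lic: License, purpose: str) -> bool:
    """The query an agent would make before touching the work."""
    if lic.revoked:
        return False
    if purpose == "train":
        return lic.training_allowed
    if purpose == "derive":
        return lic.derivatives_allowed
    return False


lic = License(training_allowed=True, derivatives_allowed=False,
              revenue_split={"creator": 0.7, "platform": 0.3})
```

Because the revenue split travels with the license, downstream systems can route value mechanically as works are trained on or remixed.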
Identity and reputation form the third pillar. Blockchain-validated identities can represent humans, organizations, and AI agents. Humans can anchor their professional histories, contributions, and attestations on-chain. AI agents can hold keys, sign their outputs, and transact autonomously. Over time, reputational context accrues to these identities: which licenses they respect, which transactions they honor, which claims they sign that later prove accurate or fraudulent. This becomes a behavioral layer on top of static provenance, giving platforms and counterparties a reason to trust some agents and discount others.
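The behavioral layer sketched above can be made concrete with a toy reputation ledger: an identity accrues trust as its signed claims are later validated, and loses it as they are disputed. The scoring rule here is an illustrative assumption, not a production reputation system.

```python
# Hypothetical reputation ledger over identity claim outcomes.
from collections import defaultdict


class ReputationLedger:
    def __init__(self):
        self.outcomes = defaultdict(lambda: {"validated": 0, "disputed": 0})

    def record(self, identity: str, validated: bool) -> None:
        key = "validated" if validated else "disputed"
        self.outcomes[identity][key] += 1

    def score(self, identity: str) -> float:
        """Laplace-smoothed fraction of claims that held up under scrutiny."""
        o = self.outcomes[identity]
        return (o["validated"] + 1) / (o["validated"] + o["disputed"] + 2)


ledger = ReputationLedger()
for _ in range(9):
    ledger.record("agent-a", validated=True)   # nine claims later proved accurate
ledger.record("agent-a", validated=False)      # one disputed claim
ledger.record("agent-b", validated=False)      # an agent with only a disputed claim
```

The smoothing means a brand-new identity starts near 0.5 rather than at an extreme, so a single early outcome cannot make or destroy a reputation.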
Finally, blockchains validate trust for transactions themselves. Monetary and contractual flows between humans, agents, and hybrid arrangements can be recorded, enforced, and settled on-chain. A human may delegate a budget and a set of constraints to an AI agent; that agent may negotiate with other agents, trigger payments, and update on-chain state as it executes. Each step leaves a cryptographic trail. In this environment, content on the spectrum of trust is not just information; it is tied directly into economic commitments. Who you choose to transact with, and on what terms, becomes a function of where they and their outputs sit on the spectrum and how they have behaved historically.
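The delegation pattern above (a human grants an agent a budget and constraints, and every action leaves a trail) can be sketched as follows. The class, constraint set, and hash-linked trail are illustrative assumptions standing in for on-chain state.

```python
# Hypothetical delegated wallet: an agent may only pay allow-listed parties
# within a budget, and every settlement is appended to a hash-linked trail.
import hashlib
import json


class DelegatedWallet:
    def __init__(self, budget: float, allowed_payees: set[str]):
        self.budget = budget
        self.allowed = allowed_payees
        self.trail = []               # hash-linked record of each settlement
        self._prev = "GENESIS"

    def pay(self, payee: str, amount: float) -> bool:
        """Settle a payment only if it respects the delegated constraints."""
        if payee not in self.allowed or amount > self.budget:
            return False
        self.budget -= amount
        entry = {"prev": self._prev, "payee": payee, "amount": amount}
        self._prev = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.trail.append(entry)
        return True


wallet = DelegatedWallet(budget=100.0, allowed_payees={"data-coop", "compute-vendor"})
wallet.pay("data-coop", 40.0)        # allowed and within budget: settles
wallet.pay("unknown-party", 10.0)    # rejected: not on the allow-list
wallet.pay("compute-vendor", 90.0)   # rejected: exceeds the remaining budget
```

The trail is what turns a delegation into an auditable economic commitment: anyone can replay it to confirm the agent never stepped outside its mandate.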
Through all of this, human relationships grounded in reality remain the anchor. No matter how sophisticated AI agents or blockchain infrastructure become, capital allocators and founders still form trust primarily through repeated interactions, execution in the real world, and shared experience. Those human relationships generate the high-value, reality-captured data and reputations that the rest of the system composes from. Web3 and AI merely make more of that trust machine-readable, composable, and economically aligned.
Not every AI plus blockchain idea will matter. Durable adoption will emerge where AI is creating real pain or risk, where blockchain-based infrastructure is uniquely suited to mitigate that risk, and where economic incentives exist for participation. Viewed through the spectrum of trust, several zones stand out.
First, content provenance rails for AI-native media will become necessary infrastructure. As AI-composed content saturates feeds, collaboration tools, and marketplaces, platforms will need reliable ways to label content along the spectrum, from fully synthetic narratives at points 1-5 to provenance-verified captures at points 18-20. Enterprises will need these rails for compliance and risk management. Investors should expect protocol and middleware businesses that serve capture devices, content tools, and AI models to emerge here, with network effects and standard-like dynamics.
Second, machine-readable licensing for AI training and generation will move from theory to necessity. Creators will increasingly insist on explicit choices between “no training,” “train but no derivatives,” “train and share revenue,” and other modes. AI developers and enterprises will look for clean, compliant datasets whose licensing status is unambiguous. Systems that encode licenses on-chain, track derivations across the spectrum, and route value accordingly will become the default for high-stakes, high-value models.
Third, identity and wallets for AI agents expose a new layer of infrastructure. As agents operate across the stack, writing code, managing content pipelines, transacting on behalf of users, their identities, reputations, and economic incentives will matter. Blockchain-validated identities and transaction histories will allow markets to discriminate between trustworthy and untrustworthy agents. This is directly tied to the spectrum: if an agent consistently signs outputs that are later validated at points 16-20, its content and transactions will command a premium over those from agents whose outputs are frequently disputed at the counterfeit end.
Fourth, data collectives and cooperatives will crystallize around reality-captured, provenance-rich datasets. Contributors of high-integrity data, sensor networks, specialized professionals, communities with unique access, will use Web3 structures to pool data, govern access, and share revenue from models that depend on that data. The better the provenance and licensing (the closer to points 18-20), the more bargaining power these collectives will have.
We are seeing these adoption zones materialize across our portfolio, demonstrating how the spectrum of trust moves from theory to tangible value creation. In each case, what stood out early was how clearly these teams understood where trust breaks down and where it can be rebuilt as a product, a network, or a market.
1. Authenticity vs. Counterfeiting
This layer ensures content faithfully represents its underlying reality, securing inputs and actions against fabrication or impersonation.
2. Licensing and Consent
This layer moves rights and consent from static documents into machine-readable, on-chain permissions that are enforceable by AI agents.
3. Verification and Provenance
This layer addresses how we prove origin and edit history with reliable, cryptographic evidence rather than relying on centralized trust.
4. Derivation: Inspired vs. Remixed
This layer tracks and prices the nuance between works broadly inspired by influences and those explicitly remixed from identifiable originals.
The next internet will be AI-native. The question is whether it will also be trust-native. That outcome will depend less on the models themselves and more on the choices made by builders, platforms, and capital allocators designing the rails beneath them. The spectrum of trust, running from AI-composed to reality-captured and layered with authenticity, licensing, verification, and derivation, should act as a guiding force for how we design systems, set policy, and allocate capital in this environment.
For builders, the spectrum clarifies where to anchor products: provenance protocols for media and materials, evolving licensing systems that can be understood and enforced by AI agents, blockchain-validated identities for humans and agents, transaction rails that bind content and commitments, and data collectives that monetize reality-captured signals. For platforms and regulators, it offers a precise language to describe obligations and enforcement across the full range of content we will see, including the gray areas of “dupes,” remixes, and “inspired by” culture. For investors, it highlights where durable moats can form as AI melts the boundary between synthetic and real, and as blockchain hardens the rails for provenance, identity, and value distribution.
We are not going back to a pre-AI world where visual evidence is self-authenticating and text can be assumed to have a human author. The systems that will matter most from here are those that make visible where on the spectrum a piece of content or a transaction sits, who stands behind it, human or agent, and how value should flow as it is inspired, remixed, and recomposed across the network. That is where Web3 and AI move beyond buzzwords and compound together into the infrastructure of digital trust and long-term value creation.

