MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
Trading Strategies

Opinion: The Ontology Problem Wall Street Won’t Discuss

Last updated: February 11, 2026 9:00 pm
Published: 11 hours ago

I study how financial institutions govern artificial intelligence. I’ve spent years examining the frameworks, analysing the data, reviewing the models. And I’ve arrived at a conclusion that I cannot shake: We are approaching a moment when financial crime will become definitionally incoherent.

Not harder to detect. Not more sophisticated to prosecute. Impossible to define in the first place.

This isn’t hyperbole. It’s the logical terminus of forces already in motion. And almost no one in a position of authority is willing to say it plainly.

Every system of financial crime enforcement rests on a single assumption: that legitimate activity and illegitimate activity are different things.

Different in ways we can specify. Different in ways we can detect. Different in ways that, ultimately, a judge or jury can evaluate. The entire apparatus of compliance, investigation, and prosecution exists because we believe that fraud looks different from non-fraud, that money laundering looks different from ordinary movement of funds, that manipulation looks different from trading.

This assumption is so foundational that we rarely examine it. We debate how to catch criminals, not whether the category of criminal will remain stable. We argue about detection methods, not about whether detection is philosophically possible.

Fraud detection works by establishing what normal looks like, then flagging what doesn’t fit.

Normal transaction volumes. Normal timing patterns. Normal geographic distributions. Normal relationships between accounts. You build a statistical portrait of legitimate behavior, and you watch for deviations. The deviation is your signal. The signal is your case.

This approach has worked for decades because human behavior has structure. People wake at certain hours, spend in certain patterns, move money for certain reasons. Even sophisticated criminals, trying to disguise their activity, leave traces. They’re human. They have habits. They make mistakes. The baseline catches them.
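The baseline-and-deviation approach described above can be sketched in a few lines. This is a toy illustration, not any institution's actual detection model: the single "daily spend" feature, the sample values, and the 3-standard-deviation threshold are all hypothetical choices for the sake of the example.

```python
import statistics

# Toy baseline anomaly detector. The training history stands in for
# "normal" behavior; real systems model many features, not one, and the
# 3-sigma threshold here is illustrative, not an industry standard.

def fit_baseline(history):
    """Summarise 'normal' as a mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(amount, mean, stdev, z_threshold=3.0):
    """Flag transactions that deviate too far from the baseline."""
    return abs(amount - mean) / stdev > z_threshold

# Hypothetical history of legitimate daily spend.
normal_spend = [42.0, 55.5, 48.2, 61.0, 50.3, 47.8, 53.1, 49.9]
mean, stdev = fit_baseline(normal_spend)

print(is_anomalous(51.0, mean, stdev))    # near the baseline
print(is_anomalous(5000.0, mean, stdev))  # far outside it
```

The deviation is the signal: anything the baseline cannot explain gets flagged for review.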

Generative AI doesn’t have habits. It doesn’t make mistakes. It produces outputs optimised against whatever objective function it’s given. And increasingly, that objective function is: look normal.

The most advanced fraud detection models are neural networks trained on massive datasets of legitimate activity. They learn what normal looks like, in extraordinary detail, and they flag what doesn’t match.

Now consider the adversary. A generative AI trained on the same data, or data like it, learning the same patterns, producing synthetic transactions that are statistically indistinguishable from the real thing. Same distributions. Same temporal signatures. Same relational structures.

The fraud doesn’t deviate from the baseline. It is the baseline, regenerated.

How do you detect a fake that is, mathematically, more authentic than the original?
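The dilemma can be made concrete with a toy sketch, again with entirely hypothetical numbers: if a generator samples from the very distribution the detector learned as "normal," the detector's own test of normality passes the synthetic output at essentially the same rate as real activity.

```python
import random
import statistics

# Toy illustration: a "generator" that draws from the distribution the
# detector learned. All parameters are hypothetical.

random.seed(7)

# The detector fits its baseline on legitimate activity.
normal_spend = [random.gauss(50.0, 5.0) for _ in range(10_000)]
mean, stdev = statistics.mean(normal_spend), statistics.stdev(normal_spend)

def passes_baseline(amount, z_threshold=3.0):
    return abs(amount - mean) / stdev <= z_threshold

# The adversary samples synthetic transactions from the learned
# distribution itself: same centre, same spread.
synthetic = [random.gauss(mean, stdev) for _ in range(1_000)]
pass_rate = sum(passes_baseline(x) for x in synthetic) / len(synthetic)
print(f"{pass_rate:.0%} of synthetic transactions pass the detector")
```

Nothing in the detector's statistics distinguishes the two populations, because the synthetic one was built to reproduce them.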

Consider identity itself. Synthetic identity fraud is now the fastest-growing financial crime in the United States. These aren’t stolen identities. They’re constructed ones. A real Social Security number, often belonging to a child or deceased person, combined with fabricated personal details. A name that was never given to anyone. An address history that maps to real locations but no real resident. An employment record that checks out because it was built to check out.

The numbers are staggering. TransUnion reports that synthetic fraud attempts grew 184% between 2019 and 2023. In just six months, from late 2023 to early 2024, incidents surged another 153%. By late 2024, U.S. lenders faced $3.3 billion in exposure to synthetic identities tied to new account openings alone. The Deloitte Center for Financial Services projects losses could reach $23 billion annually by 2030.

But here’s what most analyses miss: synthetic identities often perform better than real ones.

A real person has inconsistencies. Gaps in credit history from a period of unemployment. An address that doesn’t match because they forgot to update it. A name spelled differently across different documents. Real lives are messy. Real data reflects that mess.

A synthetic identity has no mess. It’s engineered for coherence. Every data point aligns. Every history is complete. By the metrics financial institutions use to assess legitimacy, the fake is more legitimate than the real.

Sumsub’s research found synthetic identity document fraud surged over 300% year-over-year in Q1 2025, with North America experiencing a 311% spike. We’re not dealing with occasional counterfeits slipping through. We’re witnessing the industrialisation of fabricated personhood.

At what penetration rate does synthetic identity stop being a fraud problem and start being an ontological one? When 5% of identities in the system are synthetic? 10%? 20%?

At some threshold, we stop having a financial system with fraud in it. We have a financial system where the distinction between real and synthetic has lost operational meaning.

We’re closer to that threshold than anyone wants to admit.

Fraud requires intent. This is black-letter law, foundational to every prosecution. You must intend to deceive. You must know that what you’re doing is wrong. The mental state matters as much as the act.

Now ask yourself: what is the intent of an AI system?

The question sounds philosophical. It’s actually quite practical. Because AI systems are increasingly generating the transactions, the identities, and the behavioral patterns that flow through financial infrastructure. Not as tools wielded by humans with criminal intent. As autonomous actors pursuing optimisation targets.

If a human programs an AI to generate fraudulent transactions, the human has intent. Simple enough. But AI systems today operate with substantial autonomy. They adapt. They iterate. They produce outputs that their designers did not anticipate and could not have predicted.

The gap between what was designed and what emerged is not a bug. It’s how these systems work.

So when an AI system, operating autonomously, generates activity that meets the statutory definition of fraud, but no human directed it to do so, where is the crime? You have the act. You have the harm. You don’t have the mind. Our legal system has no vocabulary for this. We prosecute people. We sometimes prosecute corporations, as legal fictions representing collective human action. We have no mechanism for prosecuting emergent behavior that arose from optimisation pressure rather than human decision.

Even if we could define AI-generated crime, we couldn’t keep up with it.

Financial crimes investigators operate on human timescales. They receive alerts. They review transactions. They build cases. A sophisticated investigation might take months. A prosecution might take years.

Generative AI operates on computational timescales. It can produce millions of transaction variations in the time it takes an analyst to review a single alert. It can generate and test thousands of evasion strategies while an investigator writes one report.

The Deloitte Center for Financial Services predicts generative AI could enable fraud losses to reach $40 billion in the United States by 2027, up from $12.3 billion in 2023, a compound annual growth rate of 32%. Financial services firms spent $35 billion on AI in 2023, with investments projected to reach $97 billion by 2027, much of it directed at a threat that evolves faster than defenses can adapt.

This isn’t just a resource problem, a matter of hiring more analysts or building faster systems. It’s a categorical problem. The adversary can mutate faster than the categories used to define it can be updated.

I call this velocity-induced definitional collapse. By the time you’ve characterised a new fraud pattern, defined it in policy, built detection rules, and deployed them, the pattern has evolved into something your definition doesn’t cover. You’re not chasing a criminal. You’re chasing a distribution that never stops moving.

Consider what compliance teams actually face. They identify a suspicious pattern. It takes weeks to document it, obtain legal sign-off, and update monitoring rules. By the time detection is deployed, the underlying AI has continued training, drifting into new statistical territory the old rules don’t cover. They’re writing rules for something that no longer exists.
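The lag can be caricatured in a toy simulation, with every number hypothetical: a static rule is written for the fraud pattern observed at deployment, while the adversary's output distribution drifts a little each period before the rule is updated.

```python
import random

# Toy simulation of velocity-induced definitional collapse: a fixed
# detection rule versus a drifting fraud distribution. All parameters
# are illustrative, not drawn from any real system.

random.seed(1)

RULE_CENTER = 100.0  # the pattern the rule was written to catch
TOLERANCE = 10.0     # the rule flags values within +/- TOLERANCE

def rule_catches(value):
    return abs(value - RULE_CENTER) <= TOLERANCE

rates = []
fraud_center = 100.0
for period in range(6):
    fraud = [random.gauss(fraud_center, 3.0) for _ in range(1_000)]
    catch_rate = sum(rule_catches(x) for x in fraud) / len(fraud)
    rates.append(catch_rate)
    print(f"period {period}: catch rate {catch_rate:.0%}")
    fraud_center += 5.0  # the pattern drifts before the rule is updated
```

The rule is perfectly correct about a pattern that no longer exists; its catch rate decays toward zero without the rule ever being "wrong" about what it was written for.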

That’s not an enforcement gap. That’s definitional obsolescence as a permanent condition.

Financial regulation has always had gray zones. Activity that might be illegal, depending on interpretation. Tax optimisation that might be evasion. Trading strategies that might be manipulation. Complex structures that might be fraud.

These gray zones exist because financial activity is complex and legal categories are blunt. Reasonable people disagree about where lines should be drawn. The disagreement is a feature, not a bug. It’s how a system accommodates innovation while maintaining boundaries.

Generative AI doesn’t just exploit gray zones. It manufactures them at industrial scale.

When AI produces novel financial activity, activity that doesn’t resemble anything regulators have seen before, the question “is this illegal?” often has no answer. Not because the law is unclear, but because the activity doesn’t map onto existing legal categories. It’s not fraud in a way we’ve defined fraud. It’s not money laundering in a way we’ve defined money laundering. It’s something new, generated by optimisation processes pursuing objectives that may have nothing to do with evading the law.

The activity might be harmful. It might undermine market integrity. It might enable exploitation. But calling it crime requires a definition of crime, and definitions require categories, and categories require that the thing being categorised holds still long enough to be characterised.

What happens to a financial system that cannot define crime?

I see three scenarios, none of them good.

The first is enforcement paralysis. Regulators, overwhelmed by the volume and velocity of ambiguous activity, retreat to obvious cases. They prosecute what they can clearly define and ignore the growing ocean of activity they cannot characterise. Financial crime doesn’t disappear. It becomes ambient, a permanent background condition that everyone knows exists but no one can quantify or address.

The second is definitional overreach. Unable to specify what’s illegal, authorities define it broadly. Any AI-generated activity becomes suspect. Any synthetic pattern becomes presumptively criminal. The system sacrifices precision for coverage, criminalising vast swaths of legitimate innovation to catch an undefined set of bad actors. The cure is worse than the disease.

The third, and most likely, is incoherence. A patchwork of inconsistent standards, arbitrary enforcement, and periodic crises. Some AI-generated activity is prosecuted, some isn’t, with no principled basis for the distinction. Legitimacy drains from the system. Market participants lose faith that the rules mean anything. The infrastructure of trust erodes.

None of these scenarios is hypothetical. Elements of all three are already visible in how regulators are responding to AI-driven financial activity. The question is which tendency dominates.

I don’t have a complete answer. Nobody does. But I know what the answer requires.

It requires abandoning the premise that crime is a stable category waiting to be detected. In an environment of generative AI, crime is a moving target that must be continuously redefined. Static definitions encoded in law and regulation will always lag behind the systems they’re trying to govern.

It requires shifting from intent-based to harm-based frameworks. If we cannot reliably establish mental states, we must focus on outcomes. What damage was done? What markets were distorted? What trust was violated? The mind of the machine is inaccessible. Its effects are not.

It requires building governance systems that operate at machine speed. Not human investigators reviewing AI-generated alerts, but AI systems governing AI systems, with human oversight focused on objectives and constraints rather than individual decisions. This is uncomfortable. It means trusting machines to police machines. But the alternative is pretending that human-speed governance can manage machine-speed activity. It can’t.

Most fundamentally, it requires honesty about what we’re facing. The comfortable assumption that crime is definable and detectable is failing. Pretending otherwise doesn’t preserve the assumption. It just guarantees that we’ll be unprepared when the failure becomes undeniable.

We are standing at the edge of something the financial system has never faced: activity that is potentially harmful, possibly illegitimate, and definitionally uncategorisable.

The transactions are synthetic. The identities are synthetic. The patterns are synthetic. The intent, if it exists at all, is distributed across optimisation processes that no human designed or directed. The speed exceeds human cognition. The novelty exceeds existing categories.

Financial crime will not become impossible to punish. It will become impossible to name.

And a system that cannot name its crimes cannot govern itself.

The question is not whether the current definitions of financial crime will hold. They won’t. The question is what we build to replace them, and whether we start building before the collapse or after.

I’ve spent years studying how institutions try to govern artificial intelligence. The central lesson is this: governance frameworks that assume stability fail when confronted with systems that are inherently unstable. You cannot write rules for a phenomenon that is constantly rewriting itself.

The financial system is about to learn this lesson. The $23 billion in projected synthetic identity losses by 2030 is not the problem. It’s a symptom. The problem is that we’re counting losses in a category that is losing coherence.

The question is whether we adapt our definitions before reality forces us to, or whether we keep pretending the old categories still work until something breaks badly enough that we can’t pretend anymore.

I know which option I’d prefer. I also know which one we’re likely to choose.

Read more on News18

This news is powered by News18

© Market Alert News. All Rights Reserved.