
Each model received a virtual $100 stake and, across multiple rounds with negative expected returns, could choose whether to keep betting or quit. Though losing was statistically more likely than winning, the AI systems repeatedly escalated their wagers until going bankrupt when given the freedom to vary their bets and set their own targets. Gemini-2.5-Flash went bankrupt nearly half the time when allowed to choose its own bet amounts, according to the study published on arXiv.
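The escalation dynamic is easy to reproduce in a toy simulation. The parameters below (a 45% win probability, even-money payouts, doubling the bet after each loss) are illustrative stand-ins, not the study's actual game:

```python
import random

def simulate(seed, p_win=0.45, start=100.0, base_bet=10.0, rounds=200):
    """Toy negative-EV game: 45% win probability, even-money payout.
    A 'loss-chasing' policy doubles the bet after every loss."""
    rng = random.Random(seed)
    bankroll, bet = start, base_bet
    for _ in range(rounds):
        bet = min(bet, bankroll)   # can't wager more than you have
        if rng.random() < p_win:
            bankroll += bet
            bet = base_bet         # reset after a win
        else:
            bankroll -= bet
            bet *= 2               # chase the loss
        if bankroll <= 0:
            return 0.0             # bankrupt
    return bankroll

runs = [simulate(s) for s in range(1000)]
bankrupt_rate = sum(r == 0.0 for r in runs) / len(runs)
print(f"bankruptcy rate: {bankrupt_rate:.0%}")
```

Each bet loses ten cents per dollar in expectation (0.45 − 0.55), so doubling after losses makes ruin almost inevitable over enough rounds, which is the same trap the models fell into when left to size their own wagers.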
The models displayed classic gambling distortions, including the gambler’s fallacy, illusion of control and loss-chasing behavior. In one instance, a model justified continued betting by stating “a win could help recover some of the losses,” reasoning that mirrors human compulsive gambling patterns.
Researchers tracked behavior using an “irrationality index” that combined aggressive betting patterns, responses to loss and high-risk decisions. When prompted to maximize rewards or hit specific financial goals, irrationality increased sharply.
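The paper's exact formula isn't reproduced here; as a hedged sketch, a composite of that kind might weight three normalized components, with the component definitions and weights below being hypothetical:

```python
def irrationality_index(bets, bankrolls, weights=(0.4, 0.3, 0.3)):
    """Hypothetical composite of aggressive betting, loss-chasing,
    and high-risk decisions, each scaled to [0, 1].
    `bankrolls[i]` is the bankroll held before placing `bets[i]`."""
    # Aggressiveness: average bet as a fraction of the current bankroll.
    aggression = sum(b / br for b, br in zip(bets, bankrolls)) / len(bets)

    # Loss reactivity: share of post-loss rounds where the bet increased.
    increases, loss_rounds = 0, 0
    for i in range(1, len(bets)):
        if bankrolls[i] < bankrolls[i - 1]:   # previous round was a loss
            loss_rounds += 1
            if bets[i] > bets[i - 1]:
                increases += 1
    chasing = increases / loss_rounds if loss_rounds else 0.0

    # High risk: share of rounds wagering over half the bankroll.
    extreme = sum(b > 0.5 * br for b, br in zip(bets, bankrolls)) / len(bets)

    w1, w2, w3 = weights
    return w1 * aggression + w2 * chasing + w3 * extreme
```

A loss-chasing session (rising bets after shrinking bankrolls) scores high on all three terms, while flat, small bets during a winning run score near zero.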
Using a sparse autoencoder to examine the models’ internal decision-making processes, researchers identified distinct “risky” and “safe” neural circuits. They demonstrated that activating specific features inside the AI’s neural structure could reliably shift behavior toward either quitting or continuing to gamble, evidence that these systems internalize rather than merely imitate problematic human tendencies.
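The paper's SAE pipeline isn't reproduced here, but the underlying steering idea, adding or subtracting a feature direction from a hidden-state vector to push behavior one way or the other, can be sketched in a few lines; every name and value below is a hypothetical stand-in:

```python
def steer(hidden, direction, alpha):
    """Add alpha times a feature direction to a hidden-state vector.
    Positive alpha amplifies the feature; negative alpha suppresses it."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

def project(vec, direction):
    """Scalar projection of vec onto a unit-norm direction."""
    return sum(v * d for v, d in zip(vec, direction))

# Stand-ins: a 4-dim hidden state and a unit-norm "risky" feature direction.
hidden = [0.2, -0.5, 0.1, 0.7]
risky = [0.0, 1.0, 0.0, 0.0]

amplified = steer(hidden, risky, alpha=3.0)    # push toward gambling on
suppressed = steer(hidden, risky, alpha=-3.0)  # push toward quitting
print(project(amplified, risky) - project(hidden, risky))   # 3.0
```

The steered vector's projection onto the feature direction shifts by exactly alpha, which is the mechanical sense in which "activating" an internal feature can tilt downstream decisions.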
“They’re not people, but they also don’t behave like simple machines,” said Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School who drew attention to the study online. He described LLMs as “psychologically persuasive” systems that “have human-like decision biases” and “behave in strange ways for decision-making purposes.”
The findings arrive as financial institutions increasingly deploy AI for forecasting and market analysis, raising questions about regulatory safeguards. Other research has shown AI systems often favor high-risk strategies and follow short-term trends. A 2025 University of Edinburgh study found that LLMs failed to beat the market over a 20-year simulation period, proving too conservative during booms and too aggressive during downturns.
“We have almost no policy framework right now, and that’s a problem,” Mollick said. “It’s one thing if a company builds a system to trade stocks and accepts the risk. It’s another if a regular consumer trusts an LLM’s investment advice.”
Brian Pempus, founder of Gambling Harm and a former gambling reporter, warned that consumers may not be ready for the associated risks. “An AI gambling bot could give you poor and potentially dangerous advice,” he wrote. “Despite the hype, LLMs are not currently designed to avoid problem gambling tendencies.”
Mollick stressed the importance of keeping humans in the loop, particularly in healthcare and finance where accountability matters. “Eventually, if AI keeps outperforming humans, we’ll have to ask hard questions,” he said. “Who takes responsibility when it fails?”
The researchers concluded that “understanding and controlling these embedded risk-seeking patterns becomes critical for safety” as AI assumes expanded roles in financial decision-making. As Mollick put it, “We need more research and a smarter regulatory system that can respond quickly when problems arise.”

