December 2 — New research from Anthropic shows that AI agents now have significant on-chain attack capabilities. In simulations replaying smart-contract exploits that actually occurred between 2020 and 2025, Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 collectively reproduced exploits worth approximately $4.6 million. When scanning 2,849 contracts with no known vulnerabilities, the two Claude models also uncovered two previously unknown zero-day vulnerabilities and simulated profitable exploits against them. The research notes that the value AI models can extract from on-chain attacks has roughly doubled every 1.3 months over the past year, and that the models are now capable of autonomous, profitable vulnerability exploitation.
This news is powered by Lookonchain.