The Trump administration has announced new measures to counter what it described as “industrial-scale campaigns” by China-based groups to copy artificial intelligence developed by U.S. companies.
In a statement on Thursday, Michael J. Kratsios, assistant to the president for the White House Office of Science and Technology Policy, said the government has information suggesting that foreign entities—primarily in China—are deliberately targeting leading U.S. AI firms to distill their models.
“Models developed from surreptitious, unauthorized distillation campaigns like this do not replicate the full performance of the original,” Kratsios said. “They do, however, enable foreign actors to release products that appear to perform comparably on select benchmarks at a fraction of the cost.”
The statement follows accusations made two months earlier by Anthropic, developer of the Claude models, which alleged that three Chinese AI firms engaged in distillation attacks on its systems.
The White House also warned that models created through such practices may be cheaper but lack key safeguards, making them less aligned with the goals of neutrality and truth-seeking.
Data from Morph LLM highlights the pricing gap among leading models: Claude Opus 4.6 costs about $5 per million tokens, while ChatGPT-5.4 Pro can reach $30 per million. By comparison, China's DeepSeek V3.2, considered a mid-tier model, is priced at just $0.26 per million.
Tech investor Jason Calacanis said in a recent interview on the “All-In” podcast that he spends roughly $300 per day using Anthropic’s AI agents to help operate his businesses.

The White House Office of Science and Technology Policy said the Trump administration plans to work closely with the private AI sector to counter threats from foreign firms, including sharing intelligence on large-scale attacks, improving coordination, strengthening defenses, and exploring ways to “hold foreign actors accountable.”
The OSTP added that foreign companies have relied on “tens of thousands of proxy accounts” to evade detection while using jailbreaking techniques to extract proprietary information.
“These coordinated campaigns systematically extract capabilities from American AI models, exploiting American expertise and innovation,” the office said.
In a separate case, Anthropic alleged in late February that DeepSeek, Moonshot AI, and MiniMax generated more than 16 million interactions with its systems through roughly 24,000 “fraudulent accounts.”
According to Anthropic, the firms were attempting to extract capabilities from its Claude models, including agentic reasoning, coding, data analysis, rubric-based grading, and computer vision.

