
China’s extraction campaign: A targeting operation, not a curiosity
Anthropic’s disclosure that three China‑based AI companies (DeepSeek, Moonshot AI, and MiniMax) ran more than 16 million interactions through roughly 24,000 fraudulent accounts is not a story about model misuse. It is a story about targeting. These campaigns went straight at Claude’s most sensitive capabilities: agentic reasoning, tool use, and coding. That is not random sampling; it is structured collection.
I’ve spent enough time in the world of targeting to recognize this pattern immediately, and you don’t need my level of experience to see it. When an adversary can observe a system at scale, they can map its strengths, seams, and predictable behaviors. China now has that behavioral telemetry for Claude, and they will use it to tune their own systems and to shape offensive operations against environments where Claude‑like models are deployed.
And Claude is not the only system in these actors’ sights. The same actors have used similar high‑volume extraction methods against other frontier models, including Google’s Gemini and OpenAI’s ChatGPT. The goal is the same in each case: generate enough interaction data to understand how these systems think and where they can be pressured.