OpenAI, Anthropic, and Google join forces through Frontier Model Forum to combat AI model theft by Chinese rivals
The three competing AI labs are sharing intelligence to detect adversarial distillation, a practice in which foreign actors extract proprietary capabilities from frontier models without authorization.
OpenAI, Anthropic, and Google have begun collaborating to detect and combat a practice known as adversarial distillation, in which Chinese AI companies systematically query frontier U.S. models to extract their knowledge and improve rival systems, Bloomberg reported on April 6. The three companies, which compete intensely in the AI model market, are sharing information about suspected distillation attempts through the Frontier Model Forum, an industry nonprofit they co-founded alongside Microsoft in 2023.
Adversarial distillation violates the terms of service of all major AI platforms, but enforcement has historically been difficult because the queries involved can closely resemble ordinary use. The coordinated intelligence-sharing effort represents an unusual degree of cooperation among otherwise fierce rivals, reflecting shared concern about the erosion of their technological lead.
The move comes amid broader tensions over the role of Chinese AI companies, including DeepSeek, which caused market upheaval in early 2025 by releasing a model purportedly trained at a fraction of the cost of its American counterparts. U.S. officials have accused DeepSeek of using prohibited Nvidia chips to build its systems, and Anthropic has separately alleged that DeepSeek improperly leveraged outputs from Claude to improve its own models.
The Frontier Model Forum collaboration adds a defensive dimension to an already escalating U.S.-China AI competition. Policymakers in Washington have been pushing for stricter export controls on advanced AI chips to China while simultaneously calling for increased domestic investment in compute infrastructure.
The coordinated industry response may also serve as a signal to regulators that leading AI companies are taking the distillation threat seriously without waiting for government intervention.
Read the original reporting at Bloomberg.