OpenAI, Google and Anthropic join forces to combat AI model copying, as distillation concerns spark both commercial and security fears.
Background: AI chatbots are officially everywhere. Between ChatGPT, Claude, Gemini, and Copilot, we're talking over 1 billion monthly active users. Back in 2023, OpenAI, Anthropic, Google and Microsoft came together to form the Frontier Model Forum. But instead of holding hands and singing kumbaya, they mostly just competed with each other.
What happened: Now, OpenAI, Google and Anthropic are teaming up to crack down on model copying by competitors, particularly in China. This comes after OpenAI accused Chinese firm DeepSeek of "free-riding" on its technology... and it wants the practice shut down ASAP.
What else: But this isn't just about lost revenue... it's also being framed as a potential national security issue in the US. So now, even fierce rivals are collaborating, sharing data and coordinating efforts to stop model copying and distillation.
What's the key learning:
💡 AI distillation is a technique where a smaller or newer model (called the "student") learns from a larger, more advanced model (called the "teacher"). Instead of spending billions building from scratch, developers can replicate capabilities by learning from existing models.
💡 It's often used internally to create faster, cheaper versions of models... but it can become controversial when third parties scrape outputs without permission.
💡 By repeatedly querying a model and training on its outputs, rivals can build similar systems at a fraction of the original cost, wiping out billions in development investment and pushing companies (and governments) to respond together. So clearly, big competitors will collaborate when the stakes are high.
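To make the teacher/student idea concrete, here's a minimal toy sketch of distillation. The "teacher" stands in for a large model that can only be queried for outputs (in practice, an LLM behind an API); the "student" is a tiny logistic model trained to imitate the teacher's soft outputs. All names and numbers here are illustrative assumptions, not anyone's actual system.

```python
import math
import random

# Hypothetical "teacher": a fixed model we can only query for outputs.
# In real distillation disputes, this would be a large proprietary model.
def teacher(x):
    # Soft probability that input x belongs to the positive class.
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.0)))

# Step 1: query the teacher to collect (input, soft label) pairs.
random.seed(0)
inputs = [random.uniform(-3, 3) for _ in range(200)]
soft_labels = [teacher(x) for x in inputs]

# Step 2: train a small "student" (a two-parameter logistic model) to
# match the teacher's soft outputs via gradient descent on cross-entropy.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, t in zip(inputs, soft_labels):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        grad_w += (p - t) * x
        grad_b += (p - t)
    w -= lr * grad_w / len(inputs)
    b -= lr * grad_b / len(inputs)

# The student now approximates the teacher's behavior without ever
# seeing its internals -- only its outputs.
print(w, b)  # parameters move toward the teacher's (roughly 2 and -1)
```

This is why distillation is cheap relative to training from scratch: the student never needs the teacher's training data or weights, only enough queries to cover the input space.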
Sign up for Flux and join 100,000 members of the Flux family