Anthropic withholds its powerful Mythos AI model from the public, giving select giants access to stay ahead in the cybersecurity arms race.
Background: Anthropic is one of the world's biggest AI companies, the one behind the Claude models (think ChatGPT's quieter but more powerful cousin). Over the past few months, Claude has been everywhere, with new launches like Claude Cowork, Claude Skills, and updated coding tools.
What happened: Anthropic has built a new AI model called Mythos, but it's not releasing it to the public... Why? Well, it's very good at hacking. It's already uncovered thousands of software vulnerabilities, including some that went undetected for up to 27 years.
What else: So, instead of launching Mythos like a normal AI product, Anthropic is giving access to a select group of heavy hitters (Amazon, Apple, Microsoft, Google, and JPMorgan) so they can stay ahead of bad actors before these tools are misused.
What's the key learning?
💡 AI isn't just helping companies build better products, it's also exposing risks that humans have missed for decades. Mythos identified a bug in software that had already been tested over 5 million times, and no human had ever caught it.
💡 In the past, companies had more time to respond to cybersecurity threats, but AI is rapidly shrinking that window. So, AI companies like Anthropic are giving big players access first so they can stay ahead.
💡 Banks and major systems rely heavily on cloud providers, so the stakes are massive, especially for critical infrastructure.
Sign up for Flux and join 100,000 members of the Flux family