Anthropic just released a 244-page system card for an AI model it has no intention of selling to the public. That alone should tell you how different this moment is from the usual AI news cycle.
The model is Claude Mythos Preview. And the reason it won't see a general release isn't that it failed to meet benchmarks — it's that it passed them too well. According to Anthropic, Mythos can autonomously discover zero-day vulnerabilities, write working exploits, and chain together multiple security flaws into sophisticated attack sequences. It found bugs in every major operating system and every major browser. In some cases, those bugs had survived decades of human security review.
Rather than sitting on this capability — or pretending it doesn't exist — Anthropic is doing something unusual: sharing the model with a consortium of tech giants and security firms under a new initiative called Project Glasswing. The name comes from a butterfly whose wings are nearly transparent, a metaphor for software vulnerabilities that are hidden in plain sight.
"The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities." — Dario Amodei, CEO, Anthropic
What Project Glasswing Actually Does
Here is the situation in plain terms: AI models have reached a capability threshold where they can outperform all but the world's most skilled human security researchers at finding and exploiting software flaws. That genie is not going back into the bottle. The question is whether the defenders or the attackers get there first.
Project Glasswing is Anthropic's answer. The 12 launch partners — including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks — will use Mythos Preview for what Anthropic calls "defensive security work." They will scan their own first-party code and open-source systems, share what they learn with the broader industry, and help Anthropic refine safeguards before any wider deployment.
Anthropic is backing the effort with up to $100 million in usage credits and an additional $4 million in direct donations to open-source security organizations. Partners pay once their credit allocation is exceeded — which signals that Anthropic expects this to be intensive work, not a PR exercise.
The Benchmarks That Changed Everything
For months, Mythos existed only as a rumor inside Anthropic. Then Fortune discovered leaked internal documents describing it as "by far the most powerful AI model" Anthropic had ever built — one whose cyber capabilities were "currently far ahead of any other AI model." Cybersecurity stocks fell on the news.
When Anthropic finally confirmed the model's existence and published benchmark results, the numbers validated the concern. On SWE-bench — a coding benchmark — Mythos scores 93.9%, compared to 80.8% for Opus 4.6. On the USAMO math competition benchmark, it hits 97.6%. On CyberGym, which specifically tests AI agents on real-world vulnerability detection and reproduction, Mythos outpaced every other model Anthropic has produced.
What makes these numbers consequential isn't the raw percentages — it's what they enable. Mythos doesn't just find single vulnerabilities. According to Anthropic's Frontier Red Team, the model can identify chains of vulnerabilities — where two or three flaws, none individually critical, can be combined into a full system compromise. As Nicholas Carlini, a researcher who worked with the model, put it: "I've found more bugs in the last couple of weeks than I found in the rest of my life combined."
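The chaining idea is easy to illustrate with a toy model. The sketch below is purely hypothetical (the privilege states and flaw names are invented for illustration, not drawn from Anthropic's findings): treat each minor flaw as a transition between privilege levels, then search for a path from an unprivileged start to full compromise. No single flaw reaches "root", but the chain does.

```python
from collections import deque

# Hypothetical, illustrative flaws: each one grants a transition between
# privilege states. None alone reaches "root", but chained together they do.
FLAWS = [
    ("unauthenticated", "local-user",   "info leak in login banner"),
    ("local-user",      "service-acct", "world-writable config file"),
    ("service-acct",    "root",         "race condition in setuid helper"),
]

def find_chain(start, goal):
    """Breadth-first search over privilege states; returns the
    sequence of flaws forming a chain from start to goal, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for src, dst, name in FLAWS:
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [name]))
    return None

chain = find_chain("unauthenticated", "root")
print(chain)  # three individually minor flaws combine into full compromise
```

The point of the toy: finding such paths in a three-node graph is trivial, but doing it across millions of lines of real code, where the "transitions" must first be discovered, is the capability Anthropic is describing.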
The Business Risk You Actually Need to Worry About
If you run a business that relies on software — which is to say, every business — here is the practical reality of what Anthropic announced this week. The vulnerabilities Mythos is finding are not theoretical. A 27-year-old crash bug in OpenBSD. A Linux kernel chain that allows complete machine takeover. Flaws in every major web browser your employees use today.
These bugs exist whether or not Mythos found them. The difference is that an AI can now scan millions of lines of code for these patterns in days rather than years. This time the defenders got there first: Anthropic found the flaws, and its partners now know about them and are patching them. But the same AI capabilities that let Mythos find bugs defensively will eventually be available — or are already available in less controlled forms — to attackers.
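To see what "scanning code for patterns" means at its most basic, here is a deliberately naive sketch (the risky-call list and the sample snippet are illustrative assumptions, not from the system card). A frontier model reasons semantically about code; this grep-style scanner only shows the mechanical baseline it vastly exceeds.

```python
import re

# A few classically risky C library calls (illustrative, not exhaustive).
RISKY_PATTERNS = {
    "strcpy":  re.compile(r"\bstrcpy\s*\("),
    "gets":    re.compile(r"\bgets\s*\("),
    "sprintf": re.compile(r"\bsprintf\s*\("),
}

def scan_source(text):
    """Return (line_number, pattern_name) pairs for risky calls in `text`."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = """\
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* no bounds check */
}
"""
print(scan_source(sample))  # [(3, 'strcpy')]
```

Tools like this have existed for decades and still missed the decades-old bugs Mythos found; the structural change is an analyzer that understands what the code does, not just what it matches.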
The Larger Picture
It would be easy to frame Project Glasswing as Anthropic doing the responsible thing after an embarrassing leak. And there is some truth to that reading. The model's existence was discovered because Anthropic accidentally left internal documents in a publicly accessible database — a remarkable irony for a company announcing an AI that finds other people's security lapses.
But the deeper story is about the inherent dual-use nature of advanced AI capabilities. Anthropic didn't build Mythos to be a cybersecurity tool. It's a general-purpose frontier model with strong coding and reasoning abilities. Those same abilities — when applied to software — turn out to be extraordinarily good at finding bugs. That is not a design choice. It is a consequence of building capable AI.
The same pattern will repeat across other domains. An AI trained to write code will also be able to find flaws in code. An AI trained to understand biological systems will also be able to reason about pathogens. The question every AI lab — and every government — is grappling with is how to give defensive institutions access to these capabilities before offensive actors do.
Project Glasswing is one answer. Whether it's the right one will depend on whether the 40-plus organizations now holding Mythos Preview can actually patch vulnerabilities faster than adversaries — state-sponsored or otherwise — develop equivalent capabilities. Anthropic has been briefing CISA and federal agencies. The clock is ticking on both sides.
"The age of AI-driven cybersecurity is not approaching. It arrived on April 7, 2026, with a 244-page system card and a list of 12 organizations racing to patch the bugs that an AI found in their software." — NxCode Analysis, April 8, 2026
For business leaders, the immediate implication is straightforward: the security landscape just changed structurally, not incrementally. The tools your adversaries can access — and the tools your defenders will be able to access — are operating at a qualitatively different level than 18 months ago. That requires a corresponding shift in how you think about risk, investment, and board-level oversight of your technology infrastructure.
The internet has always been built on a foundation of vulnerabilities hidden in plain sight. Project Glasswing is the first serious attempt to use AI to find and fix them at scale, before the next generation of AI is used to exploit them. Whether it succeeds is the most consequential technology question of 2026.