The Careful One Wins
For years, the story was simple: OpenAI builds the frontier, Anthropic builds the guardrails. OpenAI makes the exciting thing. Anthropic makes sure the exciting thing doesn’t kill you. Everyone understood the roles. They were clean, legible, and wrong.
Here’s what actually happened. Anthropic’s annualized revenue went from $9 billion at the end of 2025 to $30 billion by the end of March 2026. [1] According to corporate expense data from Ramp, they captured 37% of business AI spending in Q1, versus OpenAI’s 33% [2] — the first time OpenAI has been overtaken on that measure. Jai Das of Sapphire Ventures called OpenAI “the Netscape of AI,” [3] which is the kind of thing that sounds like hyperbole until you look at the numbers.
Meanwhile, Anthropic launched Claude Design, a full creative suite that cratered Figma and Adobe stock on announcement. [4] And they’re sitting on Claude Mythos, an offensive cybersecurity model they’re sharing only with a narrow circle of partners — and, reportedly, negotiating over with the White House. [5]
The company that built its brand on caution now has a capability posture far more aggressive than its old safety-first branding would suggest.
The Inversion Nobody Predicted
There’s a version of this story that’s just “Anthropic got good.” But that misses the structural thing that happened. The assumption was always that safety and capability were in tension — that being careful meant being slower, that guardrails meant holding back. Anthropic’s entire brand was built on being the one willing to sacrifice capability for responsibility.
What the data shows is the opposite. Safety and capability turned out to be correlated, not opposed.
Claude Code is increasingly the coding assistant enterprises trust most. Not despite Anthropic’s safety focus, but because of it. Reliability and controllability aren’t just ethical positions; they’re features that enterprise buyers actually pay for. When a CTO is deciding which AI to put inside their codebase, “it works consistently and doesn’t go off the rails” isn’t a nice-to-have. It’s the entire decision.
OpenAI optimized for spectacle. GPT-5 demos. Stargate announcements. Massive fundraises with breathless coverage. Anthropic optimized for the boring thing: making software that works.
The market is now telling us which one matters more.
The Mythos Problem
But there’s a tension in this story, and it’s worth being honest about it.
Claude Mythos is an offensive cybersecurity model. It can find vulnerabilities, plan intrusions, do the kind of work that makes security teams nervous. Anthropic built it, and they’re not giving it to everyone. They’re sharing it selectively — with cybersecurity partners, and in conversations that reportedly involve the White House and the Pentagon.
This is the most honest version of the AI safety conversation we’ve had. Anthropic isn’t pretending the capability doesn’t exist. They’re not saying “we could build this but we won’t.” They built it. They know it’s dangerous. And they’re using the question of who gets access as leverage.
Can you be “the responsible AI company” while building the most dangerous model and negotiating its distribution as geopolitical power? Maybe. But it’s worth naming what that actually is: not caution, but controlled aggression. It’s the difference between not having a weapon and having one and choosing when to show it.
This is what responsible power looks like — or at least what it looks like when responsibility and capability converge in the same hands. Whether that’s comforting or alarming probably depends on how much you trust the hands.
Why the Narrative Lagged
The most interesting part of this isn’t that Anthropic overtook OpenAI. It’s how long the old story persisted after the underlying reality shifted.
“OpenAI is the leader” was treated as a fact long after the data stopped supporting it. On the secondary market, OpenAI shares have fallen out of favor, in some cases becoming hard to unload, while Anthropic draws near-insatiable demand. [6] Anthropic’s revenue trajectory is steeper. Their enterprise penetration is deeper. Their capability stack is broader. And yet, until very recently, the default assumption was still that OpenAI was the frontier and Anthropic was the safety company.
This is how narratives work. They lag data. They persist because they’re legible, because they’re simple, because the people repeating them haven’t looked at the actual numbers. “OpenAI leads, Anthropic follows” was a clean story. The real story is messier, more interesting, and harder to fit in a headline.
Which is, of course, the point.
What This Actually Means
The Anthropic inversion isn’t just a market story. It’s a proof of concept.
It shows that the thing everyone said couldn’t work — building capability on a foundation of responsibility rather than despite it — not only works but wins. Not in some abstract ethical sense. In revenue. In market share. In enterprise adoption. In the only language the industry actually listens to.
It also shows that the line between “responsible” and “powerful” was always artificial. Anthropic didn’t stop being careful. They got more capable. The care didn’t constrain the growth — it structured it. And structure turns out to be what the market was missing.
The careful one didn’t win by being careful. The careful one won because careful was the strategy all along, and everyone else was too busy building demos to notice.
Sources
[1] Bloomberg / Yahoo Finance; also TechCrunch
[2] Sherwood News, citing Ramp corporate expense data
[3] Financial Times, reported via TechCrunch and The Next Web
[4] Barron’s; Figma dropped 4.26% on the day of launch
[5] Claude Mythos system card published by Anthropic; White House meeting covered by CNBC, Reuters, and BBC