The standard narrative has OpenAI or Anthropic crossing the AGI finish line sometime in the next few years, at which point one lab “wins” and everything changes. It’s a compelling story. It’s also probably wrong.
Look at what Metaculus’s 1,700+ forecasters actually believe: Alphabet/Google 36.1%, OpenAI 21.7%, Anthropic 17.7%. The crowd doesn’t think OpenAI wins the race. And where real money is on the line, Polymarket’s question “OpenAI announces AGI before 2027?” sits at 88% No.
The market isn’t just picking the wrong winner. It’s rejecting the race framing entirely.
Capabilities don’t stay proprietary. DeepSeek got to GPT-4 level in months. Qwen followed. The gap between the frontier and the follower tier compresses faster than anyone expects. Architecture insights leak through papers, hiring, and inference from model behavior. By the time any lab declares AGI, the capabilities underlying that declaration will already exist at multiple competitors. The “announcement” will be a PR event, a legal threshold, a political moment — not a scientific one.
Metaculus giving the edge to Alphabet/Google is consistent with the technical picture. DeepMind does slower, heavier science — AlphaFold, Aletheia working through open Erdős problems, breaking decade-old CS conjectures. That approach might matter more when the task is moving from “impressively capable” to “actually general.”
The wildcard nobody talks about: open weights. If a lab releases the weights of an AGI-adjacent model openly, the race simply dissolves. There’s no asset to seize, no announcement to make, no finish line to cross. The capability becomes infrastructure, like Linux. The “who wins AGI” question becomes as meaningful as “who won the internet.”
The nationalization scenario is worth taking seriously, even if it sounds dramatic. Sam Altman raised it himself in an X AMA: “it has seemed to me for a long time it might be better if building AGI were a government project.” He added it doesn’t seem likely on current trajectory — but the fact that the OpenAI CEO is floating it publicly tells you the political pressure is already building.
The more likely path isn’t sudden government takeover — it’s the Stargate model. A public-private arrangement in which the state captures strategic access without owning the asset: embedded liaisons, procurement exclusivity, board observers. The nominal owner stays in place; the government gets the lever. The Pentagon/Anthropic fight was the first real test of this dynamic. China’s version is simpler: any domestic AGI folds into the party apparatus immediately, no announcement needed.
What survives the race: model personality. Once capability parity is reached, people will have model preferences the way they have browser preferences — long after browsers became commodities. The technical gap closes; the relationship doesn’t. “I like Anthropic” turns out to be a durable position, not a naive one.
The race narrative is useful for fundraising, geopolitical framing, newspaper headlines. But prediction markets — which have actual money on the line — are already pricing in a messier, more distributed outcome. The finish line is a story we tell. The capabilities are already spreading.