“We are at the beginning of a new epoch,” Eric Schmidt declared at the RAISE Summit in Paris on 9 July 2025. The former Google CEO’s message, grounded in what he calls the San Francisco Consensus, carries unusual weight, not so much because of his past role leading one of tech’s giants as because of his current one: advising heads of state and industry on artificial intelligence.
“When I talk to governments, what I tell them is, one, ChatGPT is great, but that was two years ago. Everything’s changed again. You’re not prepared for it. And two, you better get organized around it—the good and the bad.”
At the Paris summit, he spelled out what that consensus holds: a convergence of belief among Silicon Valley’s leaders that within three to six years AI will fundamentally transform every aspect of human activity.
Whether one views this timeline as realistic or delusional matters less than the fact that the people building AI systems—and investing hundreds of billions in infrastructure—believe it. Their conviction alone makes the Consensus a force shaping our immediate future.
“There is a group of people that I work with. They are all in San Francisco, and they have all basically convinced themselves that in the next two to six years—the average is three years—the entire world will change,” Schmidt explained. (He initially referred to the Consensus as a kind of inside joke.)
He carefully framed this as a consensus rather than fact: “I call it a consensus because it’s true that we agree… but it’s not necessarily true that the consensus is true.”
Schmidt’s own position became clear as he compared the arrival of artificial general intelligence (“AGI”) to the Enlightenment itself. “During the Enlightenment, we as humans learned from going from direct faith in God to using our reasoning skills. So now we have the arrival of a new non-human intelligence, which is likely to have better reasoning skills than humans can have.”
The three pillars of the San Francisco Consensus
The Consensus rests on three converging technological revolutions:
1. The language revolution
Large language models like ChatGPT captured public attention by demonstrating AI’s ability to understand and generate human language. But Schmidt emphasized these are already outdated. The real transformation lies in language becoming a universal interface for AI systems—enabling them to process instructions, maintain context, and coordinate complex tasks through natural language.
2. The agentic revolution
“The agentic revolution can be understood as language in, memory in, language out,” Schmidt explained. These are AI systems that can pursue goals, maintain state across interactions, and take actions in the world.
His deliberately mundane example illustrated the profound implications: “I have a house in California, I want to build another one. I have an agent that finds the lot, I have another agent that works on what the rules are, another agent that works on designing the building, selects the contractor, and at least in America, you have an agent that then sues the contractor when the house doesn’t work.”
The punchline: “I just gave you a workflow example that’s true of every business, every government, and every group human activity.”
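To make the “language in, memory in, language out” pattern concrete, here is a minimal Python sketch of such a workflow. It is illustrative only: the Agent class, the call_llm placeholder, and the chained roles are hypothetical stand-ins, not Schmidt’s system or any particular vendor’s framework.

```python
from dataclasses import dataclass, field

def call_llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a real language-model API call."""
    return f"[{role}] result for: {prompt[-60:]}"

@dataclass
class Agent:
    role: str
    memory: list = field(default_factory=list)  # state carried across turns

    def step(self, instruction: str) -> str:
        # language in: the new instruction plus everything remembered so far
        prompt = "\n".join(self.memory + [instruction])
        reply = call_llm(self.role, prompt)
        # memory in: keep the exchange so later steps retain context
        self.memory += [instruction, reply]
        # language out: plain text the next agent (or a human) can act on
        return reply

# Schmidt's house-building example, expressed as a chain of agents
lot = Agent("lot-finder").step("Find a buildable lot in California")
rules = Agent("zoning-checker").step(f"Which rules apply? Context: {lot}")
design = Agent("architect").step(f"Design the house. Context: {rules}")
print(design)
```

Each agent keeps its own memory and hands its result to the next purely as natural language, which is what makes the same pattern applicable to any business or government workflow.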
3. The reasoning revolution
Most significant is the emergence of AI systems that can engage in complex reasoning through what experts call “inference”—the process of drawing conclusions from data—enhanced by “reinforcement learning,” where systems improve by learning from outcomes.
“Take a look at o3 from ChatGPT,” Schmidt urged. “Watch it go forward and backward, forward and backward in its reasoning, and it will blow your mind away.” These systems use vastly more computational power than traditional searches—”many, many thousands of times more electricity, queries, and so forth”—to work through problems step by step.
The results are striking. Google’s math model, says Schmidt, now performs “at the 90th percentile of math graduate students.” Similar breakthroughs are occurring across disciplines.
What is the timeline of the San Francisco Consensus?
The Consensus timeline seems breathtaking: three years on average, six in Schmidt’s more conservative estimate. But the direction matters more than the precise date.
“Recursive self-improvement” represents the critical threshold—when AI systems begin improving themselves. “The system begins to learn on itself where it goes forward at a rate that is impossible for us to understand.”
After AGI comes superintelligence, which Schmidt defines with precision: “It can prove something that we know to be true, but we cannot understand the proof. We humans, no human can understand it. Even all of us together cannot understand it, but we know it’s true.”
His timeline? “I think this will occur within a decade.”
The infrastructure gamble
The Consensus drives unprecedented infrastructure investment. Schmidt addressed this directly when asked whether massive AI capital expenditures represent a bubble:
“If you ask most of the executives in the industry, they will say the following. They’ll say that we’re in a period of overbuilding. They’ll say that there will be overcapacity in two or three years. And when you ask them, they’ll say, but I’ll be fine and the other guys are going to lose all their money. So that’s a classic bubble, right?”
But Schmidt sees a different logic at work: “I’ve never seen a situation where hardware capacity was not taken up by software.” His point: throughout tech history, new computational capacity enables new applications that consume it. Today’s seemingly excessive AI infrastructure will likely be absorbed by tomorrow’s AI applications, especially if reasoning-based AI systems require “many, many thousands of times more” computational power than current models.
The network effect trap
Schmidt’s warnings about international competition reveal why AI development resembles a “network effect business”—one in which value compounds with scale and market dominance becomes self-reinforcing. In AI, this loop manifests through:
- More data improving models;
- Better models attracting more users;
- More users generating more data; and
- Greater resources enabling faster improvement.
“What happens when you’ve got two countries where one is ahead of the other?” Schmidt asked. “In a network effect business, this is likely to produce slopes of gains at this level,” he said, gesturing sharply upward. “The opponent may realize that once you get there, they’ll never catch up.”
This creates what he calls a “race condition of preemption”—borrowing a term from computer science for a situation whose outcome depends on the timing and ordering of events. In geopolitics, it means countries might take aggressive action to prevent rivals from achieving an irreversible AI advantage.
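For readers unfamiliar with the computing term, a minimal Python illustration of a race condition in its original, technical sense looks like this (the shared counter and the two threads are made up for the example; the geopolitical analogy is Schmidt’s, not the code’s):

```python
import threading

counter = 0  # shared state updated by both threads without any lock

def work():
    global counter
    for _ in range(1_000_000):
        tmp = counter      # read the shared value...
        counter = tmp + 1  # ...write it back; the other thread may have updated it in between

t1 = threading.Thread(target=work)
t2 = threading.Thread(target=work)
t1.start(); t2.start()
t1.join(); t2.join()

# The "correct" answer is 2,000,000, but because the threads interleave
# unpredictably, the printed total is usually smaller: the outcome depends
# on the timing and ordering of events.
print(counter)
```

Schmidt’s point is that geopolitics may inherit the same property: whoever moves first, and in what order, can lock in an outcome the other side cannot later correct.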
The scale-free domains
Schmidt believes that some fields will transform faster due to their “scale-free” nature—domains where AI can generate unlimited training data without human input. Exhibit A: mathematics. “Mathematicians with whiteboards or chalkboards just make stuff up all day. And they do it over and over again.”
Software development faces similar disruption. When Schmidt asked a programmer what language they code in, the response—”Why does it matter?”—captured how AI makes specific technical skills increasingly irrelevant.
Critical perspectives on the San Francisco Consensus
The San Francisco Consensus could be wrong. Silicon Valley has predicted imminent breakthroughs in artificial intelligence before—for decades, in fact. Today’s optimism might reflect the echo chamber of Sand Hill Road. Fundamental challenges remain: reliability, alignment, the leap from pattern matching to genuine reasoning.
But here is what matters: the people building AI systems believe their own timeline. This belief, held by those controlling hundreds of billions in capital and the world’s top technical talent, becomes self-fulfilling. Investment flows, talent migrates, governments scramble to respond.
Schmidt speaks to heads of state because he understands this dynamic. The consensus shapes reality through sheer force of capital and conviction. Even if wrong about timing, it is setting the direction. The infrastructure being built, the talent being recruited, the systems being designed—all point toward the same destination.
The imperative of speed
Schmidt’s message to leaders carried the urgency of hard-won experience: “If you’re going to [invest], do it now and move very, very fast. This market has so many players. There’s so much money at stake that you will be bypassed if you spend too much time worrying about anything other than building incredible products.”
His confession about Google drove the point home: “Every mistake I made was fundamentally one of time… We didn’t move fast enough.”
This was not generic startup advice but a specific warning about exponential technologies. In AI development, being six months late might mean being forever behind. The network effects Schmidt described—where leaders accumulate insurmountable advantages—are already visible in the concentration of AI capabilities among a handful of companies.
For governments crafting AI policy, businesses planning strategy, or educational institutions charting their futures, the timeline debate misses the point. Whether recursive self-improvement arrives in three years or six, the time to act is now. The changes ahead—in labor markets, in global power dynamics, in the very nature of intelligence—demand immediate attention.
Schmidt’s warning to world leaders was not about a specific date but about a mindset: those still debating whether AI represents fundamental change have already lost the race.
Photo credit: Paris RAISE Summit (8-9 July 2025) © Sébastien Delarque