There’s a renewed sense of optimism sweeping through Silicon Valley. It’s as though the doom and gloom of years past has lifted and, instead of an “AI winter,” we’re getting human-level artificial intelligence ahead of schedule, writes Tristan Greene.

Spring is here. Despite the specter of a GPU shortage caused by Donald Trump’s tariffs looming over the immediate future, the forecast for the AI industry appears as sunny as it’s ever been. A recent Statista report predicted a compound annual growth rate (CAGR) of 26.60% for the AI market from 2025 to 2031.
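For a sense of scale, that rate compounds quickly. A rough back-of-the-envelope sketch (the starting figure below is a placeholder, not Statista’s actual base number):

```python
# Rough sketch: what a 26.60% compound annual growth rate (CAGR) implies.
# The base market size is a stand-in; Statista's figure isn't quoted here.
cagr = 0.2660
years = 6  # 2025 through 2031

multiplier = (1 + cagr) ** years
print(f"Growth multiplier over {years} years: {multiplier:.2f}x")
# ~4.12x: a market growing 26.60% annually roughly quadruples by 2031.
```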

And the headline-grabbing arrival of China’s DeepSeek, a world-class AI model allegedly trained for a fraction of what Western models cost, only seems to have emboldened US market leaders.

Research on advanced artificial intelligence has reached an all-time high, both in the resources invested and in the number of people around the globe working to develop the next generation of AI models.

And insiders from across the spectrum of the technology world appear to have formed a majority opinion: it’s no longer a question of if a human-level artificial intelligence will emerge, but when.

Technology optimists

At the beginning of the decade, the median prediction, according to research from “AI Multiple,” was that artificial general intelligence (AGI) would arrive sometime around 2060. Then, by 2023, predictions shifted to around 2050. Today, it’s 2040. 

While there’s no legal or scientific definition for “AGI,” insiders and pundits appear to have tacitly agreed that a “general” AI would be capable of doing most tasks requiring intelligence that a typical human could do. The arrival of such a machine within the next 15 years would be incredible, but many experts are convinced that it could happen much sooner. 

Jack Clark, the cofounder of Anthropic and author of the Import AI newsletter, recently shared his belief “that powerful AI systems are going to arrive soon – likely during this presidential administration.”

OpenAI CEO Sam Altman said in January that the firm was “confident” it knew how to achieve AGI, and he predicted it could produce agents capable of joining the workforce before the end of 2025.

And the researchers aren’t alone. Pundits around the globe seem to be embracing the idea that machines can “think” and “reason.”

In years past, when six-month AI pauses seemed reasonable and urban legends about Facebook’s AI agents inventing their own language struck terror into the hearts of journalists, such proclamations might have drawn at least a modicum of rebuke. But the optimism sweeping through Silicon Valley is no longer confined to California.

Nearly 3,000 miles away, in New York, the writers and editors at the Times are deeply enamored of the machines’ progress. Despite the Times’ ongoing lawsuit against ChatGPT maker OpenAI, its coverage of generative AI models remains mostly glowing.

Stories on AI-powered “vibecoding” and explainers on how ChatGPT and DeepSeek are capable of “reasoning” set the stage for wonder, while those warning that powerful AI is coming and we’re not ready set the stakes.

Wired’s Steven Levy, also reporting from New York, recently wrote that, if Anthropic succeeds, a nation of benevolent AI geniuses could be born.

Whether we’re excited, scared, or both, it seems we’re all expecting something huge to happen soon. 

The developers behind the world’s most popular chatbots seem to have convinced much of the world that “human-level” or “general” artificial intelligence is within reach. It’s an exciting time to be an optimist.

Technology cynics

Not everyone agrees with the most popular timeline predictions for the emergence of AGI, however. 

NYU Professor Gary Marcus, a mainstay of the AGI timeline debate, has consistently argued that accelerated timelines and statements declaring AGI “imminent” are either exaggerated or unfounded. In April 2024, he tried to get Elon Musk to accept a $10 million wager after the Tesla and xAI CEO boldly predicted AI would surpass human intelligence by the end of 2025.

Marcus and those who share similar opinions might see themselves as realists rather than cynics. While progress toward AGI is impressive, it bears repeating: there’s no scientific definition of “human-level intelligence.”

Benchmarks measure progress, but, depending on the benchmark, that progress can be misleading. Many of the strongest AI models have learned to game benchmarks through “reward hacking,” winning challenges without actually solving them.
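As a toy illustration of the general idea (a hypothetical scorer, not any real benchmark or model): if a benchmark grades answers by a proxy such as keyword overlap with a reference answer, output that targets the proxy can score perfectly while saying nothing at all.

```python
# Toy illustration of benchmark gaming. The scorer rewards keyword overlap
# with a reference answer, so an "answer" can score highly without being
# correct. Purely hypothetical; not taken from any real benchmark.

def proxy_score(answer: str, reference: str) -> float:
    """Fraction of the reference's words that appear in the answer."""
    ref_words = set(reference.lower().split())
    ans_words = set(answer.lower().split())
    return len(ref_words & ans_words) / len(ref_words)

reference = "the capital of france is paris"
honest_wrong = "I believe the answer is Lyon"
keyword_stuffed = "paris france capital the of is"  # gibberish

print(proxy_score(honest_wrong, reference))     # ~0.33, penalized for honesty
print(proxy_score(keyword_stuffed, reference))  # 1.0, despite saying nothing
```

Real reward hacking is subtler than keyword stuffing, but the shape is the same: optimize the measurement rather than the task.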

Yet, when tested on benchmarks that ostensibly can’t be gamed, such as ARC-AGI-2, developed by Francois Chollet’s ARC Prize, these same models struggle to score as high as 5%, against an average human score of 60%.

A different kind of optimist

The divide between the voices of doubt and those who choose to believe could be explained in any number of ways. Perhaps the world is weary from war, political unrest, and global economic uncertainty. Maybe nobody really wants to talk about killer robots or the potential for yet another technology bubble to crash the markets. 

Chatbots make us feel good. They’re friendly, often helpful, and always there for us.

And the idea that we might all be a part of what would certainly be the greatest scientific achievement since the Manhattan Project is an appealing alternative to more doom and gloom. 

Also, unlike the development of the atomic bomb, we all get to play with AI models as they become more capable. 

According to analysts, more people have engaged with generative AI systems such as ChatGPT, Claude, and Gemini in the past year than with Netflix or X.com.

That doesn’t mean everyone who uses a chatbot is an AGI optimist, however. Many people are optimistic about the technology itself but don’t necessarily believe it’ll reach “human-level intelligence” within a matter of months.

Determining the truth

We spoke with Teppo Felin, a professor at Utah State University and the founding director of USU’s Institute for Interdisciplinary Study, to discuss his thoughts on the semantics of AGI and the current timelines for its emergence.

He describes himself as someone who is very optimistic about the potential for AI technology to uplift humanity. But he remains unconvinced that “human-level AI” is imminent.

“I use these tools every day, all the time,” said Felin, adding: “I don’t know that there is intelligence or reasoning taking place. I think it mimics reasoning that it’s seen in the past.”

When asked if today’s models were strong enough to support the notion that human-level AI was just around the corner, he told us that “fundamentally, the architecture is what it is.” He seemed unconvinced that current techniques would scale to AGI.

However, instead of trying to find correlations between human and machine intelligence, Felin said he was “happy to embrace multiple forms of intelligence.” He sees AI models as tools. 

Felin’s current work explores whether prediction (the next-token “autocomplete” method that chatbots use to generate text) can ultimately lead to “decision making.” He and research partner Mari Sako recently published an article arguing that it takes more than data and rewards to make decisions like a human.

Per their article:

“Humans can engage in counter-to-data reasoning about the plausibility of outcomes that presently lack data. It is this causal reasoning that enables humans to intervene in their surroundings and to experimentally generate new data. As a result, under certain circumstances, human decision making involves a very different set of steps from data-driven prediction.”
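The “autocomplete” loop Felin describes is worth making concrete. Here is a deliberately minimal sketch: a toy bigram counter standing in for the vast neural networks real chatbots use, but with the same core loop of predicting the next word, appending it, and repeating.

```python
# Minimal sketch of next-token prediction, the "autocomplete" loop Felin
# describes. A real chatbot uses a neural network trained on vast text;
# this toy just counts which word followed which in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count bigrams: for each word, tally the words that followed it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        # Greedy "autocomplete": pick the most frequent continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat sat"
```

Everything the toy produces is recombined training data; nothing in the loop intervenes in the world or generates new data, which is precisely the gap Felin and Sako point to.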

This distinction is timely, given that, just a decade ago, using words such as “thought,” “reason,” and “intelligence” to describe what computer scientists would simply have called “machine learning” prediction algorithms was considered gauche.

Yet, even as some researchers have begun walking back claims that these machines are performing higher cognitive functions, others have fully embraced the formerly taboo terminology to describe even the most modestly “intelligent” models.

The question that remains is which camp is correct. Is AGI imminent? Unfortunately, there’s no real answer, because there’s no apparent threshold for AGI.

We don’t have a number we can count up to or a special “Turing Test” we can implement to declare the AGI era upon us. Instead, we’ll all have to rely on our feelings. And, for many, whether we’re cynics, optimists, or something else entirely, the reality of the situation seems obvious.

But, as Felin wrote in a 2018 article titled The Fallacy of Obviousness, “what people are looking for — rather than what people are looking at — determines what is obvious.” 

See also: The Big Hallucination
