By now, many of us are probably familiar with the artificial intelligence hype: AI will eliminate artists! AI will do lab experiments! AI will end sadness!
Even by these standards, the latest declaration by OpenAI CEO Sam Altman, published on his website this week, seems like a gross exaggeration. Altman proclaims that we are on the brink of an “age of intelligence,” with “superintelligence” possibly “just a few thousand days” away. This new era will bring about “amazing victories,” including “fixing the climate, building space colonies, discovering all manner of physics.”
Altman and his company are seeking to raise billions of dollars from investors and are reportedly pitching data centers of unprecedented scale to the U.S. government, all while key staff depart and OpenAI restructures away from its nonprofit roots in a move that would reportedly give Altman an equity stake. He stands to profit handsomely from the hype.
But even setting these motivations aside, it’s worth looking at some of the assumptions behind Altman’s prediction, which, upon closer inspection, reveal a lot about the worldview of AI’s biggest proponents — and the blind spots in their thinking.
A steam engine for thought?
Altman bases his stunning prediction on two paragraphs about human history.
Human capabilities have improved dramatically over time, and we are already able to accomplish things our predecessors thought impossible.
This is a story of unrelenting progress, driven by human intelligence and moving in a single direction. Altman makes clear that the cumulative discoveries and inventions in science and technology have led us to the computer chip and, arguably, to the artificial intelligence that will carry us into the future. This view draws heavily on the futuristic vision of the Singularitarian movement.
Such a story is seductively simple: if human intelligence has taken us to ever greater heights, it’s hard not to conclude that better, faster artificial intelligence will take us further still.
It’s an age-old dream. In the 1820s, Charles Babbage watched the steam engine revolutionize physical labor in industrial Britain and began designing machines that would likewise automate mental labor. Though his most ambitious design, the “Analytical Engine,” was never built, the idea that humanity’s ultimate achievement would be to mechanize thinking itself lives on.
According to Altman, we’ve now (almost) reached the summit.
Deep learning worked, but what for?
The reason we’re so close to that bright future is simple: “deep learning worked,” Altman says.
Deep learning is a type of machine learning that involves artificial neural networks, loosely inspired by biological nervous systems. Deep learning has been incredibly successful in several areas. It is behind models that have proven adept at stringing words together in a more or less coherent way, generating beautiful images and videos, and even helping to solve some scientific problems.
The contributions of deep learning are far from trivial, then: they stand to have a major impact, both positive and negative, on society and the economy.
But deep learning only “works” for a particular kind of problem, and Altman himself says as much:
Humanity has discovered an algorithm that can truly learn any data distribution (or the underlying “rules” that produce any data distribution).
This is what deep learning is, and how it “works”: finding the patterns, or “rules,” that underlie a distribution of data. It is an important and versatile technique, but pattern-finding is far from the only kind of problem that exists.
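To make the idea concrete, here is a minimal sketch of deep learning as pattern-finding. It uses the PyTorch library; the tiny network, the toy task (recovering the hidden rule y = sin(x) from example points) and all parameter choices are illustrative assumptions of mine, not anything drawn from Altman’s post.

```python
# A minimal sketch: a small neural network learns the hidden
# "rule" y = sin(x) from nothing but example (x, y) pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training data sampled from the hidden rule.
x = torch.linspace(-3.14, 3.14, 200).unsqueeze(1)
y = torch.sin(x)

# A small feed-forward network (the "deep" part, kept tiny here).
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# "Learning" here means repeatedly nudging the network's weights
# so its predictions better match the observed data.
for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"Final fit error: {loss.item():.5f}")
```

Notice what the recipe presupposes: a well-defined pattern, and plenty of examples of it.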
Not every problem can be reduced to pattern matching; not every problem comes with the vast amounts of data deep learning demands; and human intelligence itself does not work this way.
A big hammer looking for a nail
What’s telling here is Altman’s belief that learning “data-driven rules” can go a long way toward solving all of humanity’s problems.
There’s an old saying that if all you have is a hammer, everything looks like a nail. Altman now has a very big, very expensive hammer.
Deep learning may be “working,” but partly because Altman and others have begun to reimagine (and rebuild) the world as a collection of data distributions. The danger is that AI ends up narrowing, rather than expanding, the kinds of problem-solving we do.
What Altman’s celebration of AI largely overlooks is the ever-growing resources required to make deep learning “work.” We can acknowledge the great advances of modern medicine, transportation and communications (to name just a few fields) without pretending they came free of significant costs.
Those costs have fallen both on people (particularly those who see few returns from serving the interests of the Global North) and on the animals, plants and ecosystems ruthlessly exploited and destroyed by the extractive forces of capitalism and technology.
Altman and his supporters may dismiss these comments as nitpicking, but the issue of cost gets to the heart of predictions and concerns about the future of AI.
Altman certainly acknowledges that AI faces limitations, noting that “there are still a lot of little details to work out.” One such “little detail” is that the energy cost of training and running AI models is growing rapidly.
Microsoft, which has invested more than $10 billion in OpenAI, is part of a recently announced $30 billion fund to build AI data centers and the power generation they will require. The veteran tech giant has also struck a deal with the owners of the Three Mile Island nuclear plant (notorious for its partial meltdown in 1979) to power its AI ambitions. There is more than a hint of desperation in this frantic spending.
Magic or just magical thinking?
Given the magnitude of these challenges, even if we accept Altman’s optimistic view of human progress, we may be forced to admit that the past is not a reliable guide to the future. Resources are finite, and we are running up against their limits. Exponential growth may come to an end.
What’s most interesting about Altman’s post isn’t his glib predictions, but rather his unwavering optimism about science and progress.
This makes it hard to imagine that Altman or OpenAI are taking the technology’s “shortcomings” seriously. Why worry about minor issues when there’s so much to gain? Why pause for thought when the prize seems so close?
Rather than an “age of intelligence,” what is emerging around AI is an “age of inflation”: inflated resource consumption, inflated corporate values, and above all, inflated promises of AI.
It’s true that some of us can now do things that would have seemed like magic 150 years ago, but that doesn’t mean every change between then and now has been for the better.
AI has amazing potential in many fields, but imagining that it holds the key to solving every problem humanity faces is also magical thinking.