AI Versus Humans: The 'Singularity' Keeps Getting Postponed
Sam Altman is the CEO of the most visible artificial intelligence (AI) organization on the planet, OpenAI, purveyor of the popular ChatGPT interface. His job is to keep the investment dollars flowing into OpenAI, tens of billions of them. So, it's pretty important for Altman to keep investors interested and to promise them breakthroughs, and also, apparently, to reschedule those breakthroughs when they don't occur.
Now, something most people don't understand about OpenAI is that, despite a recent valuation of $500 billion, the company loses money: reportedly $5 billion last year on sales of $3.7 billion. Back in 2023, Altman was telling the public that OpenAI had achieved artificial general intelligence (AGI).
For those who don't know, AGI means intelligence capable of learning and executing all the tasks that humans can do. No one appears to know exactly how to measure whether a machine can do the totality of things a human can do, but it sounds very cool to talk about, and it's the kind of talk that keeps investors excited. The implication, of course, is that the investor class won't have to put up with pesky employees for most jobs much longer.
The moment when this happens, when machines become smarter than humans and start running everything for us (as if they don't already), is sometimes called the singularity, usually with a capital "S." Singularity has a specific meaning in physics, but in this context it refers to a nonreligious, tech version of the rapture in which technological advancement becomes very rapid as machines take over and iterate on technical innovation.
We know what happens to people in the religious version of the rapture—some ascend to heaven and others are left behind. But we aren't exactly sure what is to become of humans after the so-called singularity version of the rapture since supposedly there won't be much work to do. AI will be doing it.
But we may not have to concern ourselves with such things for the moment. Apparently, Altman's 2023 proclamation that AGI had been achieved didn't stick. He was back in December 2024 telling the public that AGI would arrive in 2025. Admittedly, 2025 isn't over, so I suppose AGI could be achieved by Dec. 31. But Altman evidently sees the writing on the wall and isn't waiting for the end of the year to move the goalposts once again, this time to 2030. Meanwhile, 2026 remains a popular prediction among other forecasters.
Just so you know, predictions of this momentous event are all over the place, some reaching out to 2060, and, not surprisingly, the predictions change over time. But I'm willing to make my own outlandish prediction: under current approaches, which rely on so-called large language models (LLMs), AGI will never happen.
There are several reasons I say this, apart from the difficulty of defining what "intelligence" means, which would require an entire essay by itself. Before getting to those reasons, let me say that I believe current AI development will result in some viable and possibly profitable applications. Clearly, people are using AI interfaces such as ChatGPT and gaining some benefit from them. But that is a far cry from LLMs taking on the lion's share of tasks currently performed by humans.
I remember when people said the introduction of the automated teller machine would lead to the extinction of bank tellers. It's been 50 years, and I can report that tellers are still working in the lobby of my bank. Machines, even machines directed by AI, are good at specific tasks. But they are unlikely anytime soon to replace the generalized skills of humans.
So, here's why the LLMs that power current AI are limited in what they can do. First, they are based on language. Language is an inherently imprecise tool of communication. Words have multiple meanings. Just look in any dictionary. And those meanings drift over time based on actual usage. That's why dictionaries are constantly updated.
And words are always understood in context. Context means the entire cultural and physical setting to which the words apply. Humans have a natural talent for language, and they learn it within specific cultural and physical settings, relating it to all five senses and placing the meaning of words and phrases in the context of the gestures and attitudes that accompany their utterance.
Machines don't have the chance to learn language in this way; nor do they have the full set of senses (and it's not clear what it would mean if they did). In fact, LLMs simply hoover up a lot of text from a so-called "training set" and use that text to predict, word by word, what should come next in response to whatever someone asks about.
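To make that point concrete, here is a toy sketch in Python of next-word prediction from a training set. It is only an illustration, not how production LLMs actually work: real systems use neural networks trained on billions of tokens rather than simple word counts, and the tiny corpus and function names below are invented for the example. Still, it captures the essential exercise: the model knows nothing but the statistics of the text it has ingested.

```python
# Toy next-word predictor built from a small "training set" of text.
# Real LLMs are vastly more sophisticated, but the core idea is the same:
# predict the next word from patterns in previously seen text, with no
# senses, no lived context, and no knowledge beyond that text.
from collections import Counter, defaultdict

training_text = (
    "the bank teller smiled. the bank teller waved. "
    "the bank machine beeped. the teller helped the customer."
)

# Count which word follows which in the training text.
follow_counts = defaultdict(Counter)
words = training_text.lower().replace(".", "").split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    followers = follow_counts.get(word.lower())
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("bank"))     # -> 'teller', the most common follower in the corpus
print(predict_next("justice"))  # -> '<unknown>', never seen; nothing to fall back on
```

Ask this toy model about anything outside its tiny corpus and it has nothing to offer. Scaling up to the whole internet changes the coverage, not the nature of the exercise.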
Humans can put language and other symbols in the context of their own lived experience. Machines by definition are not capable of lived experience in the manner of humans. Humans' lived experience becomes the basis for judgement, something machines cannot develop. Within judgement I include hunches, intuitions, and vague remembrances and connections that often inform human decisions and sometimes form the basis for new ideas and discoveries.
Second, the language of computers is code. Code is a severely pared-down version of language and is therefore much more limited in the ways it can represent reality. I can tell from the current discourse that the biggest boosters of AI almost certainly read few novels (except perhaps some science fiction). If they read more serious, non-science-fiction novels, they would understand what a monumental task it is to try to describe reality to a reader in words, and why the attempt will always fail.
Instead, what great authors do is provide enough of just the right details about settings, characters, dialogue, and action to ignite the imagination and lived experience of readers who then create in their minds a version of the world that the author is trying to convey. In other words, humans can create models of the world in their minds and consider possible meanings and trajectories that flow from those models. That's a very complex task.
Machines, no matter how sophisticated, cannot imagine a world based on such clues as an author might give them because they do not "live" in the world the way humans do.
Third, there is a very important corollary to points one and two, namely, the map is not the territory. It's a simple concept, really. But it is easy to forget when you are marooned in the land of bits, bytes, and computer animation and believe that computers are somehow giving you an accurate representation of the reality we live in.
What AI tells us is based on models, not lived reality; models built on imprecise and ever-changing language. AI may provide some useful information because of its ability to synthesize huge amounts of text, but it cannot convey understanding. It is merely giving us a map, and a very partial and often mistaken one at that.
Fourth, AI is not going to replace human expertise. The idea that AI is going to become expert at every topic is already being shown to be nonsense. Humans embody expertise and share some of that in books, articles, recorded speeches and interviews, and graphics. But we could not produce the next generation of chemists using only books about chemistry.
Knowledge is not just words on a page. Knowledge is embodied in those who have it: in the inflections they use in speech, the physical moves they make in the lab, the relationships they develop with their students and colleagues, the ideas they choose to emphasize in their work, and the overall style of their lives.
Try learning how to operate in a restaurant kitchen without ever actually going into one. The same goes for a laboratory, both for students and expert researchers. In addition, there are many bits of knowledge that might have been written down but which never make it to the page. Written documents are an outline or prompt to knowledge. They cannot be all-inclusive.
A friend who is a practicing attorney uses AI to compose routine contracts and agreements, models for which abound on the internet and are therefore available for AI to sweep into its databases. And, of course, the law generally prescribes narrow parameters for such documents, which makes AI less error-prone in composing them. Nevertheless, this attorney has to correct things that are wrong and, of course, modify text where the AI engine has not quite gotten the nuance right. AI is useful to her, but it cannot replace her expertise; and someone who has no expertise yet uses such raw output, presenting it as authoritative, is a positive danger to society. AI will be useful to experts, but it cannot replace them.
Many investors are betting that Sam Altman will be right about the advent of AGI. When they figure out that he's not, the curtain will come down on the AI stock bubble and probably take the whole economy with it. That's usually what happens when it becomes clear that the new era prophesied by the industry gurus of the latest "big thing" is just like old eras; there may be some genuine progress, but the value of the progress has been poorly understood and greatly overestimated.
"Trees do not grow to the sky" is an old German proverb. Nor do AI stocks rise forever. Every generation must learn the hard way that financial manias always end badly, even if the underlying companies provide some value that must be marked down to its actual contribution to society.