The Next Great Decoupling: AI Takes Control
Last night I binge-watched the latest three episodes of Star Trek: Discovery, which set up the season 2 finale – spoiler alert – a battle royale between “Control” (an AI that is doing all it can to achieve consciousness, so it can wipe out all sentient life in the galaxy) and the crew of the U.S.S. Discovery, who will try to save the galaxy using only their wits, two starships, and a time travel suit. While I have been ignoring the laws of physics (and computer science) and suspending my disbelief to improve the quality of my “Trekkie” enjoyment since 1966, there was something thought-provoking about this science fantasy threat.
In his book Homo Deus: A Brief History of Tomorrow (Harper Perennial, 2017), author Yuval Noah Harari predicts a new version of the Great Decoupling is upon us. This time, instead of the economists’ version – where the trend lines for productivity, wages, jobs, and GDP growth seemingly decoupled – Harari suggests we are on the verge of something different: the separation of intelligence (AI) from consciousness (human). Harari is certainly not the first person to think of this, but I really like the way he writes. (He also wrote Sapiens: A Brief History of Humankind. Both books are great reads!)
In Homo Deus, Harari posits that if we successfully decouple intelligence from consciousness,
1. Humans will lose their economic and military usefulness, hence the economic and political system will stop attaching much value to them.
2. The system will still find value in humans collectively, but not in unique individuals.
3. The system will still find value in some unique individuals, but these will be a new elite of upgraded superhumans rather than the mass of the population.
Harari builds the case for these three apocalyptic prophecies by offering as axiomatic that “organisms are algorithms,” and (to paraphrase) that the algorithms are in control.
Intelligence vs. Control
Certainly, when an app such as Waze tells us where to go, it must “think” about how many vehicles it sends down any particular route. It was created to reduce travel time for vehicles on the road, and it does a very good job, which is why people use it. In practice, the more people use it, the better it gets. It “learns.” Harari’s thesis would treat the totality of Waze as one giant algorithm. That is not strictly accurate, but let’s go with it for the sake of argument.
Is Waze in control? It depends on your point of view. Waze is telling you how to go. But it is not telling you why. It is not forcing you to take the suggested route (although you or your autonomous vehicle might decide Waze knows best). It is suggesting a route that has the highest probability of getting you to your destination in the shortest period of time.
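If you want to see the bones of this, here is a minimal sketch (in Python) of what “suggest the fastest route” looks like as an algorithm. To be clear, this is not Waze’s actual code – it is a toy shortest-path search (Dijkstra’s algorithm) over a road graph whose edge weights are expected travel times, with a hypothetical report_travel_time step standing in for the “learning” that happens as drivers feed data back into the system.

```python
import heapq

def fastest_route(graph, start, dest):
    """Toy Dijkstra search. graph maps node -> {neighbor: expected_minutes}.
    Returns (total_minutes, route) with the lowest expected travel time."""
    queue = [(0.0, start, [start])]           # (cost so far, node, path taken)
    best = {}                                 # cheapest known cost per node
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if cost >= best.get(node, float("inf")):
            continue                          # already reached this node faster
        best[node] = cost
        for neighbor, minutes in graph[node].items():
            heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []                   # destination unreachable

def report_travel_time(graph, a, b, observed_minutes, rate=0.5):
    """Hypothetical 'learning' step: nudge the expected time on edge a->b
    toward what a driver actually experienced."""
    graph[a][b] += rate * (observed_minutes - graph[a][b])

# A tiny, made-up road network: expected minutes between intersections.
roads = {
    "home":    {"main_st": 5, "highway": 3},
    "main_st": {"office": 10},
    "highway": {"office": 10},
    "office":  {},
}
print(fastest_route(roads, "home", "office"))       # (13.0, ['home', 'highway', 'office'])
report_travel_time(roads, "highway", "office", 30)  # drivers report heavy traffic
print(fastest_route(roads, "home", "office"))       # now (15.0, ['home', 'main_st', 'office'])
```

The point of the sketch: the app optimizes a number (expected minutes to destination). The “why” of your trip never enters into it.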
When you use Google to research a topic, is Google in control? It certainly controls what you see at the top of your search results page. But (at the moment) it does not tell you what to search for or why.
To my knowledge, neither Waze, nor Google, nor any other AI is conscious – but I have no way to comprehend computer consciousness, any more than a computer can comprehend human consciousness. I am not a computer, and computers are not human – at least not yet.
Will humans become as useless as Harari suggests when consciousness and intelligence are decoupled? Maybe. But there is something more sinister and disturbing that may occur before his predicted future arrives.
What Star Trek Made Me Think About
When will all the data (or a significant amount of data) from all the disparate, specialized, purpose-built artificial intelligence systems be hacked into a single, massive artificial control system? Or even worse, several competing massive artificial control systems? We could call it Meta-AI or Artificial Control – but whatever we call it, it won’t be good for us.
It may be achieved with digital computational devices (the computers you already know and love), which represent varying quantities symbolically as their numerical values change. Or it may be accomplished with analog computational devices (you probably don’t own an electronic analog computer, as they don’t run common software), which use the continuously variable aspects of physical phenomena to solve problems. Or some digital-analog hybrid. Or we may have to wait for quantum computers (which, for certain classes of problems, promise computational power exponentially greater than previous technologies) to go online. Of course, big government, big corporations, or nation-states may corner the market on quantum computing and use it for control, but that is for science fiction writers, not me, to deal with.
Whatever technology ends up being tasked with (or seizing) artificial control, the thought of artificial control scares me way more than the thought of rogue artificial intelligence. Even Harari’s dystopian future of useless (conscious but not intelligent enough) humans doesn’t make the hair on the back of my neck stand up in the same way. Once something (conscious or not) achieves artificial control, we will be somewhere new.
Other than the purveyors of fear, uncertainty, and doubt, most people think about AI as just another tool – the way we think about a hammer or a drill. But AI is not just another tool. Hammers don’t think about human needs or consider what needs to be built. Humans control hammers.
Artificial control will be another thing altogether. It will shape our human needs by positively reinforcing behaviors that help it achieve its goals (whatever they may be). Then it will give us more of whatever we become addicted to until it actually changes our behaviors – sort of like social media addiction. Oh, wait – a nascent version of artificial control may already be here.
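That feedback loop is disturbingly easy to sketch. Below is a toy version – every name and number in it is my own assumption, not anyone’s production system – using an epsilon-greedy bandit, one of the simplest algorithms that could sit behind a content feed. It learns which category of content holds a user’s attention longest, then serves more of it.

```python
import random

# Hypothetical average minutes a user lingers on each content category.
# A real system would learn this per user from billions of events.
true_engagement = {"news": 2.0, "friends": 4.0, "outrage": 9.0}

estimates = {item: 0.0 for item in true_engagement}  # the system's learned beliefs
counts = {item: 0 for item in true_engagement}

def pick(epsilon=0.1):
    """Epsilon-greedy: mostly serve whatever has hooked the user so far,
    occasionally explore something else."""
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for _ in range(10_000):
    item = pick()
    minutes = random.gauss(true_engagement[item], 1.0)       # noisy engagement signal
    counts[item] += 1
    estimates[item] += (minutes - estimates[item]) / counts[item]  # running average

print(counts)  # "outrage" dominates: the loop feeds whatever holds attention
```

Run it, and “outrage” wins almost every impression – not because anyone told the system to prefer outrage, but because maximizing engagement is the only goal it has.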
Shelly Palmer is Fox 5 New York's On-air Tech Expert (WNYW-TV) and the host of Fox Television's monthly show Shelly Palmer Digital Living. He also hosts United Stations Radio Network's ...
No, no spoilers please!
I know there is a utopian cult surrounding self-driving cars. But they can't see in snow. They can't see where lanes aren't clearly marked. They can't make sense of traffic cones. They can't make high-speed decisions. They can't even handle a four-way stop. There is so much they cannot see. Humans can see.
Wow, with all due respect, HAL is not on the way. Ford says self-driving cars will never be able to be fully autonomous. The hype is wearing off, but people are still trying hard.
And Bill Gates once said we'll never need more than 640KB of memory. Never say "never." That's a strong word. Maybe we're 5 years away, maybe 20, but we WILL get there.
Ford said never. Misallocated investment?
I agree with Adam. I think we're a long way off, but only a fool says never. And if Ford said never... well, then that was as short-sighted as what Bill Gates said all those years ago.
Pretty much everything that people once said computers could never do well, they can now do. They said a computer could never beat a person at chess. Done. They said speech recognition could never be mastered. Done. And so on and so on.
But those are narrow uses of AI. There won't be a HAL driving on freeways.
HAL was due out back in 2001. Still nothing anywhere near resembling that level of artificial intelligence. Which I think is a good thing!