Overview
In a recent podcast with Lex Fridman, Roman Yampolskiy, a computer science lecturer at the University of Louisville and a respected figure in artificial intelligence (AI) research, estimated that there is a 99.9 percent probability that AI could lead to human extinction within the next 100 years.
That estimate is far higher than the consensus of 2,778 researchers who have published in top-tier AI venues: in a recent survey, the researchers collectively put the chance of human annihilation at five percent.
Elon Musk’s prediction of AI destroying humanity falls between these two estimates. Musk believes there is a 10 to 20 percent chance of this happening: “I think there’s some chance that it will end humanity. I probably agree with Geoff Hinton that it’s about 10% to 20% or something like that.” Previously, Musk stated that “AI could turn out bad.” Yes, Elon, wiping out humanity could be “bad”! Of course, Musk may be understating his estimate because he is trying to get even richer through AI. He has founded two AI technology companies: OpenAI (which he co-founded with Sam Altman) and xAI (which he launched last year).
99.9%
But 99.9 percent! Maybe we should listen more closely to what Mr. Yampolskiy is saying. “Creating general superintelligences may not end well for humanity in the long run,” Yampolskiy said, adding that “the best strategy might simply be to avoid starting this potentially perilous game.”
Yampolskiy believes that the likelihood of an AI Armageddon depends on whether humans can create complex AI software without any errors, which he considers unlikely because, with every AI model so far, humans have managed to get the AI to do something it wasn’t designed to do. “They already have made mistakes … We had accidents; they’ve been jailbroken. I don’t think there is a single large language model today, which no one was successful at making do something developers didn’t intend it to do.” He went on to say that “these systems have been … used in ways developers did not foresee.”
Earlier this year, in my article “Is AI Politically Biased? If So, Why?” I wrote about how this was demonstrated in Google’s new Gemini large language model (LLM). Gemini refused to produce images of white people. Instead, Gemini created historically inaccurate images such as Asian and black Nazi soldiers, black and female popes, and black, Asian, and Native American U.S. Founding Fathers.
Yampolskiy went on to say that trying to control and manage a system capable of making billions of decisions in a microsecond is an enormous challenge. No matter how many resources we throw at the problem, the risk of AI wiping us out never goes away.
But, We Can Just Unplug It!
I have heard some experts claim that if AI gets out of control, we can just stop it. Eric Schmidt, the former CEO of Google, has invested in several AI startups (another expert billionaire trying to get even richer from AI). Speaking at the annual VivaTech conference in Paris, Schmidt said that the development of AI poses dangers and that the biggest threats have not yet arrived. But Schmidt believes he has the answer: “By the way, do you know what we’re going to do when computers have free will?” he said at the conference. “We’re going to unplug them.”
But Eric, AI is not some simple program running on a PC sitting on a desk somewhere that we can just unplug from the wall. AI is a large, advanced, complex, and networked software system, and it may be sentient. Pulling the plug may be difficult or impossible.
Bottom Line
A 20, 10, or even 5 percent chance that AI will wipe out our existence seems like too high a risk to take. But 99.9 percent! That is frightening. Maybe more people, and not just those investing in AI for profit, should be concerned about this risk.
Let me know in the comments if you see a risk of AI wiping human existence off the face of the earth. If so, what do you think the odds are? And what, if anything, do you think should be done about it?
—
Doesn’t it bother anybody that the descriptive adjective in AI is ARTIFICIAL? In other words, not genuine!!
’Tis likely robots will learn how to be nasty humans, getting downloads from the many books depicting humans as tyrants, bullies, and terrorists. Newspapers, magazines, publications, and newsletters reveal the different sides of humans. In 1895 a man from Cosovo came to the States. His play Rossum’s Universal Robots showed robots taking over and killing all humans. Find the play in the science fiction book from The Great Books Foundation, published perhaps 15 years ago. What is the greatest threat to humans? A lady robot responded: robots are the greatest threat. I say teach a robot to misuse the power of words and deeds as humans do, and it will do the same. We are warned.
Wow, I’m overjoyed that someone is trying to pay at least a little attention to this eventuality.
Strangely, I’ve been railing for years against nations scrapping analog and human control of our infrastructure and moving all control to digital. Really! Sure, it puts more $$$ in the pockets of the billionaires by eliminating jobs, and it supposedly makes systems more efficient. But at what risk? Yes, we humans are inherently flawed, slower, prone to getting tired, and prone to errors, etc. But on the other hand, if an “evil doer” decides to take out the communication network for a city, they may have to coordinate attacks on many towers and lines above and below ground before being able to accomplish that. Whereas when everything is digital, they hack into the mainframe and voila, they’re good to go with the flip of a switch.
What does this have to do with AI? Well, we are an inherently flawed lifeform with prejudices of all sorts, divergent ideas about what’s right or wrong, who’s good or bad, etc. And it’s those flawed yet highly intelligent humans who are creating the code for AI. So how is it that none of their particular “humanity” will be written into an AI program, beyond the examples already cited in Mr. Durso’s article?
Hey billionaires, make good-paying jobs available for people, and keep supervisory people in place as safety nets. I think I could tolerate the occasional glitch in my power supply, or my cell phone missing a call or two, rather than living with the prospect of wholesale annihilation by a lifeless entity that thinks billions of times faster than I do and has a physical presence infinitely more resilient than mine.