Growing numbers of experts are warning of the risks that AI poses to society.
AI (Artificial Intelligence) has certainly been in the news of late. While the abilities and sheer scope of AI are clearly amazing, there have been quite a few bumps in the road. Reports of bizarre behavior from AI-powered chatbots, including insults and even threats, have also surfaced. Now, it seems, the consensus among experts is that unfettered and unregulated AI poses a serious risk to humanity.
“Godfather of AI” Regrets His Life’s Work
Dr. Geoffrey Hinton, a renowned computer scientist who is widely credited with laying the AI groundwork that eventually led to the creation of popular chatbots such as ChatGPT and other advanced systems, recently resigned from his position with Google, saying that he can now speak openly about the risks of unrestrained AI development.
In a recent interview, Dr. Hinton said that he fears AI will become more dangerous in the future, with “bad actors” potentially exploiting advanced systems “for bad things” that will be difficult to prevent. Dr. Hinton went on to say: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have” ~ <source>
Elon Musk Calls For AI Safety Protocols
Elon Musk is urging a six-month pause in the training of advanced artificial intelligence models following ChatGPT’s rise, arguing that the systems could pose “profound risks to society and humanity.” Musk joined a group of more than a thousand experts in signing an open letter calling for a moratorium on AI development:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources” ~ <source>
Steve Wozniak Voices His Concerns
Apple’s famed co-founder Steve Wozniak is also among those who have signed the open letter calling for more oversight of AI development. In an interview with the BBC, “Woz” voiced his concerns that the technology could be hijacked and used for malicious purposes if it falls into the wrong hands. He argued that AI is now so intelligent that it will make it easier for “bad actors” to trick others about who they are.
“A human really has to take the responsibility for what is generated by A.I. We can’t stop the technology, but regulation is needed to hold Big Tech to account when it comes to what their artificial intelligence tools are capable of doing. These companies feel they can kind of get away with anything.” ~ <source>
You Read It First At DCT!
In an article published here at DCT earlier this year, I said the following:
“There must be safeguards. While there is little doubt that AI-driven technology is here to stay, it should also be incumbent on organizations such as Microsoft and Google to ensure that the technology they introduce is up to par and not prone to delivering false or misleading information prior to unleashing it en masse.”
Yes, I am giving myself a pat on the back. 🙂
—
Do we really want to have politicians regulate AI? Let’s get real. They can’t even balance a budget.
Agreed, Norbert. However, nobody has suggested that a regulatory body would consist of politicians. I’d imagine any such body would involve a panel of experts who are qualified in the field.
And who is more qualified than AI?