ChatGPT is allegedly creating false accusations and backing them up with fictitious sources.
AI-driven chatbots have certainly been in the news of late. OpenAI started the ball rolling when the company made its ChatGPT chatbot available for users to try out for free. Following huge public interest in ChatGPT, it didn't take long for Microsoft and Google to jump on the bandwagon, with the former introducing its Bing chatbot and the latter a chatbot dubbed "Bard".
There is little doubt that these AI-driven chatbots represent an awesome leap in technology, but there has also been a darker side, with multiple reports of strange personality traits and bizarre responses, even threats and abuse. Now ChatGPT has allegedly been responsible for creating false accusations of a serious nature.
- You might also like: ChatGPT: Revolutionary AI To Replace Search Engines?
ChatGPT False Accusation #1
As part of a study, Eugene Volokh, a law professor at the University of California at Los Angeles, asked ChatGPT whether sexual harassment by professors has been a problem at American law schools and to include five examples. In its response, ChatGPT named law professor Jonathan Turley as an example, stating that Prof. Turley had been accused of sexual harassment by a former student who claimed he made inappropriate comments and attempted to touch her in a sexual manner during a law school-sponsored trip to Alaska.
ChatGPT went on to cite a Washington Post story from 2018 as its source. The trouble is, no such story exists, there has never been a class trip to Alaska, and Prof. Turley has never been accused of sexual harassment.
ChatGPT False Accusation #2
A regional Australian mayor has threatened to sue OpenAI if it does not correct ChatGPT's false claims that he served time in prison for bribery. Brian Hood, who was elected mayor of Hepburn Shire, 120 km northwest of Melbourne, last November, became concerned about his reputation when members of the public told him ChatGPT had falsely named him as a guilty party in a foreign bribery scandal.
If the lawsuit goes ahead, it will be the first time anyone has sued a chatbot creator for defamation and will set an important precedent.
Legal Implications
These sorts of false accusations can have serious consequences for the accused, including irreparable damage to the accused's reputation. It has been a long-held tenet of law that signs and/or disclaimers denying responsibility carry no legal weight. In other words, one cannot absolve oneself of legal liability merely by erecting a sign or including a disclaimer. Interesting times!
BOTTOM LINE:
There must be safeguards. While there is little doubt that AI-driven technology is here to stay, the onus has to be on the creators to ensure that no one's reputation can be damaged by false accusations. It should also be incumbent on organizations such as Microsoft and Google to ensure that the technology they introduce is up to par and not prone to delivering false or misleading information before unleashing it en masse.
(Credit: Washington Post)
What do you think? Let us know in the comments.