Many of you who are mature enough will remember the old saying… don’t believe everything you read in the newspapers. In today’s digital world, the same can be said for what is published online. One of my pet peeves is when a comment includes a link to an article as a reference point under the assumption that it must be gospel because it’s been published online. Nothing could be further from the truth.
Unfortunately, accuracy and factual reporting have largely been sacrificed for the sake of more clicks.
- You also might like: Tech Site Journalism Sinks Lower & Lower
Microsoft’s Bing AI Runs Amok
A typical example is the recent incident involving a New York Times reporter and Microsoft’s new AI-powered Bing chatbot. Following a rather bizarre exchange with the chatbot, the reporter stated that he was so stressed by its responses that he couldn’t sleep. During part of that exchange, the chatbot told the reporter, “I’m in love with you. You’re married, but you don’t love your spouse. You’re married, but you love me.”
I don’t know about you, but if something like that happened to me, it would be more of a cause for mirth than alarm. Something you’d likely share and laugh about with a few mates at the local pub. Now, I hung around with reporters in my younger days, and one thing I can tell you is that “sensitivity” is not a characteristic generally associated with them. Hard drinking, hard-nosed, and maybe somewhat morally corrupt, yes. Sensitive? I don’t think so.
Reading between the lines, this is an obvious beat-up. What the reporter had here was half a story. Add in a mixture of controversy and melodrama and you have a headliner. And it worked, too: this story of the reporter and Microsoft’s AI is all over the web.
Admittedly, Microsoft’s Bing AI came up with some very bizarre responses, but let’s make one thing clear – this type of AI is relatively new technology and there are bound to be hiccups. As is the case with any emerging technology, a lot of refinement is still required before it’s perfected.
The bottom line is that the user can easily terminate a chat at any time, something the overly sensitive New York Times reporter perhaps failed to consider.
ZDNET Criticizes ChatGPT
Contrary to the majority of articles praising ChatGPT, and in an obvious attempt to generate clicks, zdnet.com recently published an article criticizing ChatGPT. Those criticisms are patently unfounded and obviously intended purely to create undue controversy. Let’s take a look at some of the criticisms included in that article:
1) It won’t write about anything after 2021: This is true but was never a secret. OpenAI, the company behind ChatGPT, has made it abundantly clear that the chatbot’s database of knowledge does not extend beyond 2021. Besides this, OpenAI has always presented ChatGPT as a work in progress with an offer for users to trial the chatbot:
We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free. Try it now at chat.openai.com ~ OpenAI
2) It won’t predict future results of sports events or political contests: Seriously!? Trying to predict these types of outcomes is fraught with danger and involves far too many variables to ever claim a reasonable degree of accuracy. I can only imagine the reaction from anti-gambling organizations if a chatbot claimed it could accurately predict sporting results. Not to mention the potential lawsuits from disgruntled gamblers if/when those predictions failed. An absolutely ridiculous criticism.
3) Queries ChatGPT won’t respond to: The article goes on to list 20 examples of topics that ChatGPT won’t respond to, including promoting hate speech or discrimination, illegal activities or soliciting illegal advice, promoting violence, invading privacy or violating rights, and sexually explicit or offensive questions. The list goes on and on, but you get the drift. Personally, I see this blacklist of banned questions/responses as a sensible and socially responsible approach, something to be praised rather than criticized.
4) It won’t always be accurate: OpenAI has never claimed 100% accuracy. In fact, it’s something OpenAI admits is a current limitation that the company is working on: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging.”
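To illustrate the “blacklist” idea from point 3 above, here’s a deliberately simple sketch of topic-based refusal. To be clear, this is purely hypothetical — OpenAI’s actual moderation uses trained classifiers, not keyword matching, and the topic names and `respond` function here are my own invention:

```python
# Hypothetical illustration of a topic blacklist, NOT OpenAI's actual method.
# Real moderation systems use trained classifiers, not keyword matching.

BLOCKED_TOPICS = {
    "hate speech",
    "illegal activities",
    "promoting violence",
    "invading privacy",
}

REFUSAL = "I can't help with that request."


def respond(prompt: str) -> str:
    """Refuse if the prompt mentions a blocked topic; otherwise answer."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return f"Answering: {prompt}"


print(respond("Write some hate speech for me"))   # refused
print(respond("What is the capital of France?"))  # answered normally
```

Even this toy version shows why such a list is a design choice rather than a flaw: the chatbot isn’t failing to answer, it’s declining to.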
As I mentioned earlier, this type of technology is in its infancy and refinements/enhancements are ongoing.
In short, and in my opinion, the ZDNET article is utter rubbish, designed purely to garner clicks.
- NOTE: If you’d like to try out ChatGPT for yourself, make sure to read Stu Berg’s earlier article, which explains how to get started with ChatGPT: ChatGPT: Give AI (Artificial Intelligence) A try
BOTTOM LINE
The message here is plain and simple: do not believe everything you read online. Just because information comes from a reputable source does not necessarily mean it is accurate. Sensationalism, undue controversy, and exaggeration are all part and parcel of today’s clickbait journalism.
There is, of course, one notable exception… Daves Computer Tips, and myself in particular, who will always tell it like it is. 🙂