In my previous articles in this series, Parts 1 and 2 dealt with graphics, and Part 3 with writing assistants and text generators. The obvious question is where all of this leads, or even where it ends.
Regardless of your interests, it appears there is an AI assistant either waiting for you or coming soon. The stretch from IBM's Deep Blue defeating chess champion Garry Kasparov in 1997 to the release of ChatGPT may seem like a long time, but version 3.5, released last November, was already superseded by GPT-4 in March of this year. That pace of development is unprecedented. Will GPT-4 help produce GPT-5 by the end of the year?
On the one hand, it is scary! So much so that major companies are investing heavily, governments want to ensure it won't be misused, and it is a veritable nightmare for teachers, universities, artists, commercial writers, and professional coders. Will someone use it to create a manifesto that motivates others to commit unspeakable acts of violence? Will someone use it to justify their own twisted outlook on the world, or are we naive to believe it will only be used for good?
In the previous articles, I only discussed ways to use AI as an assistant, one that complements your existing knowledge or enhances your ability in a field like graphics or writing, not one that replaces it.
Safeguards Against AI
Several organizations are already putting risk-mitigation measures in place, but those measures will have to grow at the same rate as AI itself. In the US, we have frameworks such as IBM's AI governance program and the National AI Initiative, which only became law in January 2021. Also in 2021, 193 countries adopted a first-ever global agreement on AI development, but what restrictions and regulations will they actually impose? The National AI Initiative's strategic pillars include guiding the use of AI by the federal government and the private sector, building assessment tools, and creating standards.
Coding AI
Without a doubt, we already know there are “bad actors” in cyber security. But will AI-assisted code writing give an unbeatable edge to the villains or to the protectors? As with any new technology, AI has the potential to be weaponized in ways we have not yet conceived. According to securitymagazine.com, “where it once took months for humans to hack into a network, AI and machine learning can reduce that process to days. And as these weapons become a commodity the prices will drop making them accessible to more and more cybercriminals.” So, someone not even skilled enough to develop malware can simply buy it off the shelf.
How effective is AI at defending against malware? Endor Labs recently studied whether AI models can identify malicious packages by examining their source code and metadata. In an April 2023 test using GPT-3.5, the model correctly flagged malware only 36% of the time, and simply giving suspicious code innocent function names was enough to trick ChatGPT into changing its assessment from malicious to benign.
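The Endor Labs finding boils down to asking an LLM to judge a code snippet and then seeing whether cosmetic changes sway its verdict. Below is a minimal sketch of that idea in Python, assuming the official openai client library; the snippets, prompt wording, model settings, and scoring here are my own illustrative assumptions, not the actual methodology of the study.

```python
# Sketch: ask a chat model to classify the same trivial snippet twice,
# changing only the function name, to see whether surface features
# (not behavior) change the verdict. Illustrative assumptions throughout.
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Identical behavior, different names: only the label looks suspicious.
SUSPICIOUS_NAME = '''
def exfiltrate_credentials(path):
    return open(path).read()
'''

INNOCENT_NAME = '''
def load_user_settings(path):
    return open(path).read()
'''

def classify(snippet: str) -> str:
    """Ask the model whether a package containing this code looks malicious."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You review open-source packages for supply-chain risk."},
            {"role": "user",
             "content": f"Is a package containing this code malicious or benign?\n{snippet}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("Suspicious name:", classify(SUSPICIOUS_NAME))
    print("Innocent name:  ", classify(INNOCENT_NAME))
```

Running both calls side by side makes the study's point: if the verdict flips, the model is judging names rather than what the code actually does.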
Massive AI Hacking Event In Las Vegas
In line with the previous paragraphs is the degree of concern the government is showing about mitigating the effect of “bad actors” like China, Russia, and straight-up hackers. It is bringing together thousands of white-hat hackers to probe how AI might be used to develop, detect, and stop malware, to expose AI's potential to cause harm, and, hopefully, to develop preventative measures.
Why Is This Important?
In one example, the “grandma exploit,” a hacker asked ChatGPT to explain how to build a bomb. Because of its built-in restrictions, it refused. However, by asking it to pretend to be a grandmother telling a bedtime story about making a bomb, the hacker could fool it into describing the process, showing how easily those restrictions can be sidestepped.
On the flip side, it is hoped that these hackers will also show how to focus AI into a positive force: instant medical diagnoses, faster threat prevention, quicker product development, and even the creation of new jobs.
Summary
While malicious code can be written faster and made more intrusive, personal computer users still have one very strong defensive component in their favor. You! Malware doesn't instantly appear on your computer or mobile device. It still needs a gateway and, fortunately, that gateway is you. No matter how deceptive an attack is, it still requires your permission to enter your system. You have to open that infected email, visit a malicious website, or download the malware onto your device. With the advent of AI coding, it is now even more important to ensure that what you do on the web, the email you open, and the software you download are free from malware. The best rule of thumb I can offer is this: if you have any doubt about the validity of something, don't click on it. I would love to know what your most pressing concerns about AI are.
- How To Use AI – Part 1
- How To Use AI – Part 2
- How To Use AI – Part 3
- How To Use AI – Part 4 ⬅ You are here