GPT-4, the latest version of the AI engine behind ChatGPT, could be making it easier for bad actors to build malware programs and create new cybersecurity threats, according to an expert.
“Good actors can use ChatGPT to craft and stitch code that is useful to society, but simultaneously, bad actors can use this AI technology for rapid execution of cybercrime,” said Oded Vanunu, head of products vulnerabilities research at Check Point Software.
ChatGPT is taking the world by storm with its ability to convincingly perform a wide range of written-language tasks that previously stood beyond the reach of artificial intelligence tools available to the average user.
OpenAI, the company behind GPT technology, unveiled GPT-4 last week, producing a new wave of awe and concern among experts.
OpenAI recently struck a $10 billion deal with Microsoft MSFT, which has incorporated the technology into its search engine Bing. The partnership has shaken the ground under the previously undisputed search leader Alphabet Inc GOOG GOOGL and its subsidiary Google.
ChatGPT’s potential to replace white-collar workers, as well as to produce massive amounts of fake or inaccurate information, has been noted by tech commentators since version 3.5 of ChatGPT was released to the public in November 2022.
Yet the AI’s less discussed but equally powerful programming abilities are now becoming a source of growing concern for researchers in the cybersecurity space.
Also Read: Microsoft CEO Satya Nadella Trashes Company’s Voice Assistant, Says It Was ‘Dumb As A Rock’ Compared To Current AI
Last week, the newly released version of ChatGPT was able to write the code for the classic video game “Pong” in less than 60 seconds, with minimal input from a user.
Check Point Research, the research arm of cybersecurity company Check Point Software, has identified new abilities in GPT-4 that open up various scenarios allowing threat actors to streamline malicious efforts and preparations. According to the company, these could result in “quicker and more precise outcomes to accelerate cybercrime.”
Check Point has previously found ways in which ChatGPT can be exploited by hackers. After the release of the new platform, Check Point said there was “a clear improvement on many levels,” yet several scenarios persist in which “bad actors can accelerate cybercrime in ChatGPT4.”
“ChatGPT4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity,” said Vanunu.
The executive said that bad actors can also use GPT-4’s quick responses to overcome technical challenges in developing malware.
- The company used GPT-4 to, among other things, build a malware program that collects PDF files from a user’s computer and sends them back to the attacker.
- Other possibilities include using ChatGPT for phishing, a technique in which hackers retrieve sensitive user information by impersonating an official entity.
- The company successfully used ChatGPT to streamline the creation of a text that a hacker could use to impersonate a bank via email and get users to enter their account information on an unofficial website.
- In another case, Check Point used ChatGPT to write a fake email that a hacker could use to trick employees of a company into sharing their corporate credentials.
- In a fourth case, the company was able to use ChatGPT to write the code for a “reverse shell,” a type of malicious program that gives an attacker remote access to a user’s computer over the internet.
While ChatGPT initially denied requests that included the words “malware” and “phishing,” the team at Check Point Research was able to trick the AI into creating these texts and lines of code by replacing those words with phrases of similar meaning.
Photo by Pixabay on Pexels.