OpenAI's ChatGPT, an AI-powered natural language processing tool, has made waves since its launch in November, attracting over 1 million users for a range of applications, from creative pursuits like writing poems and email campaigns to generating code for websites and apps. In response to worries that it could be used for cheating, some schools have banned ChatGPT, and Google has followed the trend by introducing its own AI-powered chatbot, Bard.
OpenAI has confirmed that the popular online tool lacks internet access, limiting its ability to provide real-time answers to user queries. Instead, the program generates modified or inferred code based on preset parameters. While ChatGPT strives to help across a range of topics, its content filters are designed to prevent it from responding to questions that could pose security risks, such as code injection. However, the exploration of bypassing these filters, and the potential use of ChatGPT to create polymorphic malware, has over the last few months moved this topic from a hypothetical scenario to a very real concern.
This adds a potential dark side to ChatGPT: it can be used to carry out a range of malicious activities, including phishing scams, spam messages, social engineering attacks, fraudulent customer support, and the spread of false information. Just weeks after ChatGPT debuted, Check Point, a software company based in Israel, combined ChatGPT with another AI-based system that translates natural language into code to create a phishing email carrying a malicious Excel file. The file was weaponized with macros that download a reverse shell, a favorite among cybercrime actors. Check Point did not write a single line of code and instead let the AIs do all the work.
The cybersecurity community, which has historically been wary of the potential consequences of modern AI, is now also taking note, fearing that a tool like ChatGPT could be exploited by hackers with limited resources and no technical expertise, for example to accelerate the extraction of usernames when enumerating against a login screen or to produce authentic-looking phishing emails.
In the case of phishing scams, ChatGPT can not only be used to write code but also to create emails that appear even more authentic, especially when English isn't the recipient's first language. This allows even cybercriminals with very little knowledge to distribute spam messages, spread malware, collect data, and cause harm to systems and networks. ChatGPT's ability to mimic human-like interactions in real time also makes it a powerful tool for social engineering attacks, whether impersonating customer support or operating a live chat on social media.
Others believe that tools like ChatGPT will be a sizable force multiplier for cyber defense and will change the game when it comes to learning the secrets of malicious code, given how few malware analysts there are in the world right now.
ChatGPT was trained on a massive corpus of 570GB of textual data derived from a variety of sources, such as books, social media posts, and online articles. This data contained approximately 300 billion words, which would take a human thousands of years to read.
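As a rough sanity check on that scale, the "thousands of years" figure can be verified with back-of-the-envelope arithmetic. The reading speed below (about 250 words per minute) is an assumption, not a figure from the article:

```python
# Back-of-the-envelope check: how long would it take a human to read
# ~300 billion words? The reading speed is an assumed average.

WORDS = 300_000_000_000  # approximate word count of the training data
WPM = 250                # assumed average adult reading speed (words/minute)

minutes = WORDS / WPM
years = minutes / 60 / 24 / 365  # reading non-stop, 24 hours a day

print(f"{years:,.0f} years")  # roughly 2,283 years of continuous reading
```

Even at a generous non-stop pace, the result lands comfortably in the thousands of years, consistent with the claim above.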
In the grand scheme of things, the use of ChatGPT for malicious cyberactivities looms as a major concern, but not one that is set in stone. Undoubtedly, ChatGPT's capacity to mimic human-like language has the potential to become a potent weapon for cybercriminals, unleashing a wave of phishing scams that could cripple organizations and individuals alike. Yet, with the right awareness and security protocols in place, the risk of such attacks can be mitigated. However, it is incumbent upon the tech industry to keep a watchful eye on the development and potential abuse of these advanced AI language models, and to pioneer robust safeguards that can identify and thwart malicious exploitation.