According to Check Point Research (CPR), cybercriminals are already leveraging OpenAI's ChatGPT to generate text and code for malicious tools, even when they have no development skills. While the tools observed so far are basic, real cases of AI-assisted threats to cybersecurity have already emerged.

ChatGPT AI-generated text and code: Real cases of threats to cybersecurity

  • March 29, 2023

ChatGPT is a chat interface to OpenAI's large language models (LLMs); the underlying GPT-3 model was launched in June 2020, and the chatbot itself debuted in November 2022. Its release generated significant interest in the potential uses of AI. However, it has also brought a new dimension to the cyber threat landscape, as less-skilled threat actors can now launch cyberattacks more easily using code generation.

Check Point Research’s (CPR) analysis of several major underground hacking communities reveals that cybercriminals are already using OpenAI to develop malicious tools. Many of these cybercriminals have no development skills, and the tools presented in the report are basic. However, it is only a matter of time before more sophisticated threat actors enhance their use of AI-based tools for malicious purposes.

ChatGPT-generated text and code pose serious threats to cybersecurity

Phishing, the practice of sending fraudulent emails to trick users into clicking on a malicious link, is a major area of concern. Previously, these scams were often easy to spot thanks to grammatical or linguistic errors; text generated by artificial intelligence can now impersonate other people in a highly realistic manner.

Similarly, the use of ChatGPT to fabricate false social media activity can make online fraud more convincing. Furthermore, the AI’s capability to imitate the mannerisms and speech of specific individuals could lead to abuses involving propaganda, hate speech, and misinformation.

Apart from plain text, ChatGPT can also produce code in various programming languages, expanding the potential of malicious actors with little IT expertise to transform natural language into malware.

“Importantly, security measures designed to prevent ChatGPT from producing potentially harmful code only work if the model understands what it is being asked to do. If prompts are broken down into separate steps, it is far too easy to circumvent these safety measures,” the Check Point Research report concludes.

Real cases of threats to cybersecurity

On December 29, 2022, a well-known hacking forum saw the emergence of a post titled “ChatGPT – Advantages of Malware”. The author revealed that they had been using ChatGPT to recreate malware strains and techniques described in research papers and write-ups of common malware. To illustrate the point, they shared the Python code for an infostealer that searches for popular file formats, copies them to a random directory within the Temp folder, compresses them into a ZIP file, and then transfers them to a predetermined FTP server.

CPR analyzed the code and verified that the hacker’s claims are accurate. The infostealer is rudimentary in nature and targets a dozen common file formats (e.g., MS Office documents, PDFs, and images) found throughout the operating system. If the malware detects any of these files, it copies them to a temporary folder, zips them, and transmits them online. It is worth noting that the hacker did not encrypt or otherwise protect the files in transit, leaving them exposed to interception by third parties.
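To show how rudimentary the described logic is, here is a minimal sketch of the two benign building blocks CPR identifies, scanning for target file types and compressing the matches into a ZIP in a temp directory. The exfiltration step is deliberately omitted, and the extension list and function names are illustrative, not taken from the actual sample:

```python
import os
import tempfile
import zipfile

# Illustrative subset of the "dozen common file formats" the report mentions.
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg", ".png"}

def collect_files(root: str) -> list[str]:
    """Walk a directory tree and return paths whose extension is targeted."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in TARGET_EXTENSIONS:
                matches.append(os.path.join(dirpath, name))
    return matches

def archive_files(paths: list[str]) -> str:
    """Compress the matched files into a ZIP inside a fresh temp directory."""
    staging = tempfile.mkdtemp()
    archive_path = os.path.join(staging, "collected.zip")
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in paths:
            zf.write(path, arcname=os.path.basename(path))
    return archive_path
```

Every step here is ordinary standard-library usage, which underscores CPR's point: the only thing separating this from legitimate backup code is the unencrypted upload bolted on at the end.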

The second program created by the attacker using ChatGPT is a basic Java snippet that downloads PuTTY, a widely used SSH and telnet client, and covertly executes it on the system using PowerShell. This code could be customized to download and execute any software, including popular malware variants.


On New Year’s Eve of 2022, a new instance of ChatGPT being utilized for illicit activities emerged, this time involving a different type of cybercrime. While the previous examples were malware-based, this instance was a conversation entitled “Abusing ChatGPT to create Dark Web Marketplaces scripts.” The forum user demonstrated how easily a Dark Web marketplace could be established using ChatGPT. The marketplace’s primary function in the criminal underground is to provide a platform for automated trade of illicit or stolen items like payment cards, hacked accounts, malware, drugs, and even ammunition, with all transactions conducted through cryptocurrencies. To showcase ChatGPT’s usefulness in this area, the user shared a piece of code that utilized third-party APIs to obtain real-time cryptocurrency (Monero, Bitcoin, and Ethereum) values for the marketplace’s payment system.
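Fetching live coin prices from a third-party API, as the forum user's snippet did, is itself trivial. Below is a minimal sketch using CoinGecko's public "simple price" endpoint as the illustrative API; the forum post did not specify which service its code actually called, so the endpoint, function names, and parameters here are assumptions:

```python
import json
import urllib.request

# CoinGecko's public endpoint, used here purely for illustration.
API_BASE = "https://api.coingecko.com/api/v3/simple/price"

def build_price_url(coins: list[str], currency: str = "usd") -> str:
    """Construct the query URL for the given coin identifiers."""
    return f"{API_BASE}?ids={','.join(coins)}&vs_currencies={currency}"

def fetch_prices(coins: list[str], currency: str = "usd") -> dict:
    """Fetch current prices, e.g. {'bitcoin': {'usd': ...}, 'monero': {...}}."""
    url = build_price_url(coins, currency)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode())

if __name__ == "__main__":
    print(fetch_prices(["bitcoin", "ethereum", "monero"]))
```

The ease of the task is the point: wiring a payment page to live cryptocurrency rates takes a dozen lines of boilerplate, which is exactly the kind of glue code ChatGPT produces readily.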

At the start of 2023, various threat actors began posting discussions on additional underground forums outlining the ways ChatGPT could be used in fraudulent activities. Many of these centered on generating random art with another OpenAI tool (DALL·E 2) and selling it through legitimate platforms like Etsy. In another example, a user explained how ChatGPT could be employed to generate an e-book or a short chapter on a particular topic, which could then be sold online.

Limitations and risks of ChatGPT

Experts have noted that ChatGPT is trained on a dataset that only extends to September 2021, and this applies both to the original chatbot based on the GPT-3.5 model and to the recently introduced GPT-4 version. This means cybercriminals cannot use the neural network to plan crimes based on current information. Additionally, GPT-family models still produce “hallucinations”: fundamentally incorrect statements generated by the algorithm. ChatGPT can invent a non-existent fact and insist on its correctness.

Despite that, the primary limitation on ChatGPT’s functionality is the one set by its creators. OpenAI engineers have incorporated a system to filter out responses that may be harmful or offensive, but it can be bypassed. There are already online communities where people look for ways to break GPT’s rules by asking the chatbot to act as a third party and perform tasks on their behalf.


ChatGPT is known as a versatile AI model that can be customized to carry out diverse tasks.

While the European Parliament refines its stance on AI legislation, policymakers are deliberating whether to impose rigorous requirements on such foundation models, including risk mitigation, reliability, and quality assurance.

Another risk is that these massive language models could become accessible on the dark web without any safeguards and be trained on extremely harmful data. The principal questions for the future are what kind of data will fuel these systems and how it can be regulated.