WormGPT: Dark web’s new AI weapon for cyberattacks

Cybersecurity experts have warned that a new generative AI tool called WormGPT, which is being sold on the dark web, poses a serious threat to businesses and individuals. 

What is WormGPT?

According to Popular Mechanics, WormGPT is an AI module based on GPT-J, an open-source large language model released in 2021 that can generate human-like text, similar to ChatGPT. Unlike ChatGPT, which is run by OpenAI, a private research lab that imposes anti-abuse restrictions on its platform, such as blocking profanity, hate speech, and malware code, WormGPT is open to anyone who wants to access, share, or modify its source code. This means that WormGPT can do whatever hackers want it to do.

Writing targeted business email compromise (BEC) attacks is just one of WormGPT's many capabilities. It also offers advanced features, such as unlimited character support, chat memory retention, and code formatting. Its ability to write and format code is especially dangerous, as it can be used to create malware.

The creator of WormGPT is a 23-year-old Portuguese hacker who goes by the name of “Last.” He claims that his tool can do “everything blackhat related that you can think of” and that it can help anyone to make money online by engaging in illegal activities. He has posted an advertisement on the dark web, where he offers access to WormGPT for a fee.

As The Sydney Morning Herald reports, Patrick Butler, managing partner at Tesserent, an Australian cyber firm, said many criminals have bought or rented WormGPT and are using it to launch sophisticated phishing campaigns, identity theft, and malware attacks. He noted that WormGPT can generate phishing emails in different languages with flawless grammar and spelling, making them hard to distinguish from legitimate ones. He also said that WormGPT can create new variants of malware that can evade some traditional detection tools, and that it can even help hackers exploit known vulnerabilities in systems.

Butler warned that developers should not use legitimate AI tools to review their code, as their code may be used to train AI models that hackers can access, giving them more insight into organizational systems. He also said that the number of threat actors would likely increase as generative AI made cyberattacks more accessible and easier to execute. He noted that the Tesserent Security Operations Centre had observed a rise in phishing and malicious email activities targeting Australian organizations, especially after the emergence of WormGPT and similar tools.

Butler said that WormGPT is one of at least six generative AI tools available on the dark web, alongside FraudGPT, EvilGPT, DarkBard, WolfGPT, and XXXGPT, and that more are being developed. He said these tools are less potent than public-facing tools like ChatGPT and Bard, but they are spreading fast, making them difficult to track and stop.

Scott Jarkoff, director of intelligence strategy, APJ & META, at CrowdStrike, another cybersecurity firm, said that the situation was worsened by the ongoing conflict in the Middle East, which was creating more opportunities for hackers to lure victims. He said that hacking groups from Russia, China, North Korea, and Iran, the “big four” of cyber warfare, used generative AI tools to craft attacks in perfect English.

He said that the Israel-Hamas conflict was providing a perfect pretext for hackers to ask people to visit malicious websites or donate to fake causes. He urged everyone to be more careful and vigilant about cybersecurity and use reliable tools and services to protect themselves.

Dan Schiappa, chief product officer at Arctic Wolf, another cyber vendor, said that generative AI was not only being used to create realistic phishing emails but also to supercharge social engineering. He said that hackers were using AI to create fake accounts on social media and other platforms and to spread misinformation and propaganda.

He said that authorities in China recently arrested a man for using ChatGPT to create a fake news story about a train derailment, and that he would not be the last person to use the technology to create chaos. He said that generative AI was a double-edged sword that could be used for good or evil.

He said that businesses and individuals should be aware of the potential risks and benefits of generative AI and that they should verify the sources and authenticity of the information they receive. He also said that they should use advanced cybersecurity solutions that can detect and prevent generative AI attacks.

