
The popularity of ChatGPT has sparked a boom in 'generative artificial intelligence (AI)' products that can create new passages of text, images, and other media. These tools have raised concerns about their propensity to produce falsehoods that are hard to spot because of the systems' strong command of human grammar, as well as concerns about fake content and a rise in cybercrime.
WormGPT, a ChatGPT alternative, has made those fears concrete: cybercriminals are using the AI tool to launch sophisticated phishing attacks.
Like ChatGPT, WormGPT is an AI model built on a generative pre-trained transformer, the GPT-J language model, and is designed to produce human-like text. Unlike ChatGPT or Google's Bard, WormGPT has no safety guardrails to stop it from responding to malicious requests.
WormGPT enables a range of illegal activity. It can write malware in Python and compose persuasive, sophisticated emails for phishing or business email compromise (BEC) attacks.
This means cybercriminals can create convincing fake emails that target specific individuals in phishing attacks. In short, WormGPT is ChatGPT without ethical boundaries.
The research team also evaluated the potential risks associated with WormGPT, focusing on BEC attacks: they instructed the tool to generate an email designed to pressure an unsuspecting account manager into paying a fraudulent invoice.
Risks of Using WormGPT
- Legal consequences
- Malware generation and phishing campaigns
- Sophisticated Cyberattacks
The emergence of WormGPT serves as a grim reminder of the dangers generative AI programs may pose as they continue to mature. Cybersecurity experts are increasingly concerned about the proliferation of such malicious tools and emphasize the urgent need for robust countermeasures against the growing cybercrime threat.
As the battle between cybercriminals and cybersecurity professionals continues, the development and deployment of AI models like WormGPT underscore the need for heightened vigilance and collaborative efforts to safeguard individuals, businesses, and societies from the escalating risks of cybercrime.
Preventing AI-Generated Phishing Attacks
- Email verification: There needs to be a stringent email verification process. Because AI tools can produce highly persuasive messages, we need to carefully check sender addresses, reply-to fields, dates, and other header details, and confirm that authentication checks such as SPF, DKIM, and DMARC pass (see the sketch after this list).
- Firewalls: High-quality firewalls act as a buffer between you, your computer, and outside intruders. We should use two kinds: a desktop firewall and a network firewall.
- Be informed about phishing techniques: We need to stay aware of new phishing scams as they are developed and keep up with the techniques currently in circulation.
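
As one concrete illustration of the email verification step above, the short Python sketch below parses a message's Authentication-Results header and flags emails that fail SPF, DKIM, or DMARC checks. It uses only the standard library; the file name suspicious_message.eml and the simple header parsing are illustrative assumptions, since mail providers format this header in slightly different ways.

```python
# Minimal sketch, assuming the raw message is saved as an .eml file.
# The Authentication-Results header (RFC 8601) format varies between
# providers, so the string parsing here is intentionally simple and
# should be treated as illustrative, not production-ready.
from email import policy
from email.parser import BytesParser


def check_email_authentication(eml_path: str) -> dict:
    """Return SPF/DKIM/DMARC verdicts parsed from Authentication-Results."""
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    verdicts = {"spf": "missing", "dkim": "missing", "dmarc": "missing"}
    for header in msg.get_all("Authentication-Results", []):
        for clause in str(header).split(";"):
            clause = clause.strip().lower()
            for mechanism in verdicts:
                if clause.startswith(mechanism + "="):
                    # e.g. "spf=pass (sender IP is ...)" -> "pass"
                    verdicts[mechanism] = clause.split("=", 1)[1].split()[0]

    # A From/Reply-To mismatch is another common phishing signal worth surfacing.
    verdicts["from"] = str(msg.get("From", ""))
    verdicts["reply_to"] = str(msg.get("Reply-To", ""))
    return verdicts


if __name__ == "__main__":
    # "suspicious_message.eml" is a hypothetical file name used for illustration.
    result = check_email_authentication("suspicious_message.eml")
    print(result)
    if any(result[m] != "pass" for m in ("spf", "dkim", "dmarc")):
        print("Warning: message failed one or more authentication checks.")
```

A check like this cannot catch every AI-generated lure, but it cheaply surfaces the spoofed or look-alike senders that phishing and BEC emails typically rely on.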