ChatGPT Exploited by Threat Actors
Threat actors are using ChatGPT to develop powerful hacking tools and create new chatbots designed to mimic young girls to lure targets.
ChatGPT can also generate malicious code, such as software that monitors users’ keystrokes, and even ransomware. ChatGPT was developed by OpenAI as a conversational interface for its large language model (LLM), and its code generation capability can make it significantly easier for threat actors to launch cyberattacks.
Scammers have also been seen exploiting ChatGPT to create convincing personas, including female personas impersonating young girls, to gain trust and hold lengthier conversations with their targets.
An attacker could likewise use ChatGPT to craft an authentic-looking spear-phishing email that delivers a reverse shell capable of accepting commands in English.
In one forum post, a hacker shared Android malware code written by ChatGPT that could steal desired files, compress them, and exfiltrate them over the web. Similarly, another user shared Python code, generated via the OpenAI app, that is capable of encrypting files. Such code could be repurposed and modified to encrypt a device without any user interaction, much like ransomware.
Threat actors can also use ChatGPT to build bots and websites that trick users into sharing their information, enabling highly targeted social engineering scams and phishing campaigns.
OpenAI is yet to respond to these findings. This research was documented by researchers from Check Point.