October 4, 2023

Looking to exploit the tendency of people to trust human faces, threat actors are integrating AI-generated personas into YouTube videos to distribute stealer malware and launch phishing campaigns.

Researchers report observing a month-on-month increase in the number of YouTube videos spreading stealer malware such as Vidar, RedLine, and Raccoon since November 2022. The threat actors target the online video-sharing platform because of its roughly 2.5 billion monthly active users.


These videos lure users by posing as tutorials on downloading cracked versions of paid software such as Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD. To take over existing YouTube accounts, the threat actors rely on previous data leaks, phishing techniques, and stealer logs.

The threat actors target popular accounts with large subscriber bases so they can reach a wide audience in a short time. Subscribers to such accounts are typically notified of a new upload, which lends the video an air of legitimacy. Many channel owners report the unusual activity to YouTube and regain access to their accounts within a few hours, but by then hundreds of users may already have fallen prey.

The use of AI to generate digital human personas is a notable touch, especially if they are created with the facial symmetry that people generally find attractive and therefore reassuring.


Threat actors can exploit this heavily, and the tools they are beginning to adopt should worry security teams, as attackers are advancing far more rapidly than most defenders.

This research was documented by researchers at CloudSEK.
