March 29, 2023

Researchers presented a technique to deliver malware through neural network models, evading detection without impacting the performance of the network.

Tests conducted by the experts demonstrated that 36.9MB of malware could be embedded into a 178MB AlexNet model with less than 1% accuracy loss, making the threat completely transparent to antivirus engines.

Experts believe that with the massive adoption of AI, malware authors will look with increasing interest at the use of neural networks. This work could provide a reference scenario for defenses against neural-network-assisted attacks.

The experts were able to select a layer within an already-trained model and then embed the malware into that layer's parameters.
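
The article doesn't include the researchers' code, but the general idea can be sketched in a few lines of numpy: overwrite the low-order bytes of a layer's float32 parameters with payload bytes while preserving each weight's sign and high exponent bits. The function name embed_payload, the 3-bytes-per-parameter split, and the little-endian assumption are illustrative choices, not details confirmed by the article.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the three low-order bytes of each float32
    weight, leaving the most significant byte (sign and high exponent
    bits) untouched so every perturbed weight keeps roughly the same
    order of magnitude. Assumes a little-endian host."""
    flat = weights.astype(np.float32).ravel()      # work on a copy
    if len(payload) > flat.size * 3:
        raise ValueError("selected layer is too small for the payload")
    padded = payload + b"\x00" * (-len(payload) % 3)
    data = np.frombuffer(padded, dtype=np.uint8).reshape(-1, 3)
    raw = flat.view(np.uint8).reshape(-1, 4)       # 4 raw bytes per float32
    raw[: len(data), :3] = data                    # overwrite low-order bytes
    return flat.reshape(weights.shape)
```

Because only the lower bytes change, each doctored weight stays within roughly the same order of magnitude as the original, which is what lets a large layer absorb tens of megabytes with little effect on accuracy.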

If the model doesn't have enough neurons to embed the malware, the attacker may opt to use an untrained model, which has spare neurons. The attacker would then train that model on the same dataset used for the original model in order to produce a model with the same performance.
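
Capacity drives that choice, and it is easy to estimate under the same hypothetical 3-bytes-per-parameter scheme: a layer can hold its parameter count times three bytes. A small helper (again illustrative, with the state dict treated as a plain name-to-array mapping) could pick the first layer that fits, returning None in the case where the attacker would need a larger, freshly trained model:

```python
def pick_layer(state_dict: dict, payload_len: int):
    """Return the name of the first float32 layer large enough to hold
    `payload_len` bytes at 3 bytes per parameter, or None if no layer
    fits (the case where an attacker might fall back to training a
    larger model from scratch)."""
    for name, tensor in state_dict.items():
        arr = np.asarray(tensor)
        if arr.dtype == np.float32 and arr.size * 3 >= payload_len:
            return name
    return None
```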

Experts pointed out that the technique is only effective for hiding the malware, not for executing it. In order to run the malware, it must be extracted from the model using a dedicated application, which could itself be hidden inside the model only if the model is large enough to contain it.
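
Under the byte-replacement scheme sketched above, extraction is the mirror image: read the three low-order bytes back out of each float32 parameter. The payload length is assumed to be known to the extractor (in practice it might travel in a small header prepended to the payload); extract_payload is a hypothetical name.

```python
def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` payload bytes that embed_payload hid in the
    three low-order bytes of each float32 weight."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8)
    return raw.reshape(-1, 4)[:, :3].tobytes()[:length]
```

On its own this recovers inert bytes; turning them back into running malware still requires writing them out and executing them, which is exactly the step the countermeasures below target.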

As a possible countermeasure, experts recommend deploying security software on end-user devices that can detect the extraction of the malware from the model, as well as its assembly and execution. Experts also warned of supply chain pollution targeting the providers of the original models.
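
Behavioral detection of the extract-assemble-execute chain is a job for endpoint security tooling, but the supply-chain half of the warning has a simpler first line of defense: verify that a downloaded model matches the digest published by its provider. A minimal sketch (verify_model is a hypothetical helper, not a tool named in the article):

```python
import hashlib

def verify_model(path: str, expected_sha256: str) -> bool:
    """Hash a downloaded model file and compare it with the digest the
    model provider published; a mismatch means the file was altered
    somewhere in the supply chain."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()
```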

The model’s structure remains unchanged when the parameters are replaced with malware bytes, because the malware is disassembled across the neurons. Since the malware's characteristic signatures are no longer present, it can evade detection by common antivirus engines. And because neural network models are robust to small changes, there is no obvious performance loss when the embedding is well configured, the researchers conclude.
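
A quick round trip with the hypothetical helpers above makes the "structure unchanged" point concrete: the doctored tensor has the same shape and dtype as the original, and the payload survives intact.

```python
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(1024, 1024)).astype(np.float32)
payload = bytes(range(256)) * 64          # stand-in for malware bytes

w_evil = embed_payload(w, payload)
assert w_evil.shape == w.shape and w_evil.dtype == w.dtype  # structure intact
assert extract_payload(w_evil, len(payload)) == payload     # payload survives
```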
