OPAD Attack on AI
Researchers have discovered a new adversarial attack that can fool AI systems. The attack, called the OPtical ADversarial attack (OPAD), relies on three main components – a camera, a low-cost projector, and a computer.
OPAD is built around this low-cost projector-camera system, which projects calculated patterns onto 3D objects to modify their appearance.
To perform the attack, the researchers changed how existing objects appear to an AI system. For example, by projecting specifically calculated patterns onto a basketball, they made a classifier identify it as something else.
OPAD is non-iterative and can target real 3D objects in a single shot. It can be used to launch untargeted, targeted, black-box, and white-box attacks.
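The single-shot idea can be illustrated with a generic one-step sign-gradient perturbation, in the spirit of the fast gradient sign method. This is only a sketch: the toy logistic classifier, the input, and the perturbation budget below are hypothetical, not OPAD's actual projector-camera optimization.

```python
import numpy as np

# Hypothetical toy classifier: logistic score of w.x + b (not the OPAD model).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def score(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=16)   # "clean" input (stand-in for a camera image)
eps = 0.5                 # perturbation budget per element

# Untargeted, non-iterative attack: one sign-gradient step that pushes the
# score away from the currently predicted class.  For a linear score the
# gradient w.r.t. x is just w.
grad = w if score(x) >= 0.5 else -w
x_adv = x - eps * np.sign(grad)   # single shot: step against the prediction

print(score(x), score(x_adv))
```

The point of the sketch is that a single bounded step, computed from the model's gradient, is enough to move the prediction – no iterative refinement is needed.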
OPAD is the first method that explicitly models the environment and instrumentation; the adversarial loss function in the OPAD optimization accounts for both. Although the projected perturbations are clearly visible to human observers, a critical feature of the attack is that no physical access to the objects is required. OPAD can transform known digital attack results into attacks on real 3D objects.
The feasibility of the attack is limited mainly by the surface material of the object and the saturation of its color. OPAD could be used to fool self-driving cars, whether as a prank or to cause intentional accidents – for instance, by making a STOP sign appear as a speed-limit sign. AI-powered security cameras can likewise be fooled, with serious consequences.
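Why surface material limits feasibility can be seen from a simple radiometric-compensation model, a standard idea in projector-camera systems: to make a surface show a target intensity, the projector must output roughly (target − ambient) / reflectance, and that output is physically bounded. The function and values below are illustrative assumptions, not OPAD's actual calibration.

```python
import numpy as np

def compensate(target, ambient, reflectance):
    """Projector output needed so the camera sees `target` intensity.

    Simplified model: observed = ambient + reflectance * projected.
    The projector can only emit values in [0, 1], so dark or weakly
    reflective surfaces make some targets unreachable.
    """
    p = (target - ambient) / np.clip(reflectance, 1e-3, None)
    return np.clip(p, 0.0, 1.0)

target = np.array([0.8, 0.2, 0.5])        # desired appearance per channel
ambient = np.array([0.1, 0.1, 0.1])       # ambient light contribution
reflectance = np.array([0.9, 0.4, 0.05])  # last channel: dark, matte surface

p = compensate(target, ambient, reflectance)
achieved = ambient + reflectance * p
print(achieved)  # third channel saturates far below its target
```

In this toy model the first two targets are reached exactly, while the low-reflectance channel saturates at the projector's maximum output – mirroring the article's point that surface material and color saturation bound what the attack can achieve.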
The successful demonstration of OPAD shows that an optical system can be used to subvert face recognition and other surveillance tasks. It also shows that organizations developing AI technologies should stay alert to security problems arising within the AI models themselves, and should invest more in the security and testing of AI systems before real-world use.