The AI Act was published on July 12, 2024. Although its obligations take effect in stages over time, it is already essential for cybersecurity in any field that uses or will use AI, such as healthcare.
One of the most common attack models, the evasion attack, can easily cause serious problems across a wide range of sectors. In this article, we’ll look at why you need to secure your AI systems.
Evasion attacks, or how to destabilize AI systems
An evasion attack occurs when the network is fed what is known as an “adversarial example”: a carefully perturbed input that is almost indistinguishable from the original to a human, but which completely destabilizes the AI system. In other words, the aim is to create the equivalent of an optical illusion for the system by introducing judiciously calculated “noise”, whatever the type of input the AI system takes (image, text, sound, etc.).
Adversarial examples exploit the difficulty machine learning models have in “generalizing”, i.e. in correctly modeling the requested task from a limited training set.
Examples of evasion attacks
Image classification
The best-known examples of evasion attacks are those applied to image classification. In our first, particularly well-known example, the aim is to change the classifier’s output (here from “panda” to “gibbon”) by adding a perturbation imperceptible to the human eye.
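To make the mechanism concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), the technique behind the original panda/gibbon demonstration. The model, image batch and label below are placeholders, and PyTorch is used purely for illustration; the point is that a single gradient step, scaled by a small epsilon, can flip the prediction while leaving the image visually unchanged to a human.

```python
# Minimal FGSM sketch (illustrative only): `model` is any differentiable
# classifier, `image` a (B, 3, H, W) batch in [0, 1], `label` the true classes.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Return an adversarial copy of `image` that stays within `epsilon`
    of the original pixel values."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label.
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Gradient of the loss with respect to the input pixels.
    loss.backward()

    # One step in the direction that most increases the loss,
    # scaled so the change remains imperceptible to a human.
    perturbation = epsilon * image.grad.sign()
    adversarial = (image + perturbation).clamp(0.0, 1.0)
    return adversarial.detach()
```

The perturbation is bounded by epsilon in every pixel, which is why the modified image looks identical to the original while the classifier’s output changes completely.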
Facial recognition
By placing “adversarial glasses” on people, it is possible to fool state-of-the-art facial recognition systems. The system targeted here is Megvii’s high-performance Face++. The example illustrates how affixing such a pair of glasses alters the output of the recognition system, giving the actress Reese Witherspoon the identity of the actor Russell Crowe. These adversarial glasses can also be physically printed, preventing an individual from being recognized if a photograph of him or her were used for recognition.
Person detection can also be the target of evasion attacks based on adversarial examples: a T-shirt bearing an “adversarial print” prevented the system from detecting the person wearing it.
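The printed glasses and T-shirts are physical “adversarial patches”. Below is a rough sketch of how such a patch is typically optimized: the patch pixels are treated as trainable parameters and updated by gradient descent to lower the detector’s confidence that a person is present. The `person_score` callable, the fixed patch placement and the PyTorch workflow are assumptions for illustration, not the exact procedure used in the studies mentioned above.

```python
# Rough sketch of adversarial patch optimization (assumed PyTorch workflow).
import torch

def optimise_patch(person_score, images, patch_size=64, steps=200, lr=0.01):
    """`person_score(batch)` is an assumed callable returning the detector's
    confidence that a person is present in each image; `images` is a
    (B, 3, H, W) batch of photos of people with pixel values in [0, 1]."""
    # The patch pixels are the only parameters being trained.
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimiser = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        # Paste the patch onto a fixed corner of every image. A real attack
        # would also randomize position, scale, rotation and lighting so the
        # printed patch keeps working in the physical world.
        patched = images.clone()
        patched[:, :, :patch_size, :patch_size] = patch.clamp(0.0, 1.0)

        # Minimizing the detector's confidence makes the person "disappear".
        loss = person_score(patched).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    return patch.detach().clamp(0.0, 1.0)
```

Once optimized, the patch is printed and worn; because it was trained over many images, it degrades detection regardless of who wears it.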
Detecting and reading road signs
Several studies have shown that it is also possible to carry out evasion attacks on traffic signs. This type of attack is likely to have far-reaching implications for the safety and security of connected and autonomous vehicles. In the example given, graffiti-like markings on a stop sign cause the system to interpret it not as a stop sign but as a 45 mph speed-limit sign. This also explains why AI systems in autonomous vehicles are considered high-risk systems: a change in their behavior can have very serious consequences for drivers, passengers and pedestrians alike.
A practical example: evasion attacks in the healthcare sector
Let’s imagine that an attacker manages to break into the information system of a hospital that uses an AI system to help detect cancerous cells in images from an MRI machine. If the attacker gained access to the database of images the system is about to analyze, they could replace an image with its adversarial counterpart. In this case, they could cause the AI to detect a non-existent tumor or, worse still, to miss a tumor that is actually present in the real image.
Here, then, we see the danger of evasion attacks, which can completely alter the result produced by an AI system. This is why securing AI systems is a major digital challenge, and why AI cannot and will not be able to replace humans for some time to come. Security by design isn’t just good practice; it is necessary.
Sources:
https://linc.cnil.fr/petite-taxonomie-des-attaques-des-systemes-dia
https://inria.hal.science/hal-03619035/document
https://www.riskinsight-wavestone.com/en/2023/06/attacking-ai-a-real-life-example/
https://www.ibm.com/docs/en/watsonx/saas?topic=atlas-evasion-attack