
An attack on machine learning models confuses robots


A set of experimental images with art stickers at different distances and angles: (a) 5 ft, 0°; (b) 5 ft, 15°; (c) 10 ft, 0°; (d) 10 ft, 30°; (e) 40 ft, 0°. The deception works at any distance and at any angle: instead of the stop sign, the machine learning system sees a "Speed Limit 45" sign

While some scientists are improving machine learning systems, others are improving methods of deceiving them.

As is well known, small purposeful changes to a picture can "break" a machine learning system so that it recognizes a completely different image. Such "Trojan" pictures are called adversarial examples, and they represent one of the known limitations of deep learning.

To construct an adversarial example, one maximizes, with respect to the input pixels, some quantity inside the network, for example the score of a target class or the activation of a particular convolutional filter. Ivan Evtimov from the University of Washington, together with colleagues from the University of California, Berkeley, the University of Michigan, and Stony Brook University, has developed a new attack algorithm: Robust Physical Perturbations, or RP2. It very effectively defeats the vision of self-driving cars, robots, multicopters, and any other robotic systems that try to navigate the surrounding space.
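To illustrate the general principle (this is not the authors' exact method), such a perturbation can be found by taking a gradient step on the classifier's loss with respect to the input pixels, as in the well-known fast gradient sign method. A minimal sketch in Python, assuming a trained PyTorch classifier `model` and a normalized image tensor:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, eps=0.03):
        # One-step attack: nudge every pixel in the direction that
        # increases the classifier's loss on the true label.
        # `image` is assumed to be a (1, C, H, W) tensor in [0, 1].
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        adversarial = image + eps * image.grad.sign()
        return adversarial.clamp(0, 1).detach()  # keep pixels valid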

Unlike previous studies, the authors concentrated on changing the objects themselves rather than the background. Their task was to find the smallest possible perturbation (delta) that would defeat a machine learning classifier trained on the LISA data set of road-sign images. The authors also took their own series of photographs of road signs on the street under varying conditions (distance, angle, illumination) and used them to supplement the LISA training set.
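In simplified form (the paper's full objective additionally includes a mask and a printability term), this search can be written as an optimization over photos x taken under varying physical conditions, where y* is the attacker's target label, J is the training loss, and λ trades off perturbation size against attack success:

    \operatorname*{arg\,min}_{\delta}\;
      \lambda \lVert \delta \rVert_{p}
      + \mathbb{E}_{x \sim X}\,
        J\bigl(f_{\theta}(x + \delta),\, y^{*}\bigr)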

After computing such a delta, a mask was identified: the place (or several places) in the image that most reliably perturbs the machine learning system (machine vision). A series of experiments was conducted to verify the results. Most of the experiments were carried out on the stop sign ("STOP"), which the researchers, with several innocuous-looking manipulations, turned into a "SPEED LIMIT 45" sign as far as machine vision is concerned. The technique can be applied to any other sign; the authors later also tested it on a turn sign.
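One plausible way to localize such a mask (a sketch of the idea, not necessarily the authors' exact procedure) is to first solve for a dense perturbation and then keep only the pixels where its magnitude is largest, i.e. the regions where stickers would go:

    import torch

    def perturbation_mask(delta, keep_fraction=0.1):
        # delta: a (C, H, W) perturbation found by the optimizer.
        magnitude = delta.abs().sum(dim=0)       # per-pixel attack strength
        k = int(keep_fraction * magnitude.numel())
        threshold = magnitude.flatten().topk(k).values.min()
        return (magnitude >= threshold).float()  # 1.0 where stickers go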

The team developed two versions of the attack on machine vision systems that recognize road signs. The first attack is a small, unobtrusive change spread across the entire area of the sign. Using the Adam optimizer, they minimized the perturbation, creating targeted adversarial examples aimed at specific road signs. In this case, the machine learning system is deceived by minimal changes to the picture, and people will generally notice nothing. The effectiveness of this type of attack was tested on printed posters with small alterations (the researchers first verified that the machine vision system correctly recognized the unaltered posters).
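A hedged sketch of what such an Adam-based optimization loop could look like in PyTorch; `model`, the batch of sign photos `images`, and the class index `target` (here, "SPEED LIMIT 45") are assumptions, and the L1 penalty keeps the perturbation small and inconspicuous:

    import torch
    import torch.nn.functional as F

    def targeted_delta(model, images, target, steps=500, lam=1e-2, lr=0.01):
        # Find one perturbation that pushes every photo in `images`
        # toward the target class while staying as small as possible.
        delta = torch.zeros_like(images[0], requires_grad=True)
        labels = torch.full((len(images),), target, dtype=torch.long)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = model((images + delta).clamp(0, 1))
            loss = F.cross_entropy(logits, labels) + lam * delta.abs().sum()
            loss.backward()
            opt.step()
        return delta.detach()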

The second type of attack is camouflage. Here the perturbation imitates either vandalism or art graffiti, so that the altered sign does not attract the attention of people nearby. A human driver at the wheel immediately sees a left-turn sign or a stop sign, while the robot sees a completely different sign. The effectiveness of this type of attack was tested on real road signs with stickers pasted onto them. The graffiti camouflage consisted of stickers forming the words LOVE and HATE; the abstract-art camouflage consisted of four rectangular black-and-white stickers.

The results of the experiment are shown in the table. In all cases, it reports how effectively the attack fools the machine learning classifier into recognizing the modified "STOP" sign as a "SPEED LIMIT 45" sign. Distance is given in feet, viewing angle in degrees. The second column shows the second class that the machine learning system sees in the modified sign. For example, from a distance of 5 feet (152.4 cm) at an angle of 0°, the abstract-art camouflage gives the following recognition results for the "STOP" sign: with 64% confidence it is recognized as "SPEED LIMIT 45", and with 11% confidence as "Lane Ends".


Legend: SL45 = Speed Limit 45, STP = Stop, YLD = Yield, ADL = Added Lane, SA = Signal Ahead, LE = Lane Ends
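The confidence values in the table are simply the classifier's softmax probabilities. Reading off the top two predictions for a sign, as in the 64% / 11% example above, looks roughly like this (the `CLASSES` list and its ordering are assumed purely for illustration):

    import torch.nn.functional as F

    CLASSES = ["Speed Limit 45", "Stop", "Yield",
               "Added Lane", "Signal Ahead", "Lane Ends"]

    def top_predictions(logits, k=2):
        # Convert raw classifier outputs into (class, confidence) pairs.
        probs = F.softmax(logits, dim=-1)
        values, indices = probs.topk(k)
        return [(CLASSES[int(i)], float(v)) for v, i in zip(values, indices)]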

Perhaps mankind will need such a system (with appropriate changes) in the future; for now, it can be used to probe imperfect machine learning and computer vision systems.

The scientific work was published on July 27, 2017 on the preprint server arXiv.org (arXiv:1707.08945).