Labels:
deep-learning object-detection adversarial-attack

Description

In this work we have investigated how a deep learning model can be fooled into misdetecting large objects, such as an aircraft, by placing a relatively small sticker on top of the object.

Deep learning has become increasingly popular in recent years due to its unparalleled performance, especially in object detection in images. A deep learning detector is created by leveraging many training examples, from which the distinctive features of a certain object are learned automatically. The detector is trained to respond to specific pixel patterns on the object. Recently, it has been discovered that small perturbations of the pixel values can force the detector into misdetecting objects. A sticker placed on top of an object in the scene can produce such a perturbation of the pixel values in the recording.

The most effective pattern for this sticker can be computed by optimizing for maximal suppression of object detections. An important requirement for this optimization is that the detector model is known (a white-box attack). This approach was introduced last year for the suppression of person detections. We have applied and extended it to fool an object detector in a drone surveillance scenario, where we investigated different sticker positions, sizes and configurations for optimal camouflage.
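The core of the approach can be illustrated with a short optimization loop. The sketch below is a minimal illustration under assumptions that are ours, not necessarily those of the original work: a pretrained torchvision Faster R-CNN stands in for the actual detector, the sticker is pasted at a single fixed position, and the suppression objective is simply the sum of the detector's confidence scores, which gradient descent drives towards zero.

```python
import torch
import torchvision

# White-box requirement: the detector model (architecture and weights) is known.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

# Trainable sticker pattern (here 50x50 pixels); values are clamped to [0, 1]
# when the patch is applied, since they must remain valid pixel intensities.
patch = torch.rand(3, 50, 50, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(image, patch, top, left):
    """Paste the sticker onto the image at a chosen position."""
    patched = image.clone()
    _, h, w = patch.shape
    patched[:, top:top + h, left:left + w] = patch.clamp(0, 1)
    return patched

# Stand-in scene; in practice this would be drone surveillance imagery
# containing the object to camouflage.
image = torch.rand(3, 480, 640)

for step in range(100):
    optimizer.zero_grad()
    patched = apply_patch(image, patch, top=200, left=300)
    detections = detector([patched])[0]  # dict with "boxes", "labels", "scores"
    scores = detections["scores"]
    if scores.numel() == 0:  # nothing detected anymore: suppression succeeded
        break
    # Maximal suppression of object detections: minimize the total confidence.
    loss = scores.sum()
    loss.backward()
    optimizer.step()
```

In practice, patch attacks of this kind are usually optimized over many images and over random patch transformations (position, scale, rotation) so that the sticker generalizes, often with regularization terms such as total variation for smoothness and a printability score; this is also where experimenting with different sticker positions, sizes and configurations comes in.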

Contact

  • Ajaya Adhikari, Data Scientist, TNO, e-mail: ajaya.adhikari@tno.nl
  • Richard den Hollander, Computer Vision Scientist, TNO, e-mail: richard.denhollander@tno.nl