Description

In this work, we investigated how a deep learning model can be fooled into misdetecting large objects, such as aircraft, by placing a relatively small sticker on top of the object.

Problem Context

Deep learning has become increasingly popular in recent years due to its unparalleled performance, especially on object detection in images. A deep learning detector is created by leveraging many training examples, from which the distinctive features of a certain object are learned automatically. These deep learning detectors can be used in various contexts. In defense, deep learning-based object detectors are used for detecting military assets on the ground in drone surveillance footage. Traditionally, military assets are hidden from sight through camouflage, for example by using camouflage nets. Large assets such as planes or vessels, however, are difficult to conceal with traditional camouflage nets. An alternative method for camouflaging military assets in drone surveillance footage is therefore required.
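
To give an impression of how such a detector is used in practice, the sketch below runs a pretrained, off-the-shelf detector on a single image. This is purely illustrative: the torchvision Faster R-CNN model and the file name "drone_frame.jpg" are assumptions for this sketch, not the model or data used in this project.

  import torch
  from torchvision.io import read_image
  from torchvision.models.detection import fasterrcnn_resnet50_fpn
  from torchvision.transforms.functional import convert_image_dtype

  # Illustrative only: a generic pretrained detector, not the project's model.
  model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

  # "drone_frame.jpg" is a hypothetical example image.
  image = convert_image_dtype(read_image("drone_frame.jpg"), torch.float)

  with torch.no_grad():
      detections = model([image])[0]   # dict with "boxes", "labels", "scores"

  for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
      if score > 0.5:                  # keep only confident detections
          print(label.item(), round(score.item(), 2), box.tolist())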

Solution

An alternative method of camouflage presents itself in misleading the automatic object detectors themselves. Deep learning detectors are trained to respond to specific pixel patterns on objects, and it has recently been discovered that small perturbations of these pixel values can force a detector into misdetecting objects. Specifically, patches can be placed on top of objects to produce such perturbations, which means patches can be used to camouflage objects from deep learning detectors. It has already been shown that such patch-based adversarial attacks can suppress the detection of people. In this project, we test whether patches can also camouflage larger military assets, such as airplanes, in surveillance drone footage. To this end, patches are added to surveillance images of large military assets and are varied in position, size, and saliency, as well as in the number of patches placed, to find the optimal camouflage. The most effective pattern for the patches is computed by optimizing for maximal suppression of the object detections. An important requirement for this optimization is that the deep learning detector model is known.
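
A minimal sketch of this patch optimization is given below, assuming a white-box setting in which the detector is a differentiable PyTorch module. The names detector, apply_patch and optimize_patch are hypothetical, the detector is assumed to return a per-image airplane confidence, and practical details such as scaling and rotating the patch to match the asset's pose are omitted.

  import torch
  import torch.nn.functional as F

  def apply_patch(images, patch, positions):
      """Paste the patch into each (C, H, W) image at its (x, y) offset, differentiably."""
      _, _, H, W = images.shape
      _, h, w = patch.shape
      patched = []
      for img, (x, y) in zip(images, positions):
          pad = (x, W - x - w, y, H - y - h)         # left, right, top, bottom
          canvas = F.pad(patch, pad)                 # patch placed on a zero canvas
          mask = F.pad(torch.ones_like(patch), pad)  # 1 inside the patch area, 0 elsewhere
          patched.append(img * (1.0 - mask) + canvas)
      return torch.stack(patched)

  def optimize_patch(detector, images, positions, steps=500, lr=0.03):
      """Optimize patch pixels to maximally suppress the airplane detections."""
      patch = torch.rand(3, 64, 64, requires_grad=True)   # random starting pattern
      optimizer = torch.optim.Adam([patch], lr=lr)
      for _ in range(steps):
          patched = apply_patch(images, patch.clamp(0.0, 1.0), positions)
          scores = detector(patched)   # assumed: airplane confidence per image
          loss = scores.mean()         # suppression = minimize detector confidence
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
      return patch.detach().clamp(0.0, 1.0)

Varying the patch position, size, and number of patches then amounts to rerunning this optimization with different positions and patch shapes and comparing how strongly the detections are suppressed.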

Results

Our results show that patches can prevent deep learning detectors from detecting large military assets, even when the patches cover only a small part of the asset. In fact, planes were already camouflaged from the automatic detectors by a patch covering roughly 10% of the plane's area. Although more research, such as field tests, is required to validate the approach, adversarial patch attacks form a realistic alternative to traditional camouflage and should therefore be taken into account in the automated analysis of aerial surveillance imagery.

Contact

  • Ajaya Adhikari, Data Scientist, TNO, e-mail: ajaya.adhikari@tno.nl
  • Richard den Hollander, Computer Vision Scientist, TNO, e-mail: richard.denhollander@tno.nl