
Suppose I have a private set of images containing some objects.

How do I

  1. Make it very hard for neural networks, such as those trained on ImageNet, to recognize these objects, while still allowing humans to recognize them?

  2. Suppose I label these private images (e.g., a picture of a cat with the label "cat"). How do I make it hard for an attacker to train their neural network on my labels? Is it possible to somehow fool a neural network so that an attacker couldn't easily train it to recognize these objects?

I'm thinking of something like random transforms, etc., so that an attacker couldn't use a neural network to recognize these objects, or even train one on my dataset if they had the labels.

Mithical
    Just curious, but what's the purpose of this set of images? Are you trying to make your own set of CAPTCHA images (or whatever the image versions of the anti-robot tests are called)? – Varun Vejalla Aug 10 '20 at 00:04

1 Answer


If the model is trained and held constant, then there are so-called adversarial attacks to modify images such that the model classifies them incorrectly (see Attacking Machine Learning with Adversarial Examples).
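
As a rough illustration of what such an attack looks like, here is a minimal FGSM-style (fast gradient sign method) sketch in PyTorch against a pretrained torchvision classifier. The model choice, the file name "cat.jpg", and the epsilon value are placeholders, not something the linked article prescribes.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A standard ImageNet-pretrained classifier; any torchvision model would do.
model = models.resnet18(pretrained=True).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "cat.jpg" is a placeholder for one of your private images.
img = preprocess(Image.open("cat.jpg")).unsqueeze(0)
img.requires_grad_(True)

# One gradient step that *increases* the loss on the model's own prediction (FGSM).
logits = model(img)
pred = logits.argmax(dim=1)
loss = F.cross_entropy(logits, pred)
loss.backward()

epsilon = 0.03  # illustrative perturbation budget; small enough to be barely visible
adversarial = (img + epsilon * img.grad.sign()).detach()

print("original prediction:", pred.item())
print("prediction after perturbation:", model(adversarial).argmax(dim=1).item())
```

Note that this only fools the specific model you computed the gradient against (and, to a lesser degree, similar ones); it is not a guarantee against an attacker who trains or fine-tunes their own model.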

However, if you want to make images that are untrainable, you are probably out of luck. Deep neural networks can learn to recognize even random images with random labels (see Understanding deep learning requires rethinking generalization), though if there's no rhyme or reason to the randomness, they won't generalize in meaningful ways.
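
To see what that memorization looks like, here is a small sketch (with illustrative sizes and hyperparameters) in which a tiny MLP is trained on completely random inputs and labels and still drives its training accuracy toward 100%.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
images = torch.rand(512, 3 * 32 * 32)   # random "images", no structure at all
labels = torch.randint(0, 10, (512,))   # random labels, unrelated to the images

model = nn.Sequential(nn.Linear(3 * 32 * 32, 512), nn.ReLU(), nn.Linear(512, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    loss = nn.functional.cross_entropy(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

acc = (model(images).argmax(dim=1) == labels).float().mean().item()
print(f"train accuracy on random labels: {acc:.2f}")  # approaches 1.0 by pure memorization
```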

alltom