Suppose I have a private set of images containing some objects.
How do I make it very hard for neural networks (e.g. classifiers pretrained on ImageNet) to recognize these objects, while humans can still recognize them easily?
Suppose I also label these private images, e.g. a picture of a cat with the label "cat". How do I make it hard for an attacker to train their own neural network on my images and labels? Is it possible to perturb the images, with random transforms or per-image perturbations, so that an attacker could neither run an off-the-shelf classifier on these objects nor easily train a new one on my dataset, even if they had the labels?
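To make this concrete, here is a minimal sketch of the kind of per-image perturbation I have in mind (an FGSM-style pixel perturbation in PyTorch; the pretrained torchvision ResNet-18 is just an assumed stand-in surrogate for whatever model an attacker might use, not anyone's actual pipeline):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumed surrogate model: a pretrained ImageNet classifier from torchvision.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

# Perturbation is applied in [0, 1] pixel space; normalization happens
# only when feeding the surrogate model.
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def perturb(image_path, true_label, epsilon=4 / 255):
    """Add a small adversarial perturbation that degrades the surrogate
    model's prediction while staying nearly invisible to a human viewer."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(normalize(x))
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()

    # Step in the direction that increases the surrogate's loss,
    # then clamp back to valid pixel values.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
    return x_adv
```

The hope is that the change stays imperceptible to a human while degrading the model's predictions, but I don't know whether something like this still helps once the attacker retrains on my labeled images.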