
I am a medical doctor working on methodological aspects of health-oriented ML. Reproducibility, replicability, and generalisability are critical in this area. Among many questions, some are raised by adversarial attacks (AA).

My question should be considered from a literature-review point of view: suppose I want to assess an algorithm with respect to AA:

  • is there a systematic methodological approach relating the format of the data, the type of model, and the AA? Conceptually, is there a taxonomy of AA? If so, practically, are some AA considered gold standards?

1 Answer


There are already a couple of papers in the literature that attempt to provide a taxonomy and survey of adversarial attacks. I will just list the two that I consider reliable enough to use as references.

Needless to say, there are different adversarial attacks, such as the Fast Gradient Sign Method (FGSM), and they can be classified into different categories, such as evasion attacks or poisoning attacks. You can find a lot more info in these cited papers.
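To make the FGSM idea concrete, here is a minimal sketch on a toy linear classifier with a logistic loss, using only NumPy. This is purely illustrative (real attacks target neural networks via an autodiff framework such as PyTorch or TensorFlow, and the model, inputs, and epsilon here are made up): the attack perturbs the input in the direction of the sign of the loss gradient with respect to the input.

```python
import numpy as np

# Toy setup: linear model score = w . x, label y in {-1, +1},
# logistic loss L = log(1 + exp(-y * w.x)).

def logistic_loss_grad_x(w, x, y):
    """Gradient of the logistic loss with respect to the INPUT x
    (not the weights) -- this is what FGSM needs."""
    margin = y * np.dot(w, x)
    return -y * w / (1.0 + np.exp(margin))

def fgsm(w, x, y, eps):
    """Fast Gradient Sign Method: take one step of size eps
    in the sign of the input gradient to increase the loss."""
    grad = logistic_loss_grad_x(w, x, y)
    return x + eps * np.sign(grad)

# Hypothetical example values.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.4, 1.2])   # correctly classified as y = +1
y = 1

x_adv = fgsm(w, x, y, eps=0.5)
print("clean margin:", y * np.dot(w, x))        # positive (correct)
print("adversarial margin:", y * np.dot(w, x_adv))  # smaller, may flip sign
```

The same sign-of-gradient step is what FGSM applies to an image and a deep network's loss; libraries such as CleverHans or Foolbox implement this and many of the other attacks covered in the surveys.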
