I am a medical doctor working on methodological aspects of health-oriented ML. Reproducibility, replicability, and generalisability are critical in this area. Among the many open questions, some are raised by adversarial attacks (AA).
My question should be considered from a literature-review point of view. Suppose I want to evaluate an algorithm with respect to AA:
- Is there a systematic methodological approach relating the format of the data, the type of model, and the AA?
- Conceptually, is there a taxonomy of AA?
- If so, practically, are some AA considered gold standards?
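For concreteness, this is the kind of check I have in mind. The Fast Gradient Sign Method (FGSM) is often cited as a baseline attack; below is a minimal sketch of it against a toy logistic-regression model (the weights, bias, and input here are made up purely for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed model: logistic regression p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """FGSM: perturb x by eps in the direction of the sign of the
    gradient of the loss w.r.t. the input. For binary cross-entropy
    on logistic regression, that gradient is (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2])  # clean input, true label y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.5)

print(predict(x))      # above 0.5: correctly classified
print(predict(x_adv))  # below 0.5: prediction flipped by the attack
```

The question, then, is whether the literature prescribes which such attacks (FGSM, PGD, etc.) constitute a standard battery for a given data format and model type, or whether the choice is ad hoc.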