
I have a number of input samples, where every input sample has both a label and a reference map. The reference map assigns a score to each location of the input sample; this score defines how much that location SHOULD contribute to the model's decision making with respect to the correct label.

A gradient-based saliency map defines how much a location of an input sample ACTUALLY contributes to the model's decision making with regard to the correct label.

(Saliency-map: https://arxiv.org/pdf/1312.6034.pdf)

I would like to introduce a penalty, based on the difference between the saliency map and the reference map, if the model does not focus on the areas that should be used to infer the label.

Hence there are two terms involved here: 1. a penalty based on the difference between the inferred label and the actual label (the normal approach in deep learning), and 2. a penalty based on the difference between the saliency map and the reference map.

I know how to calculate the saliency map; the question is more about how to construct an effective cost/loss function based on both the saliency map and the inferred label. (Currently I am using categorical cross-entropy without any extra penalty term.)
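
For context, this is roughly the kind of saliency computation I mean (a minimal PyTorch sketch; `model`, `x` and `y` stand in for my actual model, input batch and integer labels). The point that matters for the loss construction is that the gradient is taken with `create_graph=True`, so a penalty built from it can itself be back-propagated into the weights:

```python
import torch

def saliency_map(model, x, y):
    """Gradient-based saliency (Simonyan et al., 2013): the gradient of the
    correct-class score with respect to the input, taken per sample."""
    x = x.clone().requires_grad_(True)
    logits = model(x)                                   # shape: (batch, num_classes)
    correct_class_score = logits.gather(1, y.unsqueeze(1)).sum()
    # create_graph=True keeps the saliency differentiable, so a penalty
    # computed from it can still be trained through.
    grads, = torch.autograd.grad(correct_class_score, x, create_graph=True)
    return grads.abs()                                  # shape: (batch, *input_shape)
```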

Does anyone know if there has been any research done in this area (a cost function based on both the saliency map and the label), or have any paper suggestions related to this approach?

Wtt
  • Can you please describe in more detail what these saliency maps are or what they do? And what do you mean by "calculate the salience map every epoch while the model is being trained"? Isn't this the default behaviour? What loss function are you currently using? What task are you trying to solve? Edit your post to clarify all these points. – nbro Apr 11 '22 at 11:03
  • @nbro I have edited the question according to your request. – Wtt Apr 11 '22 at 12:32

1 Answer


You can use a Dice loss between the saliency map and the reference map, and add it as a penalty to your regular loss with a weight parameter: CrossEntropyLoss + weight * DiceLoss.
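
A minimal PyTorch sketch of that combination (the names `model`, `ref_map` and the value of `weight` are placeholders for your setup, and the saliency is computed as the gradient of the correct-class score, as in the question):

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(pred_map, ref_map, eps=1e-6):
    """Soft Dice loss between two non-negative maps, computed per sample."""
    pred = pred_map.flatten(1)
    ref = ref_map.flatten(1)
    intersection = (pred * ref).sum(dim=1)
    union = pred.sum(dim=1) + ref.sum(dim=1)
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

def combined_loss(model, x, y, ref_map, weight=0.1):
    # 1. Standard classification term.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # 2. Saliency term: gradient of the correct-class score w.r.t. the input,
    #    kept differentiable (create_graph=True) so the penalty reaches the weights.
    score = logits.gather(1, y.unsqueeze(1)).sum()
    saliency, = torch.autograd.grad(score, x, create_graph=True)
    saliency = saliency.abs()

    # Rescale per sample to [0, 1] so the saliency is comparable to the reference map.
    peak = saliency.flatten(1).amax(dim=1).view(-1, *([1] * (saliency.dim() - 1)))
    saliency = saliency / (peak + 1e-6)

    return ce + weight * soft_dice_loss(saliency, ref_map).mean()
```

The weight controls the trade-off between the two terms; you will likely need to tune it (and possibly the saliency normalisation) for your data.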

Tirtha