
I have a deep learning network that outputs grayscale image reconstructions. In addition to good reconstruction performance (measured through mean squared error or some other measure like PSNR), I want to encourage these outputs to be sparse through a regularization term in the loss function.

One way to do this is to add an L1 regularization term that penalizes the sum of the absolute values of the pixel intensities. While this is a good start, is there any penalization that takes adjacency and spatial contiguity into account? It doesn't have to be a commonly used constraint/regularization term; even pointers to concepts or papers that go in this direction would be extremely helpful. In natural images, sparse pixels tend to form regions or patches rather than being dispersed or scattered. Are there ways to encourage contiguous regions of pixels to be sparse, as opposed to individual pixels?
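For concreteness, here is a minimal sketch of the plain L1 pixel penalty I have in mind (assuming a PyTorch-style setup where `recon` is a batch of reconstructed grayscale images; the function and argument names are just illustrative):

```python
import torch

def sparse_recon_loss(recon, target, l1_weight=1e-3):
    """Reconstruction loss plus an L1 penalty on pixel intensities.

    recon, target: tensors of shape (batch, 1, H, W), values in [0, 1].
    The L1 term pushes individual pixels toward zero but is agnostic to
    where the nonzero pixels sit spatially.
    """
    mse = torch.mean((recon - target) ** 2)   # reconstruction error
    l1 = torch.mean(torch.abs(recon))         # per-pixel sparsity penalty
    return mse + l1_weight * l1
```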

Jane Sully
  • I think the thing you are looking for is the L1 norm of the first derivative, or maybe the second, I'm not sure. The derivative is defined in some way for discrete things like pixels, and if you force it to be almost 0 everywhere it will make sure that regions are clumped together. – DuttaA Sep 18 '20 at 01:35
  • @DuttaA Interesting idea! Do you have any examples or links that provide more info? Also, is this similar to total variation regularization (which minimizes differences in adjacent pixels)? – Jane Sully Sep 18 '20 at 02:04
  • Yes, probably... but this will probably be covered in any Digital Image Processing book. It's an old idea, probably used to soften images, I guess. I don't know the exact details, but I think signal processing.SE might help. – DuttaA Sep 18 '20 at 02:52
  • Does it matter which pixels (or regions) become zero or can they be in random positions? What are you trying to achieve ultimately, i.e. what kind of images are you trying to generate (or reconstruct)? Also, when you say "sparse", do you mean that the pixels will be zero (black)? – nbro Sep 18 '20 at 09:41
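To make the first comment's suggestion concrete, here is a minimal sketch of an anisotropic total-variation-style penalty, i.e. the L1 norm of the first differences between neighbouring pixels (assuming the same PyTorch-style tensors as above; names are illustrative, not a standard API):

```python
import torch

def tv_penalty(recon):
    """Anisotropic total variation: L1 norm of horizontal and vertical
    first differences. Penalizing it discourages isolated nonzero pixels
    and favours piecewise-constant, spatially contiguous regions.

    recon: tensor of shape (batch, 1, H, W).
    """
    dh = recon[:, :, 1:, :] - recon[:, :, :-1, :]   # vertical differences
    dw = recon[:, :, :, 1:] - recon[:, :, :, :-1]   # horizontal differences
    return torch.mean(torch.abs(dh)) + torch.mean(torch.abs(dw))
```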

0 Answers