I have a deep learning network that outputs grayscale image reconstructions. In addition to good reconstruction performance (measured through mean squared error or another metric such as PSNR), I want to encourage these outputs to be sparse via a regularization term in the loss function.
One way to do this is to add an L1 regularization term that penalizes the sum of the absolute values of the pixel intensities. While this is a good start, is there a penalty that takes adjacency and spatial contiguity into account? It doesn't have to be a commonly used constraint or regularization term; even concepts or papers that point in this direction would be extremely helpful.

In natural images, sparse pixels tend to form regions or patches rather than being scattered across the image. Are there ways to encourage contiguous regions of pixels to be sparse, rather than penalizing each pixel individually?
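For concreteness, here is a minimal sketch of the kind of loss I currently have in mind (assuming a PyTorch setup; `recon`, `target`, and `lambda_sparse` are just placeholder names). The L1 term only looks at each pixel in isolation, which is exactly the limitation I'd like to get around:

```python
import torch
import torch.nn.functional as F

def loss_fn(recon: torch.Tensor, target: torch.Tensor, lambda_sparse: float = 1e-3) -> torch.Tensor:
    # Reconstruction term: mean squared error between the network output and the target image
    mse = F.mse_loss(recon, target)
    # Sparsity term: mean absolute pixel intensity of the reconstruction,
    # which pushes individual pixel values toward zero but ignores spatial structure
    l1 = recon.abs().mean()
    return mse + lambda_sparse * l1
```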