BACKGROUND: I am trying to think of rational approaches to designing deep learning models for image classification. One thought is to quantify the complexity of image datasets and use that to inform model design. By the way, I know that rational model design is much more complex than just quantifying image complexity, but right now, I'm in the brainstorming phase.
In the examples below, I qualitatively describe the complexity of the images as a function of the number of channels, the complexity of the foreground object, the complexity of the background, and the number of classes. Certainly, there can be many other factors, such as image dimensions or bits/pixel. (For reference: the MNIST variants are 28x28 greyscale, the CIFAR datasets are 32x32 RGB, and all use 8 bits per channel.)
MNIST digits --> greyscale, simple objects belonging to 10 classes on a uniform black background
MNIST digits corrupted --> greyscale, same as above but with added noise
MNIST fashion --> greyscale, more complex objects belonging to 10 classes on a uniform black background
CIFAR-10 --> RGB, even more complex objects belonging to 10 classes and complex backgrounds
CIFAR-100 --> RGB, same as above but with 10X more classes
Based on the above, one has a subjective sense that image datasets can be ordered by increasing complexity as shown below. It is then reasonable to hypothesize that the computer vision models should be progressively more complex as well (more neurons, more layers, more parameters, etc.).
MNIST digits < MNIST digits corrupted < MNIST fashion < CIFAR-10 < CIFAR-100
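To make the brainstorming concrete, here is a minimal sketch of two crude per-image complexity proxies I could imagine averaging over a dataset: Shannon entropy of the pixel-intensity histogram, and compressibility (zlib-compressed size over raw size, a rough stand-in for Kolmogorov complexity). The function names and the toy 28x28 test images are my own illustration, not from any standard library:

```python
import zlib
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit image's intensity histogram."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def compression_ratio(img: np.ndarray) -> float:
    """Compressed size / raw size via zlib; higher = harder to compress."""
    raw = img.tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

# Toy stand-ins for a "simple" and a "complex" 28x28 8-bit image:
rng = np.random.default_rng(0)
flat = np.zeros((28, 28), dtype=np.uint8)               # constant image
noisy = rng.integers(0, 256, (28, 28), dtype=np.uint8)  # uniform noise

print(shannon_entropy(flat), shannon_entropy(noisy))
print(compression_ratio(flat), compression_ratio(noisy))
```

On these toy inputs the constant image scores 0 bits of entropy and compresses almost completely, while the noise image scores near the 8-bit maximum and barely compresses, so at least the extremes order as expected. Whether such measures separate, say, CIFAR-10 from CIFAR-100 (where the extra difficulty comes from the label space, not the pixels) is exactly what I am unsure about.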
SPECIFIC QUESTION: Are there any existing quantitative measures of image complexity that capture these aspects of image datasets?
PRIOR RESEARCH: Various searches have led me to computational complexity and model complexity, which are not what I'm looking for.