Is there a measure of model complexity?
2 Answers
Yes. There are at least two measures of model complexity studied and used in learning theory: the VC dimension and the Rademacher complexity. If you're new to learning theory, you could take a look at this answer.
(Note: your question is not an exact duplicate of this, but the VC dimension is not specific to neural networks).
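Of the two, the empirical Rademacher complexity is the easier one to illustrate numerically: for a sample $S$ of size $n$ it is $\hat{\mathcal{R}}_S(H) = \mathbb{E}_\sigma\left[\sup_{h \in H} \frac{1}{n}\sum_i \sigma_i h(x_i)\right]$, where the $\sigma_i$ are random signs. Here is a minimal sketch that estimates it by Monte Carlo for a small finite hypothesis class of threshold classifiers; the data, the class, and the number of draws are illustrative assumptions, not anything prescribed by the definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: n points on the unit interval
x = np.sort(rng.uniform(0, 1, size=8))

# Finite hypothesis class: threshold classifiers h_t(x) = sign(x - t)
thresholds = np.linspace(0, 1, 11)
H = np.sign(x[None, :] - thresholds[:, None])  # shape (|H|, n)
H[H == 0] = 1.0

def empirical_rademacher(H, n_draws=2000, rng=rng):
    """Monte Carlo estimate of the empirical Rademacher complexity
    of a finite hypothesis class given as a (|H|, n) matrix of +/-1 labels."""
    n = H.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    corr = sigma @ H.T / n          # average correlation with random signs
    return corr.max(axis=1).mean()  # sup over hypotheses, mean over draws

print(round(empirical_rademacher(H), 3))
```

A richer hypothesis class can correlate with more random labelings, so its estimate is larger; that is the sense in which this quantity measures complexity.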

There is no "unit" to measure model complexity. However, you can take into account some key factors:
- input dimension
- output dimension
- model length (number of layers): Model length describes the number of transformations on the data from input to output. A higher number of layers means more possible transformations and thus the possibility to model more complex dependencies.
- model width (number of neurons per layer): Model width describes the dimensionality of the data going through a single layer. A higher number of neurons means higher information retention per layer.
- activation functions: Without any activation functions, even the largest model can only learn linear dependencies. Activation functions introduce non-linearity into the model and make complex dependency modeling possible.
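A crude but common way to combine depth and width into a single number is the trainable-parameter count. The helper below is a hypothetical sketch for a plain fully connected network; it shows that widening a layer grows the count quadratically while adding a layer of the same width grows it roughly linearly.

```python
def mlp_param_count(widths):
    """Number of trainable parameters (weights + biases) of a fully
    connected network whose layer widths are given input-first."""
    return sum(w_in * w_out + w_out
               for w_in, w_out in zip(widths, widths[1:]))

# Deeper (two hidden layers of 32) vs. wider (one hidden layer of 64):
print(mlp_param_count([10, 32, 32, 1]))  # 1441
print(mlp_param_count([10, 64, 1]))      # 769
```

Note that the parameter count ignores the activation functions entirely, which is exactly why it is only a rough proxy for complexity.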
Based on those factors, you can get a rough estimate of how complex your model is. Note that a more complex model does not imply better performance per se; complex models require careful fine-tuning.
To check whether your model is large enough to represent the transformations in your data, you can train it on a very small subset of the data with batch size 1 and no regularisation. If it learns the given samples perfectly (i.e. it can overfit them), it is likely to have enough capacity to fit your entire dataset as well.
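The overfitting sanity check above can be sketched end to end. This is a minimal, self-contained example, not a recommended training setup: the toy data, the single tanh hidden layer, the learning rate, and the epoch count are all illustrative assumptions. It trains with batch size 1 and no regularisation, then reports the training error, which should be near zero if the model has enough capacity to memorise the samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four toy samples of a nonlinear target (hypothetical data)
X = np.array([[-1.0], [-0.3], [0.4], [1.0]])
y = np.sin(3 * X)

# One-hidden-layer MLP with a tanh activation
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for epoch in range(5000):
    for i in rng.permutation(len(X)):      # batch size 1, no regularisation
        x_i, y_i = X[i:i + 1], y[i:i + 1]
        h = np.tanh(x_i @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - y_i
        # Backpropagation of the squared error on this single sample
        gW2 = h.T @ err;       gb2 = err.sum(axis=0)
        gh = err @ W2.T * (1 - h ** 2)
        gW1 = x_i.T @ gh;      gb1 = gh.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"training MSE: {mse:.2e}")  # near zero => enough capacity to memorise
```

If the training error stays high here, the model is too small (or the optimisation is misconfigured) and it will certainly not fit the full dataset either.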

The answer addresses neural networks only, not models in general. – lpounng Jun 16 '23 at 02:08