There are several ways to regularise a neural network, for example dropout or an L1 penalty. Both of these methods, and arguably most other regularisation methods, tend to prune or simplify the network in some way: dropout deactivates nodes during training, L1 shrinks the weights towards zero, and so on.
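For concreteness, here is a minimal sketch of the kind of setup I mean, using Keras with arbitrary layer sizes and regularisation strengths (the exact values are not important):

```python
import tensorflow as tf

# Baseline network: no regularisation.
baseline = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Regularised network: L1 penalty on the weights plus dropout between layers.
regularised = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,),
                          kernel_regularizer=tf.keras.regularizers.l1(1e-4)),
    tf.keras.layers.Dropout(0.5),  # randomly deactivates half of the units each training step
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l1(1e-4)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
```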
The main argument in favour of regularising a neural network is that by simplifying the model you force it to learn more general functions, which makes it more robust to overfitting and noisy input.
Once you have trained one model with regularisation and one without, you can compare their performance by calculating error metrics on their outputs. This shows whether or not the regularised model performs better than the standard model.
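To make that comparison step concrete, this is roughly what I have in mind, assuming both models above are already trained and `X_test`, `y_test` are a held-out test set:

```python
from sklearn.metrics import mean_squared_error

# Evaluate both models on the same held-out data.
mse_baseline = mean_squared_error(y_test, baseline.predict(X_test))
mse_regularised = mean_squared_error(y_test, regularised.predict(X_test))

print(f"baseline MSE:    {mse_baseline:.4f}")
print(f"regularised MSE: {mse_regularised:.4f}")
```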
However, given that the regularised model achieves better error metrics, how can I show that the weights of the regularised model have less variance (i.e. that the model is simpler) than those of the standard neural network?
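I am not sure this is even the right statistic, but a naive sketch of what I mean by "less variance", assuming the two Keras models above, would be something like:

```python
import numpy as np

def weight_variance(model):
    """Variance of all the model's weights, flattened into one vector."""
    flat = np.concatenate([w.flatten() for w in model.get_weights()])
    return np.var(flat)

print("baseline weight variance:   ", weight_variance(baseline))
print("regularised weight variance:", weight_variance(regularised))
```

Is comparing a number like this meaningful, or is there a more principled way to demonstrate that the regularised model is simpler?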