I think Cross-Validation serves a completely different purpose.
From your post, it looks like you think we would use CV to get a better estimate of the parameters of our model (i.e. that the model parameters after cross-validation are somehow closer to the parameters that would best fit the test data).
In fact, we use CV to get an estimate of generalization error while keeping our test set outside the training process. That is, we use it to answer the question "How large is the difference between my training and test performance likely to be?". If you have an estimate of this gap that you trust, you can be confident that when you deploy the model to your customers, it will actually perform as you expect.
If you're only going to build a single model, then you don't need cross-validation. You just train the model on the training data and test it on the test data. Then you have an unbiased estimate of generalization error.
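A minimal sketch of that single-model workflow, assuming scikit-learn and a synthetic dataset purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data, standing in for whatever dataset you actually have.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# One split: the test set is touched exactly once, at the very end.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))

# The train/test gap is the estimate of generalization error described above.
print(f"train={train_acc:.3f}  test={test_acc:.3f}  gap={train_acc - test_acc:.3f}")
```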
However, we might want to try out many different kinds of models and many different hyperparameter values (broadly, we might want to do hyperparameter tuning). To do this, we need to understand how generalization error changes as we change our hyperparameters, and then use that information to pick the hyperparameter values that we expect to minimize the actual error when we deploy the model.
You could do this by training different models on the training set, testing them on the test set, and recording the difference in performance between the two sets. If you use this as a basis to pick a model, though, you have effectively pulled the test set inside your training process (the hyperparameters were implicitly selected using the test set, since you picked the ones with the lowest test error). Because of this bias, your true generalization error is likely to be larger than what you observed.
As a stopgap, you could split your training set into a 'real training' set and a validation set. You would train candidate models on the 'real training' set and measure their performance on the validation set. The difference between the two would be a biased (but hopefully still useful) estimate of generalization error. You could then test against the test set just once, at the end, to get an unbiased estimate that you can use to decide whether or not to deploy the model.
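Here is a sketch of that stopgap workflow. The models, the regularization values, and the split sizes are all just illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Carve off the test set first; it is only touched once, at the very end.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Split the remaining data into a 'real training' set and a validation set.
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

# Candidate hyperparameter values (here, regularization strength C).
best_C, best_val_acc = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_fit, y_fit)
    val_acc = accuracy_score(y_val, model.predict(X_val))
    if val_acc > best_val_acc:
        best_C, best_val_acc = C, val_acc

# Refit the chosen model on the full training data, then test exactly once.
final_model = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
test_acc = accuracy_score(y_test, final_model.predict(X_test))
print(f"chosen C={best_C}, validation acc={best_val_acc:.3f}, test acc={test_acc:.3f}")
```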
A better workflow is to use CV on the training set to estimate generalization error during hyperparameter optimization. With k-fold cross-validation you get k performance samples per model, so you can do statistical testing to see whether one model truly has better generalization error than another, or whether the difference is just a fluke. This reduces the bias in your estimates of generalization error. Then, once hyperparameter optimization is complete, you can run your final model against the test set once to obtain a truly unbiased estimate of your final generalization error.
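A sketch of that CV-based workflow, where the two candidate models, the 5 folds, and the paired t-test over fold scores are illustrative choices rather than the only way to do it:

```python
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# k-fold CV on the training set only; the test set stays untouched.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
model_a = LogisticRegression(max_iter=1000)
model_b = RandomForestClassifier(n_estimators=200, random_state=0)

# The same splitter is reused, so both models see identical folds.
scores_a = cross_val_score(model_a, X_train, y_train, cv=cv)
scores_b = cross_val_score(model_b, X_train, y_train, cv=cv)

# k paired samples per model -> a paired test of whether one model is truly better.
t_stat, p_value = ttest_rel(scores_a, scores_b)
print(f"A: {scores_a.mean():.3f}  B: {scores_b.mean():.3f}  p={p_value:.3f}")

# After hyperparameter optimization is finished, refit the winner on all
# training data and touch the test set exactly once.
winner = model_a if scores_a.mean() >= scores_b.mean() else model_b
final_model = winner.fit(X_train, y_train)
print("final test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))
```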