I have not found any neural network training methods that recommend manually intervening in the training process while it is running. However, in some experiments I've done, this seems to be an effective way to speed up training.
For example, once the network has converged on a suboptimal solution, I can have it focus on particular sections of the training data for a while, and then switch back to the full training set to pull the network out of the local optimum. Gradient descent alone would not be able to do this: while the network is focused on a subset of the training data, the error on the full set becomes very large, and after I release the focus the error drops below the previous local optimum. Gradient descent would never choose to explore a region of the solution space with significantly higher error than the current one.
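To make the intervention concrete, here is a minimal sketch in plain NumPy. It uses an assumed toy linear-regression model in place of a real network, and the dataset, subset choice, learning rate, and step counts are all illustrative, not the actual setup from my experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: a noisy linear relationship (illustrative stand-in
# for real training data).
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

def mse(w, X, y):
    """Mean squared error of weights w on (X, y)."""
    r = X @ w - y
    return float(np.mean(r ** 2))

def gd_steps(w, X, y, steps, lr):
    """Run `steps` iterations of full-batch gradient descent."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w = np.zeros(3)
lr = 0.05

# Phase 1: ordinary training on the full dataset.
w = gd_steps(w, X, y, 100, lr)
loss_full = mse(w, X, y)

# Phase 2: manual intervention -- focus training on a subset.
# The error measured on the *full* dataset typically rises here,
# which plain gradient descent on the full objective would avoid.
idx = np.arange(50)
w = gd_steps(w, X[idx], y[idx], 100, lr)
loss_during_focus = mse(w, X, y)

# Phase 3: release the focus and resume training on all the data.
w = gd_steps(w, X, y, 100, lr)
loss_after = mse(w, X, y)
```

The interesting comparison is `loss_full` versus `loss_after`: the hypothesis is that the detour through a higher-error region can leave the network in a better basin than it started in (in this convex toy problem both phases converge to the same optimum, so it only illustrates the mechanics of the schedule, not the escape from a local optimum).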
Has there been any research into whether these kinds of manual intervention can improve on purely automated training?