I have an RBM model that takes an extremely long time to train and evaluate because of its large number of free parameters and the large amount of input data. What would be the most efficient way to tune its hyperparameters (batch size, number of hidden units, learning rate, momentum, and weight decay)?
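For context, here is a hedged sketch of the kind of tuning I have in mind: random search over a few of these hyperparameters using scikit-learn's `BernoulliRBM` (which exposes `n_components`, `learning_rate`, and `batch_size`; momentum and weight decay are not available there and would need a custom implementation), scored by held-out pseudo-log-likelihood. The data and search ranges below are placeholders, not my real setup.

```python
import numpy as np
from scipy.stats import loguniform, randint
from sklearn.neural_network import BernoulliRBM
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.RandomState(0)
X = (rng.rand(200, 16) > 0.5).astype(np.float64)  # toy binary data stand-in

def pseudo_likelihood(estimator, X, y=None):
    # Higher mean pseudo-log-likelihood on held-out data is better.
    return estimator.score_samples(X).mean()

param_distributions = {
    "n_components": randint(16, 128),       # number of hidden units
    "learning_rate": loguniform(1e-3, 1e-1),
    "batch_size": randint(10, 100),
}

search = RandomizedSearchCV(
    BernoulliRBM(n_iter=5, random_state=0),
    param_distributions,
    n_iter=8,                 # only 8 sampled configs, far cheaper than a full grid
    scoring=pseudo_likelihood,
    cv=3,
    random_state=0,
)
search.fit(X)
print(search.best_params_)
```

Even this random-search approach still requires one full training run per sampled configuration, which is exactly what is too slow here, hence the question.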