
While I usually have limited resources to train my machine learning models, I often find that my hyperparameter optimization procedure is not using all of my GPU and CPU, and in my experience that is because the results also depend on the batch size.

If you find in your project that a low batch size is necessary, how do you scale your project? In a multi-GPU scenario, I could imagine running different hyperparameter settings on different GPUs, but what other options are out there?
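For concreteness, here is a rough sketch of the multi-GPU idea I have in mind (the `train` function is just a placeholder for my actual training loop, and I assume one candidate setting per GPU): each hyperparameter configuration is launched in its own process and pinned to a single device via `CUDA_VISIBLE_DEVICES`.

```python
import multiprocessing as mp
import os


def train(config):
    """Placeholder for the real training routine: build the model, train it
    with the given hyperparameters, and return a validation score."""
    return 0.0  # dummy value


def run_trial(gpu_id, config, results):
    # Pin this child process to one GPU before any framework initializes CUDA,
    # so each hyperparameter setting gets its own device.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    results[gpu_id] = train(config)


if __name__ == "__main__":
    # One candidate hyperparameter setting per available GPU.
    configs = [
        {"lr": 1e-3, "batch_size": 8},
        {"lr": 1e-4, "batch_size": 8},
        {"lr": 1e-3, "batch_size": 16},
        {"lr": 1e-4, "batch_size": 16},
    ]

    with mp.Manager() as manager:
        results = manager.dict()
        procs = [
            mp.Process(target=run_trial, args=(gpu_id, config, results))
            for gpu_id, config in enumerate(configs)
        ]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(dict(results))
```

This works when the number of settings fits the number of GPUs, but it leaves each GPU underutilized if the batch size has to stay small, which is exactly the situation I am asking about.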

DukeZhou
boomkin
  • Hi and welcome to this community. It is not clear what you're asking. _If you find in your project that a low batch size is necessary, how do you scale your project?_ What do you mean by "scale your project"? What is the relation between low batch size and scalability? IMHO, it is not very clear, in general, what you're asking. Are you asking if there are any alternatives to hyper-parameter optimisation that runs on different GPUs? If yes, I could simply say: run the hyper-parameter optimisation using only one GPU. However, this is not what you're looking for, so be more specific and precise. – nbro Jul 26 '19 at 20:35

0 Answers