In the Deep Learning book by Goodfellow et al., section 11.4.5 (p. 438), the following claims can be found:
Currently, we cannot unambiguously recommend Bayesian hyperparameter optimization as an established tool for achieving better deep learning results or for obtaining those results with less effort. Bayesian hyperparameter optimization sometimes performs comparably to human experts, sometimes better, but fails catastrophically on other problems. It may be worth trying to see if it works on a particular problem but is not yet sufficiently mature or reliable.
Personally, I have never used Bayesian hyperparameter optimization; I prefer the simplicity of grid search and random search.
As a first approximation, I'm considering relatively simple tasks, such as multi-class classification problems tackled with DNNs and CNNs.
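For concreteness, this is roughly the kind of random search I currently run (a minimal sketch using scikit-learn's RandomizedSearchCV; the classifier, parameter ranges, and dataset are placeholders, not tied to any specific experiment):

```python
# Minimal sketch of a random-search workflow for a multi-class
# classification task; model, ranges, and data are placeholders.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Hyperparameter distributions to sample configurations from.
param_distributions = {
    "hidden_layer_sizes": [(64,), (128,), (64, 64)],
    "alpha": loguniform(1e-5, 1e-1),              # L2 penalty
    "learning_rate_init": loguniform(1e-4, 1e-1),
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=200),
    param_distributions=param_distributions,
    n_iter=20,        # number of random configurations to evaluate
    cv=3,
    n_jobs=-1,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Grid search is the same idea, except that an exhaustive grid of configurations is evaluated instead of randomly sampled ones.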
In which cases should I consider it? Is it worth the effort?
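For reference, my understanding is that a minimal Bayesian-style attempt on the same kind of problem would look roughly like the sketch below (using Optuna's default TPE sampler as an assumed example; I have not actually used this, and the parameter ranges are again placeholders):

```python
# Hypothetical sketch of a model-based (Bayesian-style) search with
# Optuna's default TPE sampler; I have not used this in practice.
import optuna
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

def objective(trial):
    # Each hyperparameter is proposed based on previous trials,
    # instead of being sampled independently at random.
    alpha = trial.suggest_float("alpha", 1e-5, 1e-1, log=True)
    lr = trial.suggest_float("learning_rate_init", 1e-4, 1e-1, log=True)
    hidden = trial.suggest_categorical("hidden_layer_sizes", [64, 128])
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), alpha=alpha,
                        learning_rate_init=lr, max_iter=200)
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params, study.best_value)
```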