When the eval() function in the acquisition function class involves a gradient backward pass, the gradient cannot be propagated, because line 102 of hebo/acq_optimizers/evolution_optimizer.py is `with torch.no_grad():`. Deleting this line solves the problem. Why was this line added, and will deleting it affect the final output?
Background: I am using the genetic algorithm to find good hyperparameters for neural network training with PyTorch.
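For illustration, here is a minimal standalone sketch (toy tensors only, not HEBO code) of the symptom: anything computed under `torch.no_grad()` has no autograd graph attached, so a later `backward()` call fails.

```python
import torch

x = torch.randn(3, requires_grad=True)

# Outside no_grad: a graph is recorded and backward works.
y = (x ** 2).sum()
y.backward()

# Inside no_grad: no graph is recorded, so backward raises a RuntimeError.
with torch.no_grad():
    z = (x ** 2).sum()

try:
    z.backward()
except RuntimeError as e:
    print('backward failed under no_grad:', e)
```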
Okay, I got it. Thank you for your response, and I understand your point. However, selecting appropriate hyperparameters for neural networks is a common optimization problem. If this function is not designed to handle that, I would appreciate it if this could be clearly mentioned in the documentation to avoid any misunderstandings. This is particularly important because the HEBO usage guide suggests that it is capable of optimizing neural network hyperparameters.
HEBO won the NeurIPS 2020 Black-Box Optimisation Challenge for Machine Learning, so it is of course compatible with neural network hyperparameter optimization. You need gradients when you evaluate the black box, i.e. when you train your model with the suggested hyperparameters and compute a validation loss, but you do not need gradients when HEBO internally optimizes its acquisition function to find the next set of hyperparameters to try.
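To make that division of labour concrete, below is a minimal sketch of the intended usage, assuming HEBO's documented `DesignSpace` / `suggest` / `observe` API; the toy data, the tiny network, and the hyperparameter ranges are placeholders. Gradients are used only inside the objective function, i.e. in ordinary PyTorch training; HEBO's internal acquisition optimization (including the `no_grad` block in the evolution optimizer) never needs them.

```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from hebo.design_space.design_space import DesignSpace
from hebo.optimizers.hebo import HEBO

# Toy regression data standing in for a real training/validation set.
X = torch.randn(256, 10)
y = torch.randn(256, 1)

def obj(params: pd.DataFrame) -> np.ndarray:
    # Black-box evaluation: gradients ARE used here, via normal PyTorch training.
    losses = []
    for _, row in params.iterrows():
        hidden = int(row['hidden'])
        model = nn.Sequential(nn.Linear(10, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        optim = torch.optim.Adam(model.parameters(), lr=float(row['lr']))
        for _ in range(20):
            optim.zero_grad()
            loss = nn.functional.mse_loss(model(X), y)
            loss.backward()   # backward happens here, outside HEBO's internals
            optim.step()
        # In a real setup this would be a validation loss on held-out data.
        losses.append(loss.item())
    return np.asarray(losses).reshape(-1, 1)

space = DesignSpace().parse([
    {'name': 'lr',     'type': 'num', 'lb': 1e-4, 'ub': 1e-1},
    {'name': 'hidden', 'type': 'int', 'lb': 8,    'ub': 64},
])

opt = HEBO(space)
for i in range(10):
    rec = opt.suggest(n_suggestions=4)  # acquisition optimization; no gradients needed here
    opt.observe(rec, obj(rec))
    print(f'iter {i}: best observed loss {opt.y.min():.4f}')
```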