Problem and algorithms ignore the torch.set_default_dtype #51
I just ran into a problem when trying to run optimization problems in double precision. I thought that calling torch.set_default_dtype(torch.float64) would be enough for evotorch to create all of its internal tensors in double precision, but this is not the case. Consider the following simple example of running CMA-ES for a single step:
If we want it to be float64, we have to specify it in the definition of problem. Indeed, running this with the dtype passed explicitly gets us a best candidate with double precision. Why do we have to specify the type twice? Wouldn't we want Problem to inherit the default float dtype?