Moving to MBO rather than random search is straightforward and already done in #8, but it also calls for reconsidering the tuning budget.
The current budget is 50 * n_hyperparams, which scales from 50 to 400 evaluations (for XGBAFT), or more likely 350 because nrounds is now tuned internally via early stopping.
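For reference, a minimal sketch of how that budget could be wired up with mlr3tuning/mlr3mbo; the hyperparameter count is a placeholder, not taken from the actual search spaces in this repo:

```r
library(mlr3tuning)
library(mlr3mbo)   # provides tnr("mbo")

# Hypothetical: 7 tuned hyperparameters (e.g. XGBAFT without nrounds).
n_hyperparams = 7

# Budget of 50 evaluations per tuned hyperparameter, i.e. 350 here.
terminator = trm("evals", n_evals = 50 * n_hyperparams)
tuner = tnr("mbo")
```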
For the inner resampling strategy:
3-fold CV -> 2 repeats of 3-fold CV?
For reasonably sized tasks and fast-ish learners this should only help, but for the large/slow cases this is going to cause us to run into timeouts.
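In mlr3 terms this is just a swap of the inner resampling descriptor, roughly doubling the cost per tuning evaluation (sketch only):

```r
library(mlr3)

# Current inner resampling: 3-fold CV (3 iterations per evaluation)
inner = rsmp("cv", folds = 3)

# Proposed: 2 repeats of 3-fold CV (6 iterations per evaluation)
inner = rsmp("repeated_cv", repeats = 2, folds = 3)
```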
For the outer resampling:
5-fold CV -> 2 repeats of 3-fold CV?
Scaling the outer resampling has the largest effect on runtime, since tuning scales with it, and it also determines the number of compute jobs on the cluster (one per outer iteration).
I'll need to do some reasonable runtime testing to get a grip here, but I'd like to avoid massively over- or undershooting what we could/should have done.
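A rough sketch of the resulting nested setup for one task, reusing the terminator, tuner, and inner resampling from the sketches above; the task, learner, measure, and tuned parameter are placeholders, not this benchmark's survival setup:

```r
library(mlr3)
library(mlr3tuning)
library(mlr3learners)
library(paradox)

task = tsk("spam")    # placeholder task
learner = lrn("classif.xgboost",
  nrounds = 100,
  eta = to_tune(1e-4, 1, logscale = TRUE)  # placeholder search space
)

at = auto_tuner(
  tuner      = tuner,
  learner    = learner,
  resampling = inner,
  measure    = msr("classif.ce"),
  terminator = terminator
)

# Each outer iteration becomes one compute job on the cluster, so 5-fold CV
# means 5 jobs and 2x3-fold CV means 6 jobs per task/learner combination,
# each of which runs the full tuning budget.
outer = rsmp("repeated_cv", repeats = 2, folds = 3)
rr = resample(task, at, outer)
```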
For the inner resampling probably 2x3-fold is okay, but for the outer resampling I could do what Sebastian did in the OpenML-CTR23 benchmark and use, say:

- 10x3-fold CV for tasks with N < 1000
- 2x3-fold CV for tasks with 1000 <= N < 10000
- 3-fold CV for tasks with N >= 10000

This is probably more efficient (both data- and compute-wise) than doing 2x3-fold even for the largest task.
The original 5-fold CV for everything was not great in any case.
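A sketch of that rule as a helper; the function name is made up, the thresholds are the ones proposed above:

```r
library(mlr3)

# Pick the outer resampling based on task size (illustrative helper).
choose_outer_resampling = function(task) {
  n = task$nrow
  if (n < 1000) {
    rsmp("repeated_cv", repeats = 10, folds = 3)   # 30 outer iterations
  } else if (n < 10000) {
    rsmp("repeated_cv", repeats = 2, folds = 3)    # 6 outer iterations
  } else {
    rsmp("cv", folds = 3)                          # 3 outer iterations
  }
}
```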