This directory contains example scripts for running a hyperparameter sweep with Weights & Biases.
- `config.yaml` contains just one possible hyperparameter grid, designed for random search over an attentive LSTM; edit that file directly to build a hyperparameter grid appropriate for your problem (a sketch of the format is given after this list).
- Consider also running Bayesian search (`method: bayes`) instead of random search; see the W&B sweep configuration documentation for more information.
- When running `sweep.py` you must provide `--entity`, `--project`, and `--sweep_id`. It can otherwise be called with the same arguments as `yoyodyne-train`; any hyperparameters in the sweep config will override command-line hyperparameter arguments.
- By default, `random` and `bayes` search run indefinitely, until they are killed. To specify a fixed number of samples, provide the `--count` argument to `sweep.py`.
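To give a sense of the format, here is a minimal sketch of a W&B sweep configuration; the metric name and hyperparameter ranges below are illustrative placeholders, not the contents of the bundled `config.yaml`:

```yaml
# Illustrative sketch only; see config.yaml for the actual grid.
method: bayes  # or "random"
metric:
  name: val_accuracy  # placeholder: whatever metric the training run logs
  goal: maximize
parameters:
  learning_rate:
    distribution: log_uniform_values
    min: 0.0001
    max: 0.01
  embedding_size:
    values: [64, 128, 256]
```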
For more information about W&B sweeps, see the W&B documentation.
Execute the following to create and run the sweep; here `${ENTITY}` and `${PROJECT}` are assumed to be pre-specified environment variables.
```bash
# Creates a sweep; save the sweep ID as ${SWEEP_ID} for later.
wandb sweep --entity "${ENTITY}" --project "${PROJECT}" config.yaml
# Runs the sweep itself.
./sweep.py --entity "${ENTITY}" --project "${PROJECT}" \
    --sweep_id "${SWEEP_ID}" --count "${COUNT}" ...
```
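If you prefer to capture the sweep ID programmatically rather than copying it by hand, the following sketch works under the assumption that `wandb sweep` logs a line of the form `wandb: Created sweep with ID: <id>`:

```bash
# Assumes wandb logs "wandb: Created sweep with ID: <id>"; adjust the
# pattern if the log format differs in your wandb version.
SWEEP_ID="$(wandb sweep --entity "${ENTITY}" --project "${PROJECT}" config.yaml 2>&1 \
    | sed -n 's/.*Created sweep with ID: //p')"
echo "Sweep ID: ${SWEEP_ID}"
```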
Then, one can retrieve the results as follows:
1.  Visit the following URL:
    `https://wandb.ai/${ENTITY}/${PROJECT}/sweeps/${SWEEP_ID}`
2.  Switch to "table view" by either clicking on the spreadsheet icon in the top left or typing Ctrl+J.
3.  Click on the downward arrow link, select "CSV Export", then click "Save as CSV".
Alternatively, one can use `best_hyperparameters.py` to retrieve the hyperparameters of the best run, formatted as CLI flags:
```bash
./best_hyperparameters.py --entity "${ENTITY}" --project "${PROJECT}" \
    --sweep_id "${SWEEP_ID}"
```
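Because the output is formatted as CLI flags, it can be spliced into a final training run via command substitution; a minimal sketch, where the trailing `...` stands in for your usual `yoyodyne-train` arguments:

```bash
# Retrains with the best hyperparameters found by the sweep; "..." is a
# placeholder for the remaining yoyodyne-train arguments (data paths, etc.).
yoyodyne-train \
    $(./best_hyperparameters.py --entity "${ENTITY}" --project "${PROJECT}" \
        --sweep_id "${SWEEP_ID}") \
    ...
```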