Hyperparameter tuning best practices. #7987
drisspg
started this conversation in
Help: Best practices
Replies: 1 comment 2 replies
-
What do you mean when you say that the guide at https://github.com/explosion/projects/tree/v3/integrations/wandb doesn't make use of sweeps? I have used the wandb integration and set up hyperparameter tuning with wandb sweeps: I followed the sweeps_using_config.py approach and created a spacy project command that uses it. The only thing I couldn't get to work was overrides. Does anyone know how to pass overrides in this setup?
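One thing that may help with overrides inside a project command: in addition to command-line flags, the spacy CLI can pick up config overrides from the SPACY_CONFIG_OVERRIDES environment variable (worth double-checking against the current spaCy docs). A minimal sketch of encoding sweep parameters that way, with illustrative parameter names:

```python
import os

def set_override_env(params):
    """Encode a flat dict of sweep parameters (e.g. values proposed by a
    wandb sweep) as dotted config overrides in the SPACY_CONFIG_OVERRIDES
    environment variable, so `spacy train` run from a project command
    picks them up. Parameter names below are illustrative."""
    os.environ["SPACY_CONFIG_OVERRIDES"] = " ".join(
        f"--{key} {value}" for key, value in params.items()
    )

set_override_env({"training.dropout": 0.2, "training.max_epochs": 10})
print(os.environ["SPACY_CONFIG_OVERRIDES"])
# --training.dropout 0.2 --training.max_epochs 10
```

The same dotted keys (`training.dropout`, etc.) also work as regular CLI flags on `spacy train`, so this is only needed when you can't pass flags directly, as inside a fixed project.yml command.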
-
I have recently switched an existing pipeline from spaCy 2 to spaCy 3. Previously I was training the pipeline's components with a custom script based on the old documentation rather than the spacy train CLI; since switching I have migrated training to the CLI.

I am now looking at hyperparameter tuning for the multi-label text classifier and the NER components of the pipeline. The most detailed documentation I could find on a procedure for doing so is https://github.com/explosion/projects/tree/v3/integrations/wandb. While that approach should work, it doesn't seem to take advantage of some of the stated integrations, Ray and wandb, both of which ship powerful tuning frameworks.

Has anyone tried to integrate spaCy hyperparameter tuning with either ray[tune] (https://docs.ray.io/en/master/tune/index.html) or W&B Sweeps (https://docs.wandb.ai/guides/sweeps)? If so, did you end up essentially reimplementing spacy.training.loop.train(), with some code injected to register and monitor the metrics each framework needs?
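In case it helps: one way to avoid reimplementing spacy.training.loop.train() is to treat each tuning trial as a subprocess call to the spacy train CLI with dotted config overrides, then read the score back from the saved model's meta.json and report it to Ray Tune or wandb. A minimal sketch; the config path, output directory, and metrics key are assumptions about your project layout:

```python
import json
import subprocess

def build_train_command(config_path, output_dir, overrides):
    """Assemble the spacy train invocation for one tuning trial.
    Dotted override flags (e.g. --training.dropout) are supported
    by the spaCy 3 CLI."""
    cmd = ["python", "-m", "spacy", "train", config_path,
           "--output", output_dir]
    for key, value in overrides.items():
        cmd.extend([f"--{key}", str(value)])
    return cmd

def run_trial(overrides, output_dir="./trial-output"):
    """One trial: train via the CLI, then read scores from the saved
    model's meta.json instead of hooking into the training loop.
    Sketch only; which performance keys exist depends on your
    pipeline components (e.g. cats_macro_f, ents_f)."""
    cmd = build_train_command("config.cfg", output_dir, overrides)
    subprocess.run(cmd, check=True)
    with open(f"{output_dir}/model-best/meta.json") as f:
        meta = json.load(f)
    return meta["performance"]  # report this dict to the tuner

# Example of the command a trial would run:
print(build_train_command("config.cfg", "./trial-output",
                          {"training.dropout": 0.2}))
```

The trade-off is that the tuner only sees the final score per trial rather than per-epoch metrics, so schedulers that rely on early stopping (e.g. ASHA) lose some of their benefit; for per-step reporting you would still need a custom logger inside the training config.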