Global Style Token Module #605
Conversation
For config file versioning:
Oh, since you made the field optional and defaulting to False, this may already all happen automatically for you. The test would be to load a model written with 0.2 and write it back with 0.3; if it works as expected, it's all good.
And let's bump to 0.3.0, without the
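Here is a minimal sketch of why this can happen automatically, assuming a pydantic-v2-style config schema (the class name and field layout are illustrative, not the actual EveryVoice schema): an optional field with a False default means a 0.2-era config that lacks the key parses cleanly, and writing it back out fills the default in explicitly.

```python
# Illustrative stand-in for the real config class, showing the
# optional-field-with-default behaviour described above.
from pydantic import BaseModel

class GSTConfigSketch(BaseModel):
    use_global_style_token_module: bool = False  # absent in 0.2 configs -> False

old_config: dict = {}                 # pretend this was written by 0.2 (no GST key)
cfg = GSTConfigSketch(**old_config)   # loads without error
assert cfg.use_global_style_token_module is False
print(cfg.model_dump())               # {'use_global_style_token_module': False}
```

Round-tripping a real 0.2 config through the 0.3 schema, as suggested above, is the actual test; this snippet only shows the mechanism.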
When I run all tests, I get the error below (see attached log file):

I tried to train a FP; it starts but crashes before finishing the first epoch when I run it as-is with this:

If I use
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main     #605      +/-   ##
==========================================
- Coverage   76.49%   76.34%   -0.16%
==========================================
  Files          47       47
  Lines        3476     3483       +7
  Branches      477      479       +2
==========================================
  Hits         2659     2659
- Misses        714      721       +7
  Partials      103      103

☔ View full report in Codecov by Sentry.
Needs the Copyright exceptions listed in LICENSE, otherwise looks good.
After rebuilding a new environment from scratch and running a few more sanity tests, I was able to use use_global_style_token_module: true with no more errors. Looks good.
PR Goal?
Add a Global Style Token Module à la https://arxiv.org/abs/1803.09017
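For context, here is a very reduced sketch of what a GST layer from that paper looks like: a bank of learnable style tokens attended over by a reference embedding. This is illustrative PyTorch only, not the code added in this PR; all names and dimensions are made up.

```python
import torch
import torch.nn as nn

class GSTSketch(nn.Module):
    """Toy Global Style Token layer in the spirit of arXiv:1803.09017."""

    def __init__(self, ref_dim=128, num_tokens=10, token_dim=256, num_heads=4):
        super().__init__()
        # Learnable bank of style tokens; attention picks a soft combination of them.
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim))
        self.ref_proj = nn.Linear(ref_dim, token_dim)
        self.attn = nn.MultiheadAttention(
            embed_dim=token_dim, num_heads=num_heads, batch_first=True
        )

    def forward(self, ref_embedding: torch.Tensor) -> torch.Tensor:
        # ref_embedding: (batch, ref_dim), produced by a reference encoder
        query = self.ref_proj(ref_embedding).unsqueeze(1)           # (B, 1, D)
        keys = torch.tanh(self.tokens).unsqueeze(0).expand(
            ref_embedding.size(0), -1, -1
        )                                                           # (B, T, D)
        style, _ = self.attn(query, keys, keys)                     # (B, 1, D)
        return style.squeeze(1)  # style embedding, combined with encoder outputs downstream
```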
Fixes?
#293
Feedback sought?
Sanity-check training/synthesis with GST turned on.
Priority?
medium
Tests added?
It's all model code, so no tests were added, but I still need to update the existing tests since the commands and schemas have changed.
How to test?
Confidence?
Modelling - medium
I'm medium-confident on the modelling side, since it has successfully trained models and seems to result in better models when training with noisy data. That said, I'm not including it by default due to the extra complications at inference time (you have to provide reference audio).
Versioning - low
Here's an example of a new item in a config. Should I be doing something in the versioning to automatically add use_global_style_token_module=False if Version==1.0 and that key is not found? Should I also be bumping up the version of the config here?
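One possible shape for that versioning hook, as a hypothetical sketch (the function name, the "VERSION" key, and the version numbers are illustrative assumptions, not the project's actual upgrade mechanism):

```python
from packaging.version import Version

def upgrade_config(raw: dict) -> dict:
    """Insert the GST default explicitly when loading an older config."""
    if (
        Version(str(raw.get("VERSION", "1.0"))) < Version("1.1")  # i.e. configs at 1.0 or older
        and "use_global_style_token_module" not in raw
    ):
        raw["use_global_style_token_module"] = False
    return raw

print(upgrade_config({"VERSION": "1.0"}))
# {'VERSION': '1.0', 'use_global_style_token_module': False}
```

Whether this belongs in a validator or in a separate migration step depends on how the project already handles config versions.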
Version change?
A minor version bump for FastSpeech2 and changes to the schemas.
Related PRs?
EveryVoiceTTS/FastSpeech2_lightning#100