Replies: 2 comments 2 replies
-
An example, but with a voice message instead of YouTube audio and audio separation.
0 replies
-
I don't know what to do anymore; would calling torch.cuda.empty_cache() everywhere help? (See the sketch below.)
2 replies
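For what it's worth: `torch.cuda.empty_cache()` only releases cached blocks whose tensors are already gone; it cannot free memory that live Python references still pin, so calling it everywhere changes little on its own. A runnable illustration of the order that matters (plain PyTorch, nothing project-specific):

```python
import gc
import torch

x = torch.empty(64, 1024, 1024, device="cuda")  # ~256 MiB of float32
print(torch.cuda.memory_allocated() // 2**20, "MiB allocated")

torch.cuda.empty_cache()  # no effect: x is still referenced
print(torch.cuda.memory_allocated() // 2**20, "MiB still allocated")

del x          # drop the last reference first...
gc.collect()   # ...and break any reference cycles
torch.cuda.empty_cache()  # ...then the cached block can go back to the driver
print(torch.cuda.memory_reserved() // 2**20, "MiB still reserved by the cache")
```

The same applies to a model: every reference to it (including ones captured in closures or kept on the bot object) has to be dropped before `empty_cache()` can return its weights' memory.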
-
I have a Telegram bot that receives a link to YouTube audio, converts the acapella (vocal) track to a model's voice using SVC, and then sends the resulting audio back to the user.
Per request, the inference flow is: download the audio, separate the vocals from the instrumental, run SVC on the vocals, and send the result to the user.
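A minimal sketch of that flow; `download_audio`, `separate_vocals`, `svc_convert`, and `mix` here are hypothetical placeholders for whatever tooling the bot actually calls, not a real API:

```python
from pathlib import Path

# Placeholder stubs standing in for the real tooling (e.g. a yt-dlp
# download, a UVR/Demucs-style separator, and the SVC converter).
def download_audio(url: str) -> Path: ...
def separate_vocals(audio: Path) -> tuple[Path, Path]: ...
def svc_convert(vocals: Path) -> Path: ...
def mix(vocals: Path, instrumental: Path) -> Path: ...

def handle_request(youtube_url: str) -> Path:
    audio = download_audio(youtube_url)
    vocals, instrumental = separate_vocals(audio)  # the VRAM-hungry step
    converted = svc_convert(vocals)                # SVC loads its model here
    return mix(converted, instrumental)            # send this file to the user
```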
After the first inference, SVC keeps holding about 1.5-2 GB of GPU memory, so there isn't enough left (I only have 4 GB) for the audio separation step (acapella/instrumental), and separation runs much slower than it does with the full GPU available.
Maybe I'm misunderstanding something? I can see that SVC removes the model/cache after inference, but GPU usage stays at 1.5-2 GB anyway. Thanks for any help.
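Two standard torch.cuda counters can show what that leftover 1.5-2 GB actually is: memory still pinned by live tensors versus blocks the caching allocator merely keeps reserved. Anything beyond both that nvidia-smi still shows is the per-process CUDA context, which only exiting the process releases. A small helper:

```python
import torch

def report_gpu(tag: str) -> None:
    """Print live-tensor memory vs. allocator cache, in MiB."""
    allocated = torch.cuda.memory_allocated() / 2**20  # pinned by live tensors
    reserved = torch.cuda.memory_reserved() / 2**20    # held by the cache
    print(f"{tag}: allocated={allocated:.0f} MiB, reserved={reserved:.0f} MiB")
```

If `allocated` stays high after inference, some reference to the model or its outputs is still alive; if only `reserved` is high, `torch.cuda.empty_cache()` will return it before the separation step runs.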