
Simple command line #20

Open
njarecki opened this issue Jun 17, 2023 · 7 comments

Comments

@njarecki

Hey mangio, amazing work!
Wondering if you can help.
I need simple command-line access: pass one command with an input name, an output name, and a couple of config variables (model, pitch, etc.), and get an output file with the converted voice written to a directory. You seem to be the closest of anyone to that, but it requires the interactive CLI, whereas I want to drive it from a Debian shell script on my Google Cloud server. So all I need is to run it at the command line in one command. Is this something you could implement? I do have a small budget for this if it's helpful. I'm working on an AI music project; we have this working with so-vits, but RVC would be amazing. In any case, keep up the good work!

@njarecki
Author

njarecki commented Jun 18, 2023

Also, I've been experimenting. When I run batch infer on one file with the standard RVC distribution, it takes about 14 seconds on a T4 to process my 30-second input file using dio. When I run it interactively with your command line, it takes only seven seconds. So I'm thinking there's some startup overhead that I pay every time I use RVC batch infer, whereas your interactive mode pays that cost only once. Is there any way around this, so that I can issue a shell command (as part of a script) to your version, give it an input file, and get the same speed as your interactive mode? Does that make sense? Some way to pay that overhead only once, or to persist whatever makes it faster? This is important to us for a web server we are building that auto-converts users' voice files.

So, I have a budget for this if that's helpful at all! Right now we post the file to our own web server and use a shell script to infer it. That's the workflow we hope to keep, but with your speedier processing! Best, Nick.
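The overhead pattern described above suggests an obvious workaround: keep one long-lived process that loads the model once, then converts any number of files. A minimal Python sketch, assuming hypothetical `load_model` and `infer` callables as placeholders for whichever RVC entry points actually get wired in:

```python
import sys


class PersistentConverter:
    """Load the (expensive) model once, then convert many files cheaply."""

    def __init__(self, load_model):
        # load_model is a hypothetical callable standing in for RVC's
        # slow startup step (loading the model/hubert weights).
        self.model = load_model()

    def convert(self, infer, in_path, out_path):
        # infer is a hypothetical callable:
        # (model, in_path, out_path) -> writes the converted file.
        infer(self.model, in_path, out_path)


def serve_stdin(converter, infer, stream=sys.stdin):
    """Read "input output" pairs, one per line, converting each in turn."""
    written = []
    for line in stream:
        line = line.strip()
        if not line:
            continue
        in_path, out_path = line.split()
        converter.convert(infer, in_path, out_path)
        written.append(out_path)
    return written
```

A Debian shell script can then pipe `input output` pairs into this process's stdin, so the model-load cost is paid once per process rather than once per file.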

@njarecki
Author

Hi, just a bump on this issue. Hope all is well.

@stu00608

Hello! Can you check this issue and my PR? I added a simple command line in my fork. :D
#31 (comment)
#41

@njarecki
Author

njarecki commented Jul 19, 2023 via email

Hi! This is great, but there is a startup-time penalty. Is there a way to have this running as a daemon or background process so we don't pay the startup overhead? We are using it for a simple Flask server that processes incoming API requests. --Nicholas

@stu00608

Hi njarecki, if you want to use it via an API (Python or JS), you can run the Gradio web app and find "Use via API" in the bottom footer; from there you can drive the web app from Python. That way, as long as the web app stays up, it keeps running in the background!
Just make sure you figure out the fn_index for each step.
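For the "Use via API" route, the classic Gradio HTTP interface can also be called without the Python client, by POSTing JSON to the running app's `/api/predict` endpoint. A hedged sketch using only the standard library; the `fn_index` value, the URL, and the contents of `data` below are placeholders that depend entirely on the specific app, so check the app's own "Use via API" page for the real values:

```python
import json
import urllib.request


def build_payload(fn_index, data):
    """Gradio's classic HTTP API takes a JSON body with fn_index + data."""
    return {"fn_index": fn_index, "data": data}


def call_gradio(base_url, fn_index, data, timeout=60):
    """POST one inference request to a running Gradio app."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/predict",
        data=json.dumps(build_payload(fn_index, data)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["data"]


if __name__ == "__main__":
    # Hypothetical call: fn_index=2 and the arguments are placeholders.
    print(call_gradio("http://127.0.0.1:7860", 2, ["input.wav", 0]))
```

Because the Gradio process stays resident, every request hits an already-loaded model, which is the daemon behavior asked about above.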

@njarecki
Author

njarecki commented Jul 20, 2023 via email

@stu00608

I'm not sure about the increased latency; are you running it with multiple threads?
As for fn_index: it's the indicator I found on the gradio_client API page. You'll see every API call hitting the same function, but with a different fn_index.
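One way to discover the fn_index values discussed above: a running Gradio app serves its configuration as JSON at `/config`, and each entry in its "dependencies" list corresponds to one fn_index (its position in that list). A small stdlib sketch; the URL is a placeholder for wherever your app is running:

```python
import json
import urllib.request


def list_fn_indices(config):
    """Map each position in the app's "dependencies" list to its api_name.

    The position in the list is the fn_index used by the HTTP API.
    """
    deps = config.get("dependencies", [])
    return [(i, dep.get("api_name")) for i, dep in enumerate(deps)]


def fetch_config(base_url, timeout=30):
    """Download a running Gradio app's /config JSON."""
    with urllib.request.urlopen(base_url.rstrip("/") + "/config",
                                timeout=timeout) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Hypothetical local app; substitute your own host and port.
    for idx, name in list_fn_indices(fetch_config("http://127.0.0.1:7860")):
        print(idx, name)
```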
