
Inference speed comparison #7

Open
sujeendran opened this issue Jul 26, 2019 · 1 comment
Labels
question Further information is requested

Comments

sujeendran commented Jul 26, 2019
Hi! Could you share some details about the inference speed compared to Griffin-Lim/WaveNet/WaveRNN?

bshall (Owner) commented Jul 29, 2019

Hi @sujeendran,

Sorry about the delay. With a GeForce GTX 1080 Ti I'm getting around 3700 samples per second (just under 0.25x real-time for 16 kHz audio), and on an Intel Core i7-8700K CPU @ 3.70GHz about 1700 samples per second.

So it's definitely faster than vanilla WaveNet, but not quite real-time. However, since there is only a single forward GRU on the autoregressive path, I'm sure you could get better than real-time with some engineering, e.g. sparsifying the GRU or writing a custom CUDA kernel.
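To make the bottleneck concrete, here is a minimal, hypothetical benchmark of the kind of loop described above: one GRU cell stepped sample-by-sample, feeding its own output back in. This is not the repository's code; the hidden size (896) and the timing harness are assumptions for illustration only, and real throughput depends heavily on the framework and hardware.

```python
# Hypothetical benchmark (NOT the repo's code): time a single-GRU
# autoregressive loop, the path bshall identifies as the bottleneck.
import time
import numpy as np

def gru_step(x, h, Wx, Wh, b):
    """One GRU cell step with hidden size H; gates packed as [r, z, n]."""
    H = h.shape[0]
    gx = Wx @ x + b          # input contribution to all three gates
    gh = Wh @ h              # recurrent contribution
    r = 1.0 / (1.0 + np.exp(-(gx[:H] + gh[:H])))        # reset gate
    z = 1.0 / (1.0 + np.exp(-(gx[H:2*H] + gh[H:2*H])))  # update gate
    n = np.tanh(gx[2*H:] + r * gh[2*H:])                # candidate state
    return (1.0 - z) * n + z * h                        # new hidden state

def benchmark(hidden=896, steps=2000, seed=0):
    """Return (samples/sec, real-time factor at 16 kHz) for `steps` samples."""
    rng = np.random.default_rng(seed)
    Wx = rng.standard_normal((3 * hidden, 1)) * 0.01
    Wh = rng.standard_normal((3 * hidden, hidden)) * 0.01
    b = np.zeros(3 * hidden)
    h = np.zeros(hidden)
    x = np.zeros(1)
    t0 = time.perf_counter()
    for _ in range(steps):
        h = gru_step(x, h, Wx, Wh, b)
        x = h[:1]  # feed one value back in: this serial dependency
                   # is what prevents batching across time steps
    elapsed = time.perf_counter() - t0
    samples_per_sec = steps / elapsed
    rtf = samples_per_sec / 16000.0  # real-time factor at 16 kHz
    return samples_per_sec, rtf

if __name__ == "__main__":
    sps, rtf = benchmark()
    print(f"{sps:.0f} samples/s -> {rtf:.2f}x real-time at 16 kHz")
```

Because each step depends on the previous hidden state, the loop cannot be parallelized over time, which is why sparsifying the GRU or fusing the whole step into one custom kernel (as in WaveRNN-style implementations) is the usual route to real-time speed.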

@bshall bshall added the question Further information is requested label Aug 22, 2019