
Running Video Capture Demo on Windows PC #13

Open
Greendogo opened this issue Jul 20, 2022 · 10 comments

Comments

@Greendogo

Hey there, could you provide some instructions for setting up and running inference with a webcam on a Windows PC?

I'm getting stuck at the 'tvm' part.

@sushil-bharati

Yes, this would be very helpful for testing the algorithm on various devices running Windows.
It would also be helpful if you could give more information on how to enable/disable GPU device(s).
Thanks

@kevkid

kevkid commented Jul 22, 2022

Same here. Running the Jetson demo fails with ModuleNotFoundError: No module named 'tvm'

@lmxyy
Collaborator

lmxyy commented Jul 22, 2022

The nano_demo is tested on Jetson Nano with TVM support. If you are using a Jetson Nano, you can follow this guide to install TVM. If you are using other devices, @MemorySlices, could you adapt the TVM demo into a plain PyTorch one for a more general demo?
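Until an adapted demo lands, here is a minimal sketch of what a plain-PyTorch capture loop could look like. Everything in it is an assumption rather than the repository's actual code: the placeholder model, the 448x448 input size (taken from a later comment in this thread), and the simple [0, 1] normalization. Substitute the real network and preprocessing when adapting it.

```python
import numpy as np
import torch
import torch.nn as nn

def preprocess(frame: np.ndarray) -> torch.Tensor:
    """Turn an HWC uint8 frame into a normalized 1xCxHxW float tensor."""
    x = torch.from_numpy(frame).float() / 255.0  # HWC, values in [0, 1]
    return x.permute(2, 0, 1).unsqueeze(0)       # -> 1xCxHxW

# Placeholder model: swap in the repository's actual network here.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU())
model.eval()

# Stand-in for a webcam frame; with OpenCV you would instead do:
#   cap = cv2.VideoCapture(0)   # default webcam device on Windows
#   ok, frame = cap.read()
frame = np.zeros((448, 448, 3), dtype=np.uint8)

with torch.no_grad():
    out = model(preprocess(frame))
print(out.shape)  # torch.Size([1, 8, 448, 448])
```

Wrapping the last three lines in a `while` loop over `cap.read()` gives the live demo; no TVM is needed, at the cost of the slower CPU timings discussed below.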

@sushil-bharati

@lmxyy Do you know if the models are CPU-friendly? Do we "require" a GPU to run them optimally?
I tried it in my CPU-only environment and it takes ~1.96 s to process a frame (448x448x3). Am I doing something wrong?

@lmxyy
Collaborator

lmxyy commented Jul 22, 2022

The model should be CPU-friendly: we also include some results on Raspberry Pi, where it only takes ~100 ms. But if you run the PyTorch model directly on CPU, your result is reasonable, as PyTorch's CPU backend is not well optimized.

@sushil-bharati

Thank you, @lmxyy, for the prompt response.
That explains why I am getting such slow speeds; I am indeed running the model(s) with PyTorch's CPU backend.
So, is there a way I can run the optimized model(s) in a CPU-only environment, or is that out of scope?

@lmxyy
Collaborator

lmxyy commented Jul 24, 2022

You could try TVM to optimize for the CPU backend, but I think this will cost you much more time...

@kevkid

kevkid commented Jul 28, 2022

Hi @sushil-bharati, would it be possible to share how you got it running with the PyTorch CPU backend? I tried model(img) and got:

conv2d() received an invalid combination of arguments - got (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)

Thank you
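For what it's worth, the traceback above is PyTorch complaining that a numpy.ndarray reached conv2d(): nn modules expect a torch.Tensor in NCHW layout, so the frame has to be converted before the forward pass. A minimal sketch of that conversion, using a stand-in conv layer (the 448x448x3 frame shape is taken from the earlier comment; the layer is not the repository's actual model):

```python
import numpy as np
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # stand-in layer, not the repo's model

img = np.random.rand(448, 448, 3).astype(np.float32)  # HWC numpy frame

# conv(img) raises the error above: conv2d() needs a Tensor, not a numpy.ndarray.
x = torch.from_numpy(img)   # zero-copy view of the numpy data
x = x.permute(2, 0, 1)      # HWC -> CHW
x = x.unsqueeze(0)          # add the batch dimension: 1x3x448x448

with torch.no_grad():
    out = conv(x)
print(out.shape)  # torch.Size([1, 16, 448, 448])
```

If the frame comes from OpenCV as uint8, also cast and scale it (e.g. `.float() / 255.0`) before the permute, and apply whatever normalization the model was trained with.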

@731076467

Hello, I'd like to ask why the import "from scheduler import warmup designer" in the dist_train file fails. I can't find this scheduler module; what's the reason?

@MemorySlices
Collaborator

MemorySlices commented Aug 2, 2022 via email


6 participants