Hi all,
I want to know: if we write the inference plugin for deep learning models in Python instead of C++, what is the difference in speed?
As we all know, C/C++ is faster than Python, and that holds when we implement algorithms from scratch. But here we want to implement inference code for a model that has a C/C++ backend with a Python interface — that is, writing a Python GStreamer plugin while the main computation runs in a C/C++ backend (like TensorFlow). How different would the speeds of a C/C++ plugin and a Python plugin be in that case?
Hi @PythonImageDeveloper. I do not have concrete numbers to give you, but I agree with your statement: if the algorithms are written in C/C++ underneath, a Python interface shouldn't be a significant bottleneck. This holds as long as you can transparently share memory between C/C++ and Python. As a reference, we have some elements written in Python that perform inference underneath (PyTorch in our case), and they work fine.
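To make the intuition concrete, here is a minimal micro-benchmark sketch (not GStreamer code — just an analogy) comparing the same reduction done in a pure-Python loop vs. CPython's built-in `sum()`, which loops in C. The point it illustrates: when the heavy work runs in native code, the thin Python call layer adds only a small constant overhead per call, which is the same situation as a Python plugin dispatching each buffer to a C/C++ inference backend.

```python
import timeit

# Illustrative data set; large enough that the per-element work dominates
# the one-time Python call overhead.
data = list(range(1_000_000))

def python_loop():
    # All the work happens in the Python interpreter, element by element.
    total = 0
    for x in data:
        total += x
    return total

def c_backed():
    # One Python call; the actual loop over the list runs in C inside
    # CPython's built-in sum().
    return sum(data)

t_py = timeit.timeit(python_loop, number=5)
t_c = timeit.timeit(c_backed, number=5)

# Same result either way; only the time spent in interpreted code differs.
assert python_loop() == c_backed()
print(f"pure-Python loop: {t_py:.3f}s  vs  C-backed sum(): {t_c:.3f}s")
```

In a real pipeline the ratio is even more favorable to Python bindings: a single buffer of video going through a deep-learning backend costs milliseconds of native compute, so the microseconds of Python dispatch per buffer disappear in the noise, provided (as noted above) buffers are shared by reference rather than copied between the two runtimes.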
In the specific case of GstInference, we prefer to stay at a lower level for portability purposes. If we stay in C/C++, it is easy for a user to write an app in Python, Rust, or C#; the reverse is not true.