Small memory leak when using only src pad #320
==1800== 59,360 (14,840 direct, 44,520 indirect) bytes in 371 blocks are definitely lost in loss record 3,065 of 3,070
Hi @dsteger, I'm not able to reproduce this issue using the TensorFlow backend. Are you using the latest 0.10 release?
Hi @rrcarlosrodriguez, thank you for trying to reproduce this. The memory leak is quite slow and takes some time to notice. Did you try running the pipeline with valgrind? "valgrind --tool=memcheck --leak-check=full gst-launch-1.0 mypipeline". I'm curious whether you see any loss reported. I'm using vanilla v0.10 gst-inference and an empty backend based on v8.0 which just creates prediction data and returns it.
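For reference, a minimal sketch of the valgrind run suggested above. The shortened videotestsrc pipeline is a stand-in for the reporter's full pipeline, and num-buffers bounds the run so the pipeline exits and valgrind can print its leak summary:

```shell
# Sketch, assuming valgrind and gst-launch-1.0 are on PATH.
# The pipeline below is a placeholder, not the reporter's full pipeline.
VALGRIND="valgrind --tool=memcheck --leak-check=full --num-callers=20"
PIPELINE="videotestsrc num-buffers=300 ! videoconvert ! fakesink"

# Print the exact command so it is captured in logs.
echo "$VALGRIND gst-launch-1.0 $PIPELINE"

# Uncomment to actually run it:
# $VALGRIND gst-launch-1.0 $PIPELINE
```

Bounding the run matters: "definitely lost" records are only tallied once the process exits, so an unbounded live pipeline never reaches the report.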
There seems to be a small memory leak of roughly 100 KB/s when running the pipeline below. I'm currently looking into this but thought I would post the issue ahead of a full diagnosis. The one delta in this pipeline is our custom backend, but we didn't notice a leak with the 0.6 release of the plugin.
gst-launch-1.0 videotestsrc ! \
video/x-raw, width=1920, height=1080, format=RGB, framerate=60/1 ! tee name=t1 \
t1. ! queue max-size-buffers=3 leaky=no ! videoscale ! videoconvert ! \
video/x-raw, width=416, height=416, format=RGBx, framerate=60/1 ! \
net1.sink_model \
tinyyolov2 name=net1 model-location=$MODEL_LOCATION backend=$BACKEND \
net1.src_model ! videoconvert ! perf ! queue ! fakesink -v
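As a lighter-weight cross-check than valgrind, GStreamer's built-in "leaks" tracer (available since GStreamer 1.8) logs leaked GstObjects and GstMiniObjects at pipeline shutdown. A sketch, again using a shortened stand-in pipeline rather than the full one above:

```shell
# Sketch: enable the GStreamer "leaks" tracer via environment variables.
# Assumption: GStreamer >= 1.8; the pipeline below is a placeholder.
export GST_TRACERS="leaks"
export GST_DEBUG="GST_TRACER:7"

# Print the command that would be run with the tracer active.
echo "GST_TRACERS=$GST_TRACERS gst-launch-1.0 videotestsrc num-buffers=300 ! fakesink"

# Uncomment to run; leaked objects are logged when the pipeline shuts down:
# gst-launch-1.0 videotestsrc num-buffers=300 ! fakesink
```

Because the tracer hooks object refcounting inside GStreamer itself, it can point at which element holds the leaked buffers, which valgrind's allocator-level view cannot.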