High passive memory usage (leak?) since version 4.3 (Upgrade to Ubuntu 22.04 and Transmission 3.00) #2469
Comments
+1 for me, 28 torrents, container has only been up for 2 days and is currently using 2.97 GB out of my system's 4.00 GB RAM |
I also have this problem. |
Same problem. Is there a way to update the version of transmission used, please? |
Use the beta branch with Transmission 4.0.0 beta-2 - this works for me. And this command to pull: |
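The exact pull command was not preserved in this capture; assuming the standard image name and the beta tag referred to above, it would presumably look like this:

```bash
# Assumed reconstruction - the original command was not captured in this thread.
# Pull the beta tag of the image (Transmission 4.0.0 beta-2 at the time):
docker pull haugene/transmission-openvpn:beta
```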
Thanks a lot @ameinild, unfortunately Transmission 4.00 is banned on about a third of the private trackers that I use, so this isn't an option, but thank you for the suggestion. |
Yeah, I know it's an issue that the Transmission beta is banned on private trackers. In this case, I would suggest instead reverting to an earlier Docker image based on Ubuntu 20.04 and Transmission 2.9X, like 4.2 or 4.1. Or possibly the Dev branch - I don't know if this fixes the issue yet. But there should be other possibilities. 😎 There are actually several Docker tags to try out: |
@ameinild I'm still getting the memory issue :( I'm now at 5.43 GB after running 2 days with one single torrent :/ |
I have no idea - the Beta version works perfectly for me on Ubuntu. You could try rolling back to an earlier release. Else, wait for a stable version of Transmission where they fix the memory leaks that are clearly present. 😬 |
Weirdly, this morning RAM is back to 51MB. Go figure... 🤔 |
Thanks for sharing! I wonder whether we should move this convo to an issue upstream (i.e. on the transmission repo)? |
It's strange. It seems the memory leak issue is hitting randomly for different versions of Transmission and on different OSes. On Ubuntu 22.04 I had no issue with Trans 2.94, a huge memory leak on Trans 3.00, and no problem again on Trans 4.0-beta. This would make it very difficult to troubleshoot, I guess. 😬 |
I am also using Ubuntu 22.04. After it quickly jumped back up to over 20GB, I instituted a memory limit through Portainer and that has helped. Now it doesn't go above whatever limit I set. I am not sure if it will affect the functionality of the container though. Guess we'll see. I also switched back from beta to latest since that didn't fix it anyway and I would rather run a stable version. |
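For reference, the same kind of hard cap can be set outside Portainer with plain Docker commands. This is only a generic sketch; the container name and the 1g value are placeholders, not settings from this thread:

```bash
# Cap an already-running container at 1 GB of RAM and 1 GB of RAM+swap.
# "transmission-openvpn" and "1g" are illustrative placeholders.
docker update --memory 1g --memory-swap 1g transmission-openvpn
```

In a docker-compose file the equivalent is a mem_limit entry, which comes up later in this thread.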
I'm running Linux Mint 20.3 with v4.3.2 of the container. I haven't tried alternate versions of Transmission, but I became aware of this issue when I noticed high swap usage on the host. After running for about a week with 17 torrents, the container was using 31.5GB of memory and another 6GB of swap. I've been using a limit through Portainer for the last several days without any major issues. I have seen its status listed as 'unhealthy' a couple times, but it resumed running normally after restarting via Portainer. |
Same issue here. I'm not sure what changed; it started doing this recently. The image I'm using was pulled 2 months ago. Either I didn't notice it until now, or something changed... |
I was hoping that Transmission 4.0.0 would be our way out of this, so I'm troubled to hear that some are still experiencing issues with it 😞 The release is now out 🎉 https://github.com/transmission/transmission/releases/tag/4.0.0 🎉 and already starting to get whitelisted on private trackers. But if there are still memory issues then we might have to consider doing something else. If this could get fixed upstream, or we could narrow it down to a server OS and then report it, then that would be the best long term solution I guess. If not, the only thing that comes to mind is changing the distro of the base image to see if that can have an effect, before we start automatically restarting Transmission within the image or other hackery 😬 |
A couple weeks ago I noticed the web interface getting unresponsive when the container was listed as unhealthy and set up a cron job to restart the container every 4 hours. Initially I tried longer intervals, but the container would go unhealthy as soon as the 1GB memory limit was reached and essentially stop working. With a 4 hour restart window I'm able to catch the container before it goes unresponsive, and it's been working great. If it would be helpful, I can adjust the restart interval and post container logs. |
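A minimal version of that workaround, with the container name and schedule as assumed placeholders rather than values taken from this comment, would be a crontab entry like:

```bash
# Add via `crontab -e` on the host: restart the container every 4 hours
# ("transmission-openvpn" is an assumed container name).
0 */4 * * * /usr/bin/docker restart transmission-openvpn
```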
The latest version with the Transmission 4.0.0 release still works well for me on Ubuntu 22.04 server. 👍 |
Saw this thread on transmission itself about high memory usage, even with 4.0; may be pertinent: transmission/transmission#4786 |
Very curious to follow that thread @theythem9973. Hopefully they'll find something that can help us here as well 🤞 But this issue was reported here when upgrading to 4.3 of this image which is using a much older build of Transmission, and we also previously ran v3.00 of Transmission under alpine without issues (tag 3.7.1). So, I'm starting to doubt the choice of Ubuntu 22.04 as a stable option 😞 We moved from alpine for a reason as well, so I'm not sure if we want to go back there or go Debian if Ubuntu doesn't pan out. Before we change the base image again I'm just curious if there's a possibility to solve this by rolling forward instead. |
In addition to the |
I see that 4.0.1 is released with a possible fix in Transmission. I'll make a new build with this on the beta branch. |
The latest beta with Transmission 4.0.1 is (still) working fine for me on Ubuntu 22.04 and Docker 23.0.1. 👍 |
I'm running the latest Transmission 4.0.1 since last week on two containers with 5/10 GB limits. The larger container is fairly constant with 50-ish seeding torrents, always around 7-8 GB used over the last 4 months of stats I have (since older versions as well). The second one is my main download container, carrying between 0 and 10-ish torrents, and it seldom goes above 2 GB. |
Same problem here with the 4.0.1 version. After restarting the container, everything went back to normal. I have modified my docker-compose file to set a RAM limit just in case. |
Same for me on Synology DSM 7.1.1 and latest Docker image. Always maxing out the available memory. |
Been running Focal for almost 3 weeks and it's looking good. Below you can see my memory settle down halfway through week 9. However, bizarrely, the container thinks it's chewing up 10GB of mem when the total system is barely using 2GB. Maybe it's all cache and I've just never looked too closely before. Anyway, Focal looks good for me on Ubuntu 22.04 and Docker 23.0.1. |
Unfortunately, I don't have that info - not very useful I know! I will report back later once it's been running for a few hours. |
I'm now running |
After a day, the latest branch is running around 600MB, so not crazy. For some comparison, I have a version 3.3 container at 350MB with the same seeds etc. I assume the difference is due to 3.3 running Alpine. |
I've been using focal for a couple weeks now. Previously I was using "latest" (I don't know the exact version). focal has been great - it's sitting pretty at ~700 MB, when previously it'd grow to upwards of 18 GB until it hit swap / crashed. |
Glad to hear it @theythem9973 👌 We're on to something 😄 Are you also up for testing with the newest 5.x release? |
Latest version is drastically reducing memory usage. I'm running at 88 MB after 2 days of uptime... Case closed imo |
FWIW, I am running latest and still have this issue. Hitting my 4 GB limit I have set through Portainer. |
`latest` is a bit of a floating reference. Do you have logs showing the revision? Or have you double checked that you've pulled lately? Can you change to tag 5.0.2 just to be sure? |
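One generic way to check exactly what a floating tag like latest resolves to is to compare image digests with standard Docker commands (the container name below is an assumed placeholder, not one from this thread):

```bash
# Digest of the locally pulled image behind the "latest" tag
docker image inspect haugene/transmission-openvpn:latest \
  --format '{{index .RepoDigests 0}}'

# Image ID actually used by the running container
# ("transmission-openvpn" is an assumed container name).
docker inspect transmission-openvpn --format '{{.Image}}'
```

If the digest matches the one published for the 5.0.2 tag on Docker Hub, the container is effectively already running that release.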
I am using the tag: haugene/transmission-openvpn:latest. I just tried to pull again and nothing changed. Portainer is reporting that the image is up to date. I poked around the logs but didn't see anything that jumped out at me and said that I was using latest, but I am fairly confident I am using https://hub.docker.com/layers/haugene/transmission-openvpn/latest/images/sha256-df0b4b4c640004ff48103d8405d0e26b42b0d3631e35399d9f9ebdde03b6837e, given that Portainer says what the container is using is the most up to date. I swapped to 5.0.2 and now Portainer has the same image as being tagged for both 5.0.2 and latest, so it's the same image whether I change to 5.0.2 or use latest. I will leave it as 5.0.2 and monitor, but I suspect it will exhibit the same behavior since the actual image being used didn't change. Right now it's at ~600MB and every few seconds it is going up by ~30 MB. EDIT: I looked at the logs and see |
Sounds like you had the correct image all along then. So it will probably use a lot of memory now as well. What OS and version are you running, and which Docker version? |
OS is Ubuntu 22.04 LTS. Docker 23.0.3. And after restarting the container a couple hours ago it's back up to my Portainer memory limit (4GB). |
Hey @haugene! Yeah, I do think we're onto something! I don't think I'm quite ready to try '5.0.2' yet, since it's making the jump to Transmission 4.0 (although 4.0.3, which skipped some growing pains). Surprisingly, I'm pretty luddite about these things. Let me mull it over during the weekend and read more about 4.0. Thanks for everything you all do! |
My problems were also resolved using the focal tag. I just downloaded the 5.0.2 version and will give that a try and share my results. Update: No problems so far, all looks good. |
Having the same problem with :latest on a Synology with the latest DSM; it's just using all the available memory. |
Please try the :focal branch |
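Switching tags means pulling the focal image and recreating the container from it. A rough sketch with plain Docker commands; the container name is a placeholder and your usual run flags or compose settings still apply:

```bash
# Pull the Ubuntu 20.04 (focal) based image
docker pull haugene/transmission-openvpn:focal

# Remove the old container, then recreate it from the new tag
# ("transmission-openvpn" is an assumed name; reuse your own run flags,
# or change the image: line in docker-compose and run `docker compose up -d`).
docker stop transmission-openvpn && docker rm transmission-openvpn
```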
I had random server crashes on a dedicated Hetzner box for a while; the timing of some could be associated with Transmission activity, but I didn't investigate the RAM usage then. I updated to :5.0.2 specifically, and some time after that I noticed ~20GB memory usage by this container, and set mem_limit: 4g in compose. The crashes stopped, but the usage level is always at 4GB now. Looks like I'm unable to switch to :focal because it gives me "Options error: Unrecognized option or missing or extra parameter(s) in /etc/openvpn/nordvpn/ch403.nordvpn.com.ovpn:22: data-ciphers (2.4.7)" and a container crash loop. Can I try any other branch in the meanwhile? |
I just noticed I had one torrent throwing the error "file name too long" and thought maybe it's a trigger, but it's not. Removing the bad torrent and switching back to … This is on DSM 7.1.1-42962 Update 5. |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. |
Anyone tried the latest branch with Transmission 4.0.4 to see if the issue still persists? I'm reluctant, given that the focal branch works without issue. Update: Running this version now, all looks good. |
I'm running 4.0.4 and it seems stable. Nothing crazy like I posted above. I see 10GB cache and 65MB memory (according to Portainer) after running 5 hours, and it hasn't changed at all in the last 15 minutes. Synology also reports very low memory usage and, most of all, it's flat, it doesn't increase. |
Yeah, I think for now this issue can be closed. If anyone encounters similar issues, please feel free to comment and we can discuss whether we re-open or create a new thread. |
I don't have any data (yet), but I had to restart my container recently after it had eaten through my RAM and swap. I will try to analyze it if it happens again. |
I am continuing to have this issue, running latest and DSM 7.2.1-69057. When I tried the focal branch as suggested above, I got the same error as enchained ("Options error: Unrecognized option or missing or extra parameter(s) in /etc/openvpn/nordvpn/ch403.nordvpn.com.ovpn:22: data-ciphers (2.4.7)") and a container crash loop. |
Still got this on 4.3.2 running in k3s with ~5k torrents, uh |
Is there a pinned issue for this?
Is there an existing or similar issue/discussion for this?
Is there any comment in the documentation for this?
Is this related to a provider?
Are you using the latest release?
Have you tried using the dev branch latest?
Docker run config used
Current Behavior
The Transmission Container uses over 10 GB of memory after running for 10 days with around 25 torrents.
Expected Behavior
I expect the container to not use over 10 GB of memory when only seeding a couple of torrents at a time.
How have you tried to solve the problem?
Works without issue for version 4.2 (Ubuntu 20.04 and Transmission 2.X)
Log output
HW/SW Environment
Anything else?
There appears to be a fix in a later version of Transmission 3, mentioned on the Transmission GitHub:
transmission/transmission#3077
It appears this Nightly build fixes the issue.