Improve performance & lower memory usage #175
@DiegoRBaquero Agreed, this is important. From your testing, do you know which objects are using the most memory?
@feross Took a heapdump: https://webseed.btorrent.xyz/heapdump-1587638310.868854.heapsnapshot Other ways to approach this problem:
Tested uWS and it was a HUGE improvement (from 450MB to 30MB RAM for 800+ peers). The only thing lost is the real IP address (which is never used and actually protects peers' privacy) and the full headers of the request (again, this protects peers' privacy from tracker operators, though it would not allow filtering or blocking).
That looks interesting! Nice work! Have you looked at the heap when using µWS?
@yciabaud I didn't even bother, but I could get a heap snapshot if you'd like to see if there's anything else we can improve. We are only left with clearing 1-dead-peer-only zombie infoHashes (#164). By the way, could I get comments on https://medium.com/@diegorbaquero/%C2%B5ws-as-your-next-websocket-library-d34209686357#.twgf337zc? Thanks
@DiegoRBaquero Interesting. Did you try disabling permessage-deflate in both? Another thing worth trying is dropping the reference to the upgrade request once the connection is established:

```js
wss.on('connection', (ws) => {
  ws.upgradeReq = null;
});
```

I'm actually not sure if this works as expected.
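Putting the two suggestions together, a configuration sketch for the `ws` server might look like the following (the `perMessageDeflate` option is real `ws` API; the port and server wiring are illustrative, and this fragment assumes the `ws` package is installed):

```javascript
const WebSocket = require('ws');

// Disable permessage-deflate so no zlib contexts are allocated per socket,
// which is a significant per-connection memory saving.
const wss = new WebSocket.Server({
  port: 8080,
  perMessageDeflate: false
});

wss.on('connection', (ws) => {
  // Drop the reference to the HTTP upgrade request so it can be
  // garbage collected (assuming nothing else retains it).
  ws.upgradeReq = null;
});
```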
I also see that the
@lpinca I hadn't tried either of the options you describe; had I seen them in the docs for better performance, I would have tried them earlier. I'll be monitoring uWS performance over the weekend and will PR the change on Monday if everything is cool.
I'm not sure because there might be other references that prevent the upgrade request from being garbage collected.
I've never said that
This makes me sad... Did you actually read why we can't do that? I guess not. This is what we are doing now: https://github.com/primus/primus/blob/uws/transformers/uws/server.js#L92-L95. If you know how to improve it further given our constraints, please tell us or send a PR. You have banned me from uWebSockets, but I haven't banned you from Primus, because I think every contribution is valuable, even the ones you don't agree with and the ones that are not easy to handle. But I'm the dictator, ok. I've only suggested how to improve ws performance here.
@DiegoRBaquero Can we move forward on this? Do you still think we should switch to uWS? Did you test disabling permessage-deflate?
@DiegoRBaquero Done! Results from the last 12 hours are approximately 5x less memory consumption and 3x more network bandwidth.
@alxhotel Were there any restarts due to errors? I think I caused a restart when loading /stats, which evicts old peers. |
If you want a sweet spot between low memory usage and low network usage, you can spend some time optimizing the WebTorrent protocol. I assume it is based on JSON right now, and that is very inefficient compared to a binary format. I bet you can improve the message size a ton.
@alexhultman We could try switching back to raw bencoding, since that's what the other tracker implementations (HTTP and UDP) use. But we would need a transition period where we support both bencoding and JSON for several months until clients are updated.
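To illustrate the size difference being discussed, here is a minimal bencode encoder (strings, integers, lists, and dictionaries, with keys sorted as the BitTorrent spec requires) compared against JSON on a hypothetical announce-style message; the message fields are illustrative, not the actual tracker protocol:

```javascript
// Minimal bencode encoder sketch (not a full implementation; no Buffers/bytes).
function bencode(value) {
  if (typeof value === 'number') return `i${value}e`;            // integers: i42e
  if (typeof value === 'string') {
    return `${Buffer.byteLength(value)}:${value}`;               // strings: 4:spam
  }
  if (Array.isArray(value)) {
    return `l${value.map(bencode).join('')}e`;                   // lists: l...e
  }
  const keys = Object.keys(value).sort();                        // dict keys sorted
  return `d${keys.map(k => bencode(k) + bencode(value[k])).join('')}e`;
}

// Hypothetical announce-style message for size comparison.
const msg = { action: 'announce', info_hash: 'aaaaaaaaaaaaaaaaaaaa', left: 0 };
const json = JSON.stringify(msg);
const benc = bencode(msg);
console.log('JSON bytes:', json.length, 'bencode bytes:', benc.length);
```

Bencoding drops the quoting and punctuation overhead of JSON, and the gap widens further for binary fields like info hashes, which JSON must escape or base64-encode while bencode carries them as raw byte strings.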
@DiegoRBaquero Yep. See issue #205
I seeded a 4 GB file and it took almost 8 GB of memory.
Is this still relevant? If so, what is blocking it? Is there anything you can do to help move it forward?
Although we have made several improvements and cleared memory leaks, everything can always be improved.
In order for the webtorrent-webrtc-based projects to reach more users easily, the trackers need to be able to scale with greater performance.
At the time of writing, 1K connected users consume about 500 MB of RAM, which works out to around 0.45 MB per connected user (on BTorrent's tracker).