The shedding threshold increases if more memory is consumed within a sampling window. This can be due either to an increase in the request rate or to an increase in the amount of memory consumed per request.
If that is the case, it is OK that we don't GC very often and wait a bit longer. That will lead to longer pause times, since more memory will be consumed before a GC is triggered. That said, what happens if the trend changes and requests start consuming less memory?
Currently, a lot of requests would need to arrive before GC kicks in, and the pause time will probably stay the same as before (assuming pause time is proportional to the amount of memory to be released). GCI-Go is based on the amount of memory used to process requests rather than the total amount of memory consumed, and this value is right-bounded at 512MB, so collecting every time requests consume 512MB doesn't sound like a problem. Opening this issue so we keep this in mind.
It is important to note that the sampling window adjusts itself in both directions, depending on how much memory is consumed by a set of requests. Maybe we would like the shedding threshold to behave consistently as well? One solution would be to decrease the ST when sample time arrives and the ST hasn't been reached. This value would be left-bounded by the default ST, which is 50MB. A rough sketch of that idea follows below.
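To make the proposal concrete, here is a minimal sketch of what such an adjustment could look like. This is not the actual GCI-Go implementation; the function name `adjustST`, the halving/doubling policy, and the growth rule are illustrative assumptions. Only the 50MB default lower bound and the 512MB upper bound come from the discussion above.

```go
package main

import "fmt"

const (
	defaultST = 50 << 20  // 50MB: left bound (default shedding threshold)
	maxST     = 512 << 20 // 512MB: right bound on memory used per collection cycle
)

// adjustST returns the next shedding threshold given the current one and the
// memory consumed by requests during the last sampling window.
// Hypothetical policy for illustration: grow when the window reached the
// threshold, decay toward the default when it did not.
func adjustST(currentST, consumed uint64) uint64 {
	if consumed >= currentST {
		// Trend is upward: raise the threshold, capped at 512MB.
		next := currentST * 2
		if next > maxST {
			next = maxST
		}
		return next
	}
	// Sample time arrived without reaching the ST: decay the threshold so a
	// drop in per-request memory doesn't postpone GC for too long.
	next := currentST / 2
	if next < defaultST {
		next = defaultST
	}
	return next
}

func main() {
	st := uint64(256 << 20)
	// Requests got lighter: ST shrinks back toward the 50MB default.
	fmt.Printf("next ST: %dMB\n", adjustST(st, 10<<20)>>20)
}
```

With a rule like this the ST would converge back to the default after a few light sampling windows, which is the consistency question raised above; how aggressively to decay it (halving, fixed step, or something proportional to the observed drop) is open for discussion.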
@joaoarthurbm @thiagomanel ideas? thoughts?