add connection pool put back #571
Conversation
Hi @wuzuoliang, could you describe what issue this is trying to fix?
Just following up on this @wuzuoliang?
Something like this.
That's an example of a similar function, but what is the benefit of that?
Waiting for this PR.
I can't see the benefit of this, I'm afraid. Are you able to explain why this is needed?
The LIFO connection-get mode will lead to load imbalance.
Sorry, I don't follow. What do you mean by load imbalance?
We use Alibaba Cloud's Redis, which consists of Redis servers plus Redis proxies. When a client connects to cloud Redis, a proxy forwards the connection to the corresponding Redis server, and a cloud Redis instance usually has multiple proxies. The current redigo connection pool hands out connections in LIFO order: it always reuses the most recently returned connection first, so the proxies behind the connections at the front of the pool are much busier than the others. Changing to FIFO order would balance the load across the proxies more evenly.
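The difference being discussed can be sketched with two toy pools (hypothetical `lifoPool` and `fifoPool` types, not redigo's actual implementation): under LIFO the same most-recently-returned connection is handed out again and again, while FIFO rotates through every idle connection.

```go
package main

import "fmt"

// lifoPool hands out the most recently returned connection first (stack).
type lifoPool struct{ idle []int }

func (p *lifoPool) get() int {
	c := p.idle[len(p.idle)-1]
	p.idle = p.idle[:len(p.idle)-1]
	return c
}
func (p *lifoPool) put(c int) { p.idle = append(p.idle, c) }

// fifoPool hands out the least recently returned connection first (queue).
type fifoPool struct{ idle []int }

func (p *fifoPool) get() int {
	c := p.idle[0]
	p.idle = p.idle[1:]
	return c
}
func (p *fifoPool) put(c int) { p.idle = append(p.idle, c) }

func main() {
	// Three idle connections, each pinned to a different proxy.
	l := &lifoPool{idle: []int{1, 2, 3}}
	f := &fifoPool{idle: []int{1, 2, 3}}

	// Sequential get/put cycles: LIFO keeps reusing one connection,
	// FIFO cycles through all of them.
	var lifoUsed, fifoUsed []int
	for i := 0; i < 3; i++ {
		c := l.get()
		lifoUsed = append(lifoUsed, c)
		l.put(c)
		c = f.get()
		fifoUsed = append(fifoUsed, c)
		f.put(c)
	}
	fmt.Println("LIFO used:", lifoUsed) // [3 3 3]
	fmt.Println("FIFO used:", fifoUsed) // [1 2 3]
}
```

Under sequential load, the LIFO pool drives all traffic through connection 3 (one proxy), while the FIFO pool spreads it across all three.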
Thanks for the details. It seems the unbalanced behaviour is actually an issue with the LB that chooses which proxy the client connects to. The LIFO nature of the pool is required to ensure unused connections are freed up when no longer needed; without it, a spike in connections can never be reduced. LIFO should not adversely impact the setup you described, as long as the LB is doing its job correctly. So the question is: what is doing the load balancing between the client and the three proxies?
The LB decides which proxy to use when the connection is created; once established, the proxy is never replaced. So LIFO mode makes the connections at the front of the pool take most of the work.
If I understand the description of the problem correctly, the issue is with the proxy, so that's where the fix should be made. This is because, as mentioned, changing the pool to FIFO breaks connection clean-up, so it's not something we should do.
Just add an option and let the user decide.
I may not have explained clearly; please see redis/go-redis#1819 for details.
The problem with that is it will break connection clean-up, causing a different problem.
I understand the problem: in short, LIFO results in unbalanced use of the connections in the pool, but this is by design, to ensure that old connections are expired. That said, if the pool is seeing load that requires multiple concurrent connections, and the LB created those connections evenly across the servers to start with, you should see relatively consistent server use too. If the connections aren't created evenly, then switching to FIFO won't solve the problem; and if the load doesn't demand multiple concurrent connections, I would argue there's no need for multiple servers anyway. As mentioned, switching to FIFO breaks connection clear-down, leaving the pool unable to shrink back from its peak size, which is a nasty side effect. This is actually something I fixed in database/sql too. In your case, what is doing the load balancing, and what is your access pattern?
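The clear-down argument can be illustrated with a small deterministic simulation (hypothetical code, not redigo's implementation). Suppose a spike leaves five idle connections but steady-state load needs only one at a time: always taking the most recently returned connection lets the other four age past the idle timeout and get pruned, while rotating through all of them keeps every connection fresh, so the pool can never shrink back from its peak.

```go
package main

import "fmt"

const idleTimeout = 10 // ticks a connection may sit unused before pruning

type conn struct{ id, lastUsed int }

// simulate runs `ticks` steady-state cycles, each needing one connection,
// then prunes connections idle longer than idleTimeout and reports how
// many survive. takeNewest selects the LIFO (most recently used first)
// discipline; otherwise the least recently used connection is taken (FIFO).
func simulate(takeNewest bool, ticks int) int {
	// Five leftover connections from a traffic spike, all idle at tick 0.
	idle := []conn{{1, 0}, {2, 0}, {3, 0}, {4, 0}, {5, 0}}
	for t := 1; t <= ticks; t++ {
		var c conn
		if takeNewest {
			c, idle = idle[len(idle)-1], idle[:len(idle)-1]
		} else {
			c, idle = idle[0], idle[1:]
		}
		c.lastUsed = t
		idle = append(idle, c) // returned connection goes to the back
	}
	kept := 0
	for _, c := range idle {
		if ticks-c.lastUsed <= idleTimeout {
			kept++
		}
	}
	return kept
}

func main() {
	fmt.Println("LIFO keeps:", simulate(true, 100))  // only the hot connection survives pruning
	fmt.Println("FIFO keeps:", simulate(false, 100)) // all five stay fresh forever
}
```

With LIFO, four of the five spike connections go stale and are pruned; with FIFO, each connection is touched every few ticks, so none ever exceeds the idle timeout and the pool is stuck at its peak size.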
@stevenh the expiry of old connections could be handled by an option that sets a maximum connection lifetime and then force-closes the connection.
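For reference, redigo's `Pool` does expose a `MaxConnLifetime` field alongside `IdleTimeout`. A sketch of a pool configured this way, with purely illustrative values, so that even heavily reused connections are eventually closed and re-dialed (giving the LB a chance to re-balance them):

```go
package main

import (
	"time"

	"github.com/gomodule/redigo/redis"
)

// newPool returns a pool whose connections are closed once they exceed
// MaxConnLifetime, regardless of how recently they were used, while
// IdleTimeout still prunes connections that sit unused.
func newPool(addr string) *redis.Pool {
	return &redis.Pool{
		MaxIdle:     10,
		IdleTimeout: 240 * time.Second,
		// Illustrative value: re-dial any connection older than 30 minutes.
		MaxConnLifetime: 30 * time.Minute,
		Dial: func() (redis.Conn, error) {
			return redis.Dial("tcp", addr)
		},
	}
}
```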
There is also a special case: when connections rise suddenly and one proxy is very slow, LIFO will keep handing out the slow connection, and that state persists.
I'm not sure what you're trying to say here, could you clarify?
Regarding breaking connection clear-down and leaving the pool unable to reduce its size from peak: I think that can be resolved. #589
Slow processing means that connection always ends up at the busy end of the queue and is never eliminated as idle. So even if the LB creates new connections evenly across the server nodes, the worse-performing server nodes will accumulate more connections.
If you're talking about a single connection, or a small number of connections, that should have little to no impact, so I'm still struggling to understand the issue.
cancel