auto_eject_drop as an alternative to auto_eject_hosts #213
A boolean value that controls whether auto-ejected hosts should be dropped from the hash ring. If set to false, failing hosts stay in the ring and requests to them are answered immediately with an error instead of waiting for the timeout. Defaults to true. See twitter#213 for more information.
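For context, a pool using the proposed option might be configured like this. This is a sketch only: auto_eject_drop is the proposed key and does not exist in twemproxy; the other keys are standard nutcracker settings, and the pool name supercluster is taken from the comment below.

```yaml
supercluster:
  listen: 127.0.0.1:22121
  redis: true
  hash: fnv1a_64
  distribution: ketama
  timeout: 400
  auto_eject_hosts: true
  server_failure_limit: 3
  auto_eject_drop: false    # proposed: keep failing hosts in the ring, fail fast
  servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
```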
Seems that the problem with the mget and del commands still exists :-( Config: supercluster:
Not exactly the same as your use case, but see #608. An upcoming 0.6.0 is planned with the following:
Background:
If one or more redis servers go offline, and I hit the twemproxy pool that contains the failing redis server with a key that hashes to that server, I have to wait for the timeout before I get my reply and can try the other twemproxy pool. Every request that goes to this redis server has to wait. This is bad... and I cannot use auto_eject_hosts, since it changes the shards / hash ring.
I thought about implementing a new config option, called auto_eject_drop, with values true (current behavior, default) and false (my proposal), that lets you keep failing servers in the hash ring while answering their requests with an error right away. The way to do that is maybe changing req_forward() so that if a failing server gets asked, the message will not be enqueued and an error is returned instead (see the sketch below). I don't know if this is the best solution, but it is the one with minimal code change.
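A minimal standalone sketch of that fail-fast decision, assuming a hypothetical forward() helper and a failing flag that would be set once a server hits its failure limit; this is illustrative C, not twemproxy's actual req_forward() code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical server state: failing would be set after
 * server_failure_limit consecutive errors. */
struct server {
    const char *name;
    bool        failing;
};

struct reply {
    bool        enqueued;
    const char *detail;
};

/* Modeled on the proposed change to req_forward(): if the chosen
 * server is failing and auto_eject_drop is false, do not enqueue the
 * message; reply with an error right away. Otherwise enqueue as usual
 * (where a failing server makes the client wait for the timeout). */
static struct reply
forward(const struct server *srv, bool auto_eject_drop)
{
    if (srv->failing && !auto_eject_drop) {
        return (struct reply){ false, "SERVER_ERROR host is failing" };
    }
    return (struct reply){ true, "message enqueued to server" };
}

int
main(void)
{
    struct server s = { "redis-2:6379", true };

    /* Current behavior: the request is enqueued and waits out the timeout. */
    printf("auto_eject_drop=true : %s\n", forward(&s, true).detail);
    /* Proposed behavior: the client gets an error immediately. */
    printf("auto_eject_drop=false: %s\n", forward(&s, false).detail);
    return 0;
}
```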
Any thoughts?