It's my understanding that when a server in the twemproxy pool gets ejected, the other server in the pool should still be available for caching. It seems that when I take out memcached-1 only, the proxy itself becomes unavailable. If I take out memcached-2 from the pool, everything operates normally, except that there doesn't seem to be any indication in the logs that the server leaves or returns to the pool.
I have tested that both memcached servers are available directly. If I put one or the other memcached server by itself in the pool configuration, it's available through the proxy, but only memcached-1 is available when both are in the pool. I've tried ordering them differently and it doesn't seem to make a difference. A tcpdump only ever shows traffic to memcached-1 when both are in the pool. When nutcracker is restarted, I only see ARP traffic going to one of the two servers, never both.
To reproduce:
(nutcracker version 0.4.1 on centos 7)
/etc/nutcracker/nutcracker.yml
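(The original file contents were not included in the report. A hypothetical two-server pool matching the behavior discussed in this thread might look like the following; the pool name, addresses, and hash/distribution settings are assumptions, while `server_failure_limit: 3` and `server_retry_timeout: 30000` reflect values mentioned later in the thread.)

```yaml
# Hypothetical reconstruction -- not the reporter's actual config.
memcache:
  listen: 127.0.0.1:11211
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  server_retry_timeout: 30000
  server_failure_limit: 3
  servers:
    - 10.0.0.1:11211:1 memcached-1
    - 10.0.0.2:11211:1 memcached-2
```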
This sounds like expected behavior for the service: twemproxy does not retry failed requests against the remaining pool members. The client needs to respond appropriately to such failures, for example by retrying the request.

The "testing" key is mapped to a single server in the pool, which would explain why you can deactivate "memcached-2" with no apparent effect: "memcached-1" is selected to service the request. The pool is configured to eject hosts after 3 errors, so in the testing scenario you have provided, I would expect the 4th request for key "testing" to be routed to "memcached-2".
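The point about a key mapping to one server can be sketched as follows. This is an illustrative hash-based distributor, not twemproxy's exact ketama ring, but it shows the same effect: a given key deterministically selects one backend, so a single test key never exercises the other server until its target is ejected.

```python
# Illustrative sketch of deterministic key-to-server mapping.
# FNV-1a is one of the hash functions twemproxy supports; the modulo
# distribution here is a simplification of its actual ring-based schemes.

def fnv1a_64(data: bytes) -> int:
    """64-bit FNV-1a hash."""
    h = 0xcbf29ce484222325
    for b in data:
        h ^= b
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
    return h

def pick_server(key: str, servers: list) -> str:
    """Deterministically pick a server for a key."""
    return servers[fnv1a_64(key.encode()) % len(servers)]

servers = ["memcached-1:11211", "memcached-2:11211"]
# The same key always lands on the same server, run after run.
print(pick_server("testing", servers))
```

Because the mapping is deterministic, testing failover with a single key only exercises one backend; you need either multiple keys or an ejection event to see traffic reach both servers.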
Also, server_retry_timeout: 30000 means it will take 30 seconds before twemproxy attempts to reconnect; until those 30 seconds have elapsed, all traffic will be sent to the other server.
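The eject-and-retry timing described above can be sketched like this. The class and method names are hypothetical, not twemproxy internals; the sketch only models the interaction between `server_failure_limit` and `server_retry_timeout`.

```python
# Hypothetical model of twemproxy's host ejection and retry timing.
# Names are illustrative; only the timing semantics mirror the config knobs.

class Backend:
    def __init__(self, name, failure_limit=3, retry_timeout_ms=30000):
        self.name = name
        self.failure_limit = failure_limit      # server_failure_limit
        self.retry_timeout_ms = retry_timeout_ms  # server_retry_timeout
        self.failures = 0
        self.ejected_at = None

    def record_failure(self, now_ms):
        """Count an error; eject the host once the limit is reached."""
        self.failures += 1
        if self.failures >= self.failure_limit:
            self.ejected_at = now_ms

    def is_available(self, now_ms):
        """Ejected hosts become eligible again only after the timeout."""
        if self.ejected_at is None:
            return True
        return now_ms - self.ejected_at >= self.retry_timeout_ms

b = Backend("memcached-2:11211")
for _ in range(3):
    b.record_failure(now_ms=0)      # 3rd error ejects the host
print(b.is_available(now_ms=10_000))  # False: inside the 30 s window
print(b.is_available(now_ms=30_000))  # True: retry is now allowed
```

During that 30-second window the other server absorbs all traffic, which matches the behavior where the pool appears to "stick" to one backend after a failure.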
I think the planned heartbeat/failover patches in #608 may result in faster reconnections when a server recovers, once those changes are merged into twitter/twemproxy, though that may change before the planned 0.6.0 release.

If twemproxy didn't reconnect even after more than 30 seconds, the changes planned for 0.6.0 also refactor the reconnection logic significantly and may end up fixing that.
telnet 127.0.0.1 22122
ssh 10.10.10.33:
telnet console:
nutcracker logs for sequence: