Hey,
I'm playing around with kubemci to figure out whether it's a good match for the product I'm currently working on. I tried the zone-printer demo and then manually deleted the pod running in the cluster closest to me.
The result was that the service went down until the pod had restarted. Is this expected behaviour? I was hoping the traffic would fail over to another cluster.
Yes, traffic should fail over to another cluster. Maybe the pod was restarted before GCLB detected that it was down?
Can you try changing the health check configuration so that it detects failures faster?
You cannot use kubemci to modify it, but you can use gcloud or the Google Cloud Console directly to update the health check created by kubemci. #135 has some relevant discussion about this.
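As a rough sketch of what that looks like with gcloud: the health check name below is a placeholder, and depending on your setup kubemci may have created either a `health-checks` or a legacy `http-health-checks` resource, so list first to find the right one.

```
# Find the health check kubemci created (the name below is a placeholder).
gcloud compute health-checks list

# Tighten the probe so a dead backend is detected sooner:
# probe every 2s, mark unhealthy after 2 consecutive failures (~4s to detect).
gcloud compute health-checks update http zone-printer-hc \
    --check-interval=2s \
    --timeout=2s \
    --unhealthy-threshold=2
```

Note that more aggressive settings mean more probe traffic to your backends, so pick an interval that matches how fast you actually need failover.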
Many customers run multiple replicas in each cluster to mitigate this issue. Setting up cluster autoscaling and pod autoscaling helps as well; see the sketch below.
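A minimal example with kubectl and gcloud, assuming the demo Deployment is named `zone-printer` (adjust names and thresholds for a real workload):

```
# Run several replicas so losing one pod doesn't take the whole backend down.
kubectl scale deployment zone-printer --replicas=3

# Pod autoscaling: keep between 3 and 10 replicas, targeting 80% CPU.
kubectl autoscale deployment zone-printer --min=3 --max=10 --cpu-percent=80

# Cluster autoscaling on the GKE node pool, so new replicas always have
# somewhere to schedule (cluster and pool names are placeholders).
gcloud container clusters update my-cluster \
    --enable-autoscaling --min-nodes=1 --max-nodes=5 \
    --node-pool=default-pool
```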
Thanks for your reply, Nikhil. I'll look into updating the health check configuration and I'll report back if this solves the problem.
How fast can I expect failover to happen when a cluster goes down?
Also, I was wondering about cluster autoscaling and kubemci. Since you're recommending it, I suppose it's supported. How quickly will GCLB discover new nodes added to the cluster by the autoscaler?