Issue
I have a configuration following the example here.
When performing a node upgrade (via the Python API) to upgrade a node pool, the operation timed out and left me in the following state:

- 1 node gone in Kubernetes (kubectl). I have 2 node pools over 3 zones, so usually 6 nodes, but I was down to 5.
- 1 new compute instance (in the instance group), but not present in k8s.
- Operation DONE on the Google side, but with a statusMessage indicating a timeout.

Operation:

operation-1527064589070-xxxxxx UPGRADE_NODES us-central1-a vpc-pu-np-2 Timed out waiting for cluster initialization. Cluster API may not be available. DONE 2018-05-23T08:36:29.070342031Z 2018-05-23T10:22:06.118365098Z

And some describe output:

status: DONE
statusMessage: Timed out waiting for cluster initialization. Cluster API may not be
  available.
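For context, the upgrade was driven from Python against the GKE (container) REST API. A minimal sketch of the call and the operation polling follows, using the google-api-python-client discovery client; the project, cluster, and target version values are placeholders, not my exact configuration:

```python
# Minimal sketch of a node-pool upgrade via the container v1 API.
# Assumes google-api-python-client with Application Default Credentials.
# Project, cluster, and nodeVersion values are placeholders.
import time
from googleapiclient import discovery

container = discovery.build('container', 'v1')

# Start the node-pool upgrade (an UPGRADE_NODES operation).
op = container.projects().zones().clusters().nodePools().update(
    projectId='my-project',              # placeholder
    zone='us-central1-a',
    clusterId='my-cluster',              # placeholder
    nodePoolId='vpc-pu-np-2',
    body={'nodeVersion': '1.9.7-gke.0',  # placeholder target version
          'imageType': 'COS'},
).execute()

# Poll the operation; this is where I observed status DONE together
# with a "Timed out waiting for cluster initialization" statusMessage.
while True:
    op = container.projects().zones().operations().get(
        projectId='my-project', zone='us-central1-a',
        operationId=op['name']).execute()
    if op['status'] == 'DONE':
        print(op.get('statusMessage', 'OK'))
        break
    time.sleep(30)
```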
Fortunately, I had 2 clusters with everything in place minus one route, and their upgrades had gone swimmingly, as expected. So I narrowed the issue down to the following route:

module.nat.google_compute_route.nat-gateway

After deleting that route, I initiated the node-pool upgrade again and it picked up from where it left off (I think). The end result was successful.
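For anyone hitting the same state: the route can also be found and deleted through the Compute API. A minimal sketch in Python follows; the route name is whatever Terraform generated for module.nat.google_compute_route.nat-gateway, so the name below is a placeholder:

```python
# Sketch: find and delete the NAT gateway route via the Compute API.
# The project and route names are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# Routes are global resources; list them to spot the NAT gateway route.
routes = compute.routes().list(project='my-project').execute()
for r in routes.get('items', []):
    print(r['name'], r.get('tags', []), r.get('nextHopInstance', ''))

# Delete the offending route (returns a global operation).
compute.routes().delete(project='my-project',
                        route='nat-gateway-route').execute()
```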
I think (or am guessing) that the new node could not register with the Kubernetes master / API. Perhaps the node is not yet tagged or labeled, and hence some routes or firewall rules would not apply to the new node until it had registered with the k8s API?
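One way to check that hypothesis next time (a sketch, under the assumption that the route selects instances by network tag; the instance and route names are placeholders):

```python
# Sketch: check whether the NAT route's tag selector matches the new node.
# Project, zone, instance, and route names are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

route = compute.routes().get(
    project='my-project', route='nat-gateway-route').execute()
instance = compute.instances().get(
    project='my-project', zone='us-central1-a',
    instance='gke-vpc-pu-np-2-new-node').execute()

route_tags = set(route.get('tags', []))  # empty means "applies to all"
node_tags = set(instance.get('tags', {}).get('items', []))

# If the route selects by tag and the new node lacks that tag,
# traffic from the node will not use the NAT route.
print('route applies to node:',
      not route_tags or bool(route_tags & node_tags))
```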
Next time I will try to look a bit closer at the cause of the timeout, but I think it's possibly worth a note that routing may affect cluster upgrades.
Versions
Note: when upgrading, I was on google provider v1.12.0.
My version of this module / example was at commit dc3af16.
Reproduction