RFE: Report InitializerConfiguration with no running initializer #2

andrewjjenkins opened this issue Jan 12, 2018 · 0 comments

I tried to upgrade Istio in my cluster and got "kubectl apply" timeouts because deployments were hanging. It's my fault, but I think this might make a good vetter.

Here's what I did:

Original install:

  1. kubectl apply -f istio-auth.yaml (for istio 0.2.12)
  2. kubectl apply -f istio-initializer.yaml (for istio 0.2.12)

Upgrade:

  1. kubectl delete namespace istio-system
    ...wait for pods, deployments, services, namespace to be cleaned up...
  2. kubectl apply -f istio-auth.yaml (for istio 0.4.0)
    ... timeouts creating deployments ...
    ... no new deployments can be created ...

The problem is that "kubectl delete namespace istio-system" only deleted objects in that namespace. I had an InitializerConfiguration named "istio-sidecar" that was still around (it is cluster-scoped, not namespaced):

  apiVersion: admissionregistration.k8s.io/v1alpha1
  kind: InitializerConfiguration
  metadata:
    name: istio-sidecar
  initializers:
    - name: sidecar.initializer.istio.io
      rules:
        - apiGroups:
            - "*"
          apiVersions:
            - "*"
          resources:
            - deployments
            - statefulsets
            - jobs
            - daemonsets

But I did not have the istio-initializer deployment any more, so there was no running pod able to act on that registration and initialize new deployments. Thus, all my deployments hung.
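
In case it helps anyone else hitting this, here is roughly how I'd confirm and clean up that state with kubectl (the istio-sidecar / istio-initializer names are from my install above; adjust to yours):

  # Cluster-scoped, so it survived "kubectl delete namespace istio-system"
  kubectl get initializerconfigurations

  # The controller deployment that should handle it -- gone along with the namespace
  kubectl -n istio-system get deployment istio-initializer

  # Removing the leftover config unsticks new deployments
  # (roughly what "kubectl delete -f istio-initializer.yaml" would have done)
  kubectl delete initializerconfiguration istio-sidecar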

It's my fault: I should have run "kubectl delete -f" instead of just deleting the namespace, or done some other upgrade. However, it took me a while to figure out what had happened; it'd be cool if Istio Vet could tell me. I think the vetter would do the following (a rough kubectl sketch of these checks is below the list):

  1. If there is no InitializerConfiguration for sidecar.initializer.istio.io, the vetter is done; no warnings.
  2. Find the istio-initializer deployment that is handling sidecar.initializer.istio.io. If it does not have any pods running, then report a warning like:

Warning: Cluster has a sidecar InitializerConfiguration for sidecar.initializer.istio.io but no running controller pods. This means new deployments, statefulsets, jobs and daemonsets will hang in the uninitialized state.

  3. If that warning was rendered, go and check for any deployments (or statefulsets, jobs, daemonsets) that are uninitialized (like kubectl get deployment -n istio-system --include-uninitialized=true). Render a warning like:

Warning: Deployment foo-depl is in the uninitialized state. This may be because there are InitializerConfigurations that have no running controller pods.
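
A rough kubectl sketch of those three checks (just the logic, not a real vetter implementation; the istio-initializer deployment name and istio-system namespace are assumptions based on my install):

  #!/bin/sh
  # 1. Is anything registered for sidecar.initializer.istio.io at all?
  if ! kubectl get initializerconfigurations \
      -o jsonpath='{.items[*].initializers[*].name}' | grep -q 'sidecar.initializer.istio.io'; then
    exit 0  # no InitializerConfiguration -> vetter is done, no warnings
  fi

  # 2. Does the controller deployment have any pods available?
  available=$(kubectl -n istio-system get deployment istio-initializer \
      -o jsonpath='{.status.availableReplicas}' 2>/dev/null)
  if [ -z "$available" ] || [ "$available" = "0" ]; then
    echo "Warning: Cluster has a sidecar InitializerConfiguration for" \
         "sidecar.initializer.istio.io but no running controller pods."

    # 3. Anything already stuck? Uninitialized objects only show up with the flag,
    #    so anything listed here but not by a plain "kubectl get" is hung.
    kubectl get deployments,statefulsets,jobs,daemonsets --all-namespaces --include-uninitialized
  fi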

It would be great if this vetter could be generic for any hanging deployments. That would require understanding which controllers should be handling which InitializerConfigurations; it wasn't immediately obvious to me how to do that (it looks like the controllers call back to the API server, see all the new uninitialized deployments, and then just remove whatever initializers they handled from each object's pending list?).
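
If the generic version is doable, the link is presumably each stuck object's own metadata.initializers.pending list: an uninitialized object names the initializers it is still waiting on, and those names should map back to an InitializerConfiguration (and from there to whatever controller is supposed to serve it). Something like this shows the pending names, as far as I can tell:

  # Uninitialized objects carry the initializer names they are still waiting on
  kubectl get deployments --all-namespaces --include-uninitialized -o yaml | grep -B2 -A4 'pending:'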

Here's an example backtrace from kube-apiserver (1.7.10) for a hanging deployment:

I0112 21:30:57.929864       5 trace.go:61] Trace "Create /apis/extensions/v1beta1/namespaces/istio-system/deployments" (started 2018-01-12 21:30:27.928577864 +0000 UTC):
[361.213µs] [361.213µs] About to convert to expected version
[573.546µs] [212.333µs] Conversion done
[607.157µs] [33.611µs] About to store object in database
"Create /apis/extensions/v1beta1/namespaces/istio-system/deployments" [30.001249825s] [30.000642668s] END
I0112 21:30:57.929935       5 wrap.go:42] POST /apis/extensions/v1beta1/namespaces/istio-system/deployments: (30.001466159s) 504
goroutine 83497552 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc433aa6620, 0x1f8)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/log.go:219 +0xdd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc433aa6620, 0x1f8)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/log.go:198 +0x35
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*responseWriterDelegator).WriteHeader(0xc433ed1dd0, 0x1f8)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:135 +0x45
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x3c419df, 0x10, 0x7f51f9e849a8, 0xc424d01a70, 0x77d2260, 0xc423012cc0, 0xc42a946a00, 0x1f8, 0x77ae460, 0xc424faee80)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:91 +0x8d
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x7f51f9cadaf8, 0xc43284f3e0, 0x77da0e0, 0xc4210f47e0, 0x3c30a51, 0xa, 0x3c2b391, 0x7, 0x77d2260, 0xc423012cc0, ...)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:116 +0x29e
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x7f51f9cadaf8, 0xc43284f3e0, 0x77adfe0, 0xc424faee00, 0x77da0e0, 0xc4210f47e0, 0x3c30a51, 0xa, 0x3c2b391, 0x7, ...)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:135 +0x165
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(0xc429f59040, 0x77adfe0, 0xc424faee00, 0x77d2260, 0xc423012cc0, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:80 +0x10e
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x77d2260, 0xc423012cc0, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:478 +0x1131
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc433ed1d40, 0xc422aa2300)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1034 +0xd5
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc433ed1d40, 0xc422aa2300)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:104 +0x1cf
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc420b8f200, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0xb8d
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc420b8f200, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3c3add4, 0xe, 0xc420b8f200, 0xc420c2c000, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:153 +0x6e7
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc4206b1160, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        <autogenerated>:64 +0x86
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc42259c180, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:91 +0x122
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc422c4d480, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x3dd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc425913d50, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x72
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3c3e3cc, 0xf, 0xc421b5b710, 0xc425913d50, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:161 +0x301
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc4238df380, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        <autogenerated>:64 +0x86
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:50 +0x30c
net/http.HandlerFunc.ServeHTTP(0xc42310e500, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /usr/local/go/src/net/http/server.go:1942 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:47 +0x226a
net/http.HandlerFunc.ServeHTTP(0xc42310edc0, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /usr/local/go/src/net/http/server.go:1942 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:73 +0x2b0
net/http.HandlerFunc.ServeHTTP(0xc4223d7d10, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /usr/local/go/src/net/http/server.go:1942 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xef
net/http.HandlerFunc.ServeHTTP(0xc4238df3e0, 0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /usr/local/go/src/net/http/server.go:1942 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPanicRecovery.func1(0x77d22e0, 0xc433aa6620, 0xc42a946a00)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:41 +0x11b
net/http.HandlerFunc.ServeHTTP(0xc4238e0060, 0x7f51f98aa110, 0xc423012cb0, 0xc42a946a00)
        /usr/local/go/src/net/http/server.go:1942 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc4238e00c0, 0x77dbb20, 0xc423012cb0, 0xc42a946a00, 0xc422aa21e0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:89 +0x8d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:91 +0x1c0

logging error output: "{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Timeout: request did not complete within allowed duration\",\"reason\":\"Timeout\",\"details\":{},\"code\":504}\n"
 [[kubectl/v1.8.1 (darwin/amd64) kubernetes/f38e43b] 1.2.3.4:15479]