Kubectl apply errors with resource mapping not found for name: "cronjobs.batch" #159

Closed

Birdrock opened this issue Aug 18, 2022 · 3 comments · Fixed by #160

@Birdrock

/wave from the Korifi team.

When following the instructions at https://github.com/servicebinding/runtime/releases/tag/v0.1.1 and running kubectl apply -f https://github.com/servicebinding/runtime/releases/download/v0.1.1/servicebinding-runtime-v0.1.1.yaml, we get the following output:

namespace/servicebinding-system created
customresourcedefinition.apiextensions.k8s.io/clusterworkloadresourcemappings.servicebinding.io created
customresourcedefinition.apiextensions.k8s.io/servicebindings.servicebinding.io created
serviceaccount/servicebinding-controller-manager created
role.rbac.authorization.k8s.io/servicebinding-leader-election-role created
clusterrole.rbac.authorization.k8s.io/servicebinding-aggregate-role created
clusterrole.rbac.authorization.k8s.io/servicebinding-k8s-workloads-role created
clusterrole.rbac.authorization.k8s.io/servicebinding-manager-role created
clusterrole.rbac.authorization.k8s.io/servicebinding-metrics-reader created
clusterrole.rbac.authorization.k8s.io/servicebinding-proxy-role created
rolebinding.rbac.authorization.k8s.io/servicebinding-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/servicebinding-aggregate-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/servicebinding-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/servicebinding-proxy-rolebinding created
configmap/servicebinding-manager-config created
service/servicebinding-controller-manager-metrics-service created
service/servicebinding-webhook-service created
deployment.apps/servicebinding-controller-manager created
certificate.cert-manager.io/servicebinding-serving-cert created
issuer.cert-manager.io/servicebinding-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/servicebinding-admission-projector created
validatingwebhookconfiguration.admissionregistration.k8s.io/servicebinding-trigger created
validatingwebhookconfiguration.admissionregistration.k8s.io/servicebinding-validating-webhook-configuration created
error: resource mapping not found for name: "cronjobs.batch" namespace: "" from "https://github.com/servicebinding/runtime/releases/download/v0.1.1/servicebinding-runtime-v0.1.1.yaml": no matches for kind "ClusterWorkloadResourceMapping" in version "servicebinding.io/v1beta1"
ensure CRDs are installed first

This exits with code 1. If we run kubectl apply again, it finishes with exit code 0. If we then try to delete the servicebinding-runtime with kubectl delete -f https://github.com/servicebinding/runtime/releases/download/v0.1.1/servicebinding-runtime-v0.1.1.yaml, we get the following output:

namespace "servicebinding-system" deleted
customresourcedefinition.apiextensions.k8s.io "clusterworkloadresourcemappings.servicebinding.io" deleted
customresourcedefinition.apiextensions.k8s.io "servicebindings.servicebinding.io" deleted
serviceaccount "servicebinding-controller-manager" deleted
role.rbac.authorization.k8s.io "servicebinding-leader-election-role" deleted
clusterrole.rbac.authorization.k8s.io "servicebinding-aggregate-role" deleted
clusterrole.rbac.authorization.k8s.io "servicebinding-k8s-workloads-role" deleted
clusterrole.rbac.authorization.k8s.io "servicebinding-manager-role" deleted
clusterrole.rbac.authorization.k8s.io "servicebinding-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "servicebinding-proxy-role" deleted
rolebinding.rbac.authorization.k8s.io "servicebinding-leader-election-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "servicebinding-aggregate-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "servicebinding-manager-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "servicebinding-proxy-rolebinding" deleted
configmap "servicebinding-manager-config" deleted
service "servicebinding-controller-manager-metrics-service" deleted
service "servicebinding-webhook-service" deleted
deployment.apps "servicebinding-controller-manager" deleted
certificate.cert-manager.io "servicebinding-serving-cert" deleted
issuer.cert-manager.io "servicebinding-selfsigned-issuer" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "servicebinding-admission-projector" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "servicebinding-trigger" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "servicebinding-validating-webhook-configuration" deleted
Error from server (NotFound): error when deleting "https://github.com/servicebinding/runtime/releases/download/v0.1.1/servicebinding-runtime-v0.1.1.yaml": the server could not find the requested resource (delete clusterworkloadresourcemappings.servicebinding.io cronjobs.batch)

This also exits with code 1.

Interestingly, applying the servicebinding-runtime again to reinstall exits 0.

@scothis
Member

scothis commented Aug 19, 2022

Welcome @Birdrock, thanks for taking the reference implementation for a spin. Sorry you ran into an issue.

This is the classic problem of defining a CRD and a resource of that CRD in the same kubectl command. kubectl processes each resource in a file in order. If the cluster can reify the ClusterWorkloadResourceMapping CRD before it encounters an instance of a ClusterWorkloadResourceMapping, it will install without error. During uninstall, removing the CRD implicitly deletes all resources of that kind, so when kubectl later tries to delete the resource explicitly, it's already gone.

In both the apply and delete cases, running the command a second time is safe and will clean up any lingering state the first execution missed. But that's not a great experience.
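
As a stopgap against the single v0.1.1 manifest, the retry can be made explicit. A minimal sketch of that workaround (the CRD name comes from the output above; the wait timeout is arbitrary):

  MANIFEST=https://github.com/servicebinding/runtime/releases/download/v0.1.1/servicebinding-runtime-v0.1.1.yaml
  kubectl apply -f "$MANIFEST" || true   # first pass may fail on the ClusterWorkloadResourceMapping instance
  kubectl wait --for condition=established --timeout=60s crd/clusterworkloadresourcemappings.servicebinding.io
  kubectl apply -f "$MANIFEST"           # second pass creates whatever the first pass skipped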

There are a couple options:

  1. split the single yaml file into two (a core file and a mappings file), applied with two independent kubectl apply commands
  2. use a tool like kapp that can deploy everything all at once.

While I personally use kapp for everything (and it's what we use in CI and for builds from source), I'd lean towards the first option to make the kubectl experience first'ish class.
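
For reference, the kapp route from option 2 looks roughly like this: kapp orders CRDs ahead of the custom resources that use them and waits for them to be ready, so a single command covers install and a matching delete covers uninstall (the app name here is just an example):

  kapp deploy -a servicebinding-runtime -y -f https://github.com/servicebinding/runtime/releases/download/v0.1.1/servicebinding-runtime-v0.1.1.yaml
  kapp delete -a servicebinding-runtime -y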

@baijum
Contributor

baijum commented Aug 19, 2022

A third option is to use kubectl create -f <file.yaml>.

@scothis
Member

scothis commented Aug 19, 2022

Published in https://github.com/servicebinding/runtime/releases/tag/v0.2.0
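
Assuming the v0.2.0 release ships the core manifest and the workload resource mappings as separate files per option 1 (asset names below are illustrative; check the release page for the exact names), the install becomes two ordered apply commands:

  # asset names are illustrative, not confirmed against the release
  kubectl apply -f https://github.com/servicebinding/runtime/releases/download/v0.2.0/servicebinding-runtime-v0.2.0.yaml
  kubectl apply -f https://github.com/servicebinding/runtime/releases/download/v0.2.0/servicebinding-workloadresourcemappings-v0.2.0.yaml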
