
No CRD committed to kube-api #82

Open
adyanthaya17 opened this issue Apr 23, 2020 · 3 comments

Comments

@adyanthaya17

Hi,

I have run the prerequisites and am currently trying to run the 6th step: Use Helm 3 to install the istio-init chart. This will install all of the required Istio Custom Resource Definitions.

After running the helm install and trying to verify, the command shows 0 instead of the 29 CRDs listed in the document:

kubectl get crds | grep 'istio.io|cert-manager.io|aspenmesh.io' | wc -l
No resources found in default namespace.
0
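Note: plain grep treats | as a literal character, so the alternation in that command only counts matches when extended regular expressions are enabled. A minimal sketch of the same check with grep -E, assuming the document intends an alternation across those three API groups:

kubectl get crds | grep -E 'istio.io|cert-manager.io|aspenmesh.io' | wc -l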

Could you please guide me on what might be missing, since the resources are not being listed?

Thank you.

@myshkin5
Contributor

Hi Aishwarya,

The istio-init chart creates three jobs in the istio-system namespace. You can see the status of the jobs by looking at the pods they create using the following command:

kubectl get pods -n istio-system

After a successful install, you will see something like:

istio-init-crd-10-1.4.6-nw9gr                         0/1     Completed   0          30d
istio-init-crd-11-1.4.6-rcp2b                         0/1     Completed   0          30d
istio-init-crd-14-1.4.6-jfhx2                         0/1     Completed   0          30d
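The jobs behind those pods can also be listed directly; a minimal sketch, assuming the chart created them in the same namespace:

kubectl get jobs -n istio-system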

I'm guessing your pod listing won't look like that. Next you will want to check the status of the pod with the following command (substitute in one of your pod names):

kubectl get pod -n istio-system istio-init-crd-10-1.4.6-nw9gr -oyaml

The output can be a little long but it should help diagnose the problem.
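As a shorter alternative sketch, kubectl describe surfaces the same scheduling conditions along with recent events (again, substitute one of your pod names):

kubectl describe pod -n istio-system istio-init-crd-10-1.4.6-nw9gr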

Hope this helps!

Dwayne

@adyanthaya17
Author

Hi, I have tried the above commands, and instead of showing Completed status the pods are in Pending state. Looking further at one of the pending pods, these are the lines I see:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-04-29T03:35:51Z"
    message: '0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master:
      }, that the pod didn''t tolerate, 2 node(s) had taint {node.kubernetes.io/unreachable:
      }, that the pod didn''t tolerate.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable

I am following the document as written and am not sure why the pods are not reaching Completed status.

@myshkin5
Contributor

Hi Aishwarya,

It looks like your cluster isn't healthy. These jobs won't schedule on the master node, which is expected (1 node(s) had taint {node-role.kubernetes.io/master:}, that the pod didn't tolerate). But your worker nodes are unreachable (2 node(s) had taint {node.kubernetes.io/unreachable:}, that the pod didn't tolerate). I'm guessing that you won't be able to schedule anything to run on your cluster.
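A minimal sketch for inspecting the node health and taints behind those messages (the node name is a placeholder; substitute one of your worker nodes):

kubectl get nodes
kubectl describe node <worker-node-name> | grep -i taint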

Dwayne
