Couldn't get resource list for metrics.k8s.io; connect: no route to host #3288
Initially we used:

```yaml
network:
  plugin: calico
```

Now we've switched to the default network:

```yaml
network:
  plugin: flannel
  options: {}
```

and redeployed this testing cluster:

```json
"network": {
  "plugin": "flannel",
  "options": {
    "flannel_backend_port": "8472",
    "flannel_backend_type": "vxlan",
    "flannel_backend_vni": "1"
  }
},
```

Somebody recommended adding more resources:

```yaml
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats # added
  - namespaces # added
  - configmaps # added
  verbs:
  - get
  - list
  - watch
```

but I'm afraid that didn't help.
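Not mentioned in this thread, but a workaround often suggested for "no route to host" errors against metrics.k8s.io is to take metrics-server off the overlay network entirely, since the error usually means the apiserver cannot reach the pod IP. Whether it applies here depends on the cluster. A minimal patch sketch, assuming the stock metrics-server Deployment in `kube-system` (the flags are real metrics-server options; using them here is my assumption, not a confirmed fix):

```yaml
# Sketch only -- not a confirmed fix from this issue. Running metrics-server
# on the host network sidesteps a broken pod network between the apiserver
# and the metrics-server pod.
spec:
  template:
    spec:
      hostNetwork: true                 # assumption: the overlay is the culprit
      containers:
      - name: metrics-server
        args:
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls        # acceptable only on a testing cluster
```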
Here are some other steps I've taken, without much success.
It was working with
Finally it has started working both with Calico and Flannel, though surely not at the same time. 😄
@manuelbuil, interestingly, today we hit this issue again with
However, upon investigation we didn't find any interfaces matching cali*. Is there anything we should look at closely?
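For reference, this is a quick way to list cali* interfaces on a node. A sketch only: the filter function name is made up, and the live input would come from `ip -o link`.

```shell
# cali_ifaces: print interface names starting with "cali" from `ip -o link`
# output. Helper name is invented for this sketch.
cali_ifaces() {
  awk -F': ' '{print $2}' | grep '^cali' || true
}

# Live usage on a node: ip -o link | cali_ifaces
# Demo with canned input:
printf '1: lo: <LOOPBACK,UP>\n7: cali0f253a2b3c4: <UP,LOWER_UP>\n' | cali_ifaces
```

An empty result on a node that Calico is supposed to manage would confirm the observation above.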
This looks like a problem with the Calico deployment: `NetworkPluginNotReady message: docker: network plugin is not ready: cni config uninitialized`
One thing concerns me. AFAIK the file
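The "cni config uninitialized" message means the kubelet found nothing in its CNI config directory (normally `/etc/cni/net.d`, which calico-node is supposed to populate). A small check sketch; `check_cni` is a made-up helper name:

```shell
# check_cni: report whether a CNI config directory has any files.
# The kubelet keeps reporting "cni config uninitialized" while it is empty.
check_cni() {
  local dir="${1:-/etc/cni/net.d}"
  if [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "CNI config present in $dir"
  else
    echo "no CNI config in $dir"
  fi
}

# On an affected node: check_cni
```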
Let me provide another piece of detail related to the issues with Calico.
Could you get the logs from the crashing image, please? I wonder if something like AppArmor or SELinux is making it impossible for Calico to write binaries in
@manuelbuil, this is CentOS 7.9 and I'm positive it has SELinux disabled.
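For what it's worth, on CentOS 7 the effective SELinux state can be double-checked independently of `/etc/selinux/config` via the kernel interface. A sketch (the mapping helper is mine; the live value comes from `/sys/fs/selinux/enforce`, which is absent entirely when SELinux is disabled):

```shell
# selinux_mode: translate the kernel enforce flag into a readable mode.
# An empty argument models /sys/fs/selinux/enforce not existing at all.
selinux_mode() {
  case "$1" in
    1) echo "enforcing" ;;
    0) echo "permissive" ;;
    *) echo "disabled" ;;
  esac
}

# Live usage: selinux_mode "$(cat /sys/fs/selinux/enforce 2>/dev/null)"
selinux_mode ""
```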
This repository uses an automated workflow to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the workflow can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the workflow will automatically close the issue in 14 days. Thank you for your contributions.
I'm still facing this issue; do we have a fix?
**RKE version:**

**Kubernetes version:** As reported by `kubectl get nodes`

**Docker version:** (`docker version`, `docker info` preferred)

**Operating system and kernel:** `uname`: 3.10.0-1160.el7.x86_64. Created anew with `rke up` on four VMs (QEMU/KVM) running CentOS 7.9. The testing environment comprises three nodes with `controlplane,etcd` and one `worker`.

**Type/provider of hosts:** (VirtualBox/Bare-metal/AWS/GCE/DO) QEMU/KVM (kernel 5.10.152)
**Steps to Reproduce:**

Frankly, it's hard to report anything specific, as the setup was very straightforward.

1. `rke config`: the official instructions were followed (https://rke.docs.rancher.com/installation). Certificates were not customized; default options were used. Most of the answers were the defaults; only Calico was chosen instead of the default Flannel.
2. `rke up`: all images were pulled from Docker Hub. These nodes had been fresh until I started running `rke up` on them.

**Results:**

Initially we were confused by the behaviour of the `kubectl` tool, which kept printing the same four error lines complaining about metrics-server. We started looking into the issue, and our investigation led us to several other problems:

- `metrics-server-xxxxxxxxxx-xxxxx` is in status CrashLoopBackOff. Its logs (`k logs metrics-server-xxxxxxxxxx-xxxxx`) show the lines below.
- While `calico-node` itself looks healthy, the `calico-kube-controllers` logs are full of these messages.

Interestingly, despite these issues, this cluster is still capable of running workloads.

I'm looking for a solution. Meanwhile, I've found something resembling our issues:
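One more check that might narrow things down (my suggestion, not from the thread): the metrics.k8s.io errors correspond to the `v1beta1.metrics.k8s.io` APIService being unavailable, which `kubectl get apiservices` shows directly. A small filter sketch; the helper name is made up, and the live input would be real kubectl output:

```shell
# apiservice_unavailable: from `kubectl get apiservices` output, print the
# rows whose AVAILABLE column is anything other than "True".
apiservice_unavailable() {
  awk 'NR > 1 && $3 != "True" { print $1, $3 }'
}

# Live usage: kubectl get apiservices | apiservice_unavailable
# Demo with canned output:
printf 'NAME SERVICE AVAILABLE AGE\nv1beta1.metrics.k8s.io kube-system/metrics-server False 1d\n' | apiservice_unavailable
```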