Virtual-kubelet skipping sync pod #14
@lmq1999 The failed pod is the kube-proxy pod created by k8s. Since we are using a virtual node, we don't need the kube-proxy process, so that failure is not important. However, let me know if you create a normal pod and it fails.
Hmmmm, I run into problems with normal pods too. I tried 2 pods: the 1st is from the README.md of this repo, the 2nd is kubia.yaml from Kubernetes in Action (luksa) https://github.com/luksa/kubernetes-in-action. Normally they would be scheduled to the k8s-worker node, but here they are stuck in creating.
myapp-pod has already been in creating for 5m. Debug mode:
I created the k8s-master node using kubeadm as well.
The 10.10.10.0/24 network is the same as the external network from OpenStack:
@lmq1999, several things to check:
Zun runs correctly. The pod is created but fails.
zun-compute logs:
zun-api logs:
I read the code in
Latest update: after reconfiguring OpenStack from linuxbridge to OVS:
zun-api.log
zun-compute:
For linux bridge, I will add support for it. For the "failed to connect to all addresses" error, it seems your zun-cni-daemon is not installed correctly. Could you double check (systemctl status zun-cni-daemon)?
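A quick way to run that check, together with the daemon's recent log output, might be the following (the unit name assumes a default Zun install):

```shell
systemctl status zun-cni-daemon      # should be active (running)
journalctl -u zun-cni-daemon -n 50   # recent daemon log entries
```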
Everything seems fine, except that the VIF shown in the log is linux-bridge :/ zun-cni status:
zun-cni log:
When I enable debug mode, zun-cni log:
zun-compute-log:
zun-cni-daemon doesn't receive any requests from containerd. Could you check the status of containerd (systemctl status containerd)? Perhaps containerd doesn't have the right permission to accept grpc requests. Could you check containerd's config file as well (/etc/containerd/config.toml)?
containerd service status:
the config file:
Well, I only see this file on the OpenStack compute node, not on the k8s node.
Right. This file needs to be configured on the compute nodes. Will it work if you configure the "gid" in the [grpc] section and then restart the containerd process? Replace ZUN_GROUP_ID with the real group ID of the zun user. You can retrieve the ID with, for example: getent group zun | cut -d: -f3
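For reference, a sketch of what that containerd change could look like; the [grpc] section is standard in containerd's config, but the gid value and file layout here are assumptions for this particular deployment:

```toml
# /etc/containerd/config.toml (sketch)
# Lets members of the zun group reach containerd's gRPC socket.
[grpc]
  # Replace 997 with the output of: getent group zun | cut -d: -f3
  gid = 997
```

After editing, restart containerd (systemctl restart containerd) so the new gid takes effect.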
Hmmm, I changed it like you said to gid=997.
After creating a pod in virtual-kubelet, this new error appears:
containerd logs:
Latest update: after I used containerd config to create a :
And also changed the GID to 997 like you said:
I got this error; it seems like it's back to CNI:
CNI daemon logs:
I also have a question. My Kubernetes setup uses:
That pod-network-cidr does not belong to any external or internal network from OpenStack.
It looks like there is a bug in the neutron hybrid plug, which I have to fix on the Zun side. The easiest work-around is to use openvswitch as the firewall driver on the neutron side, in the [securitygroup] section. Otherwise, you can wait for my fix on the ovs hybrid driver.
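For reference, the suggested work-around would look roughly like this on the compute nodes; the file path is an assumption for a typical ML2/OVS deployment:

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini (sketch)
[securitygroup]
firewall_driver = openvswitch
```

Then restart the neutron-openvswitch-agent service on each compute node so the change takes effect.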
Thank you very much. It's working perfectly now.
I guess there is nothing wrong. However, the --pod-network-cidr option applies to normal nodes; the virtual node created by virtual-kubelet doesn't use that option. I also assume it is ok to use flannel together with virtual-kubelet. Again, flannel applies to normal pods (not pods created by virtual-kubelet).
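To make the split concrete, a typical bootstrap of the normal nodes might look like this; the CIDR value and the flannel manifest URL are just common defaults, not something this project requires:

```shell
# Bootstrap the regular cluster; the virtual-kubelet node ignores this CIDR,
# since its pods get Neutron ports from Zun instead of flannel.
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```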
Cool!
Uhmmmmm, I have another question... it's about networking in VK. I use the Kubernetes wordpress demo, link: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/ Of course I edited some tolerations so it can run:
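The kind of toleration edit being referred to usually looks something like the snippet below; the taint key depends on how the virtual-kubelet node registered itself, so treat this as a sketch rather than the exact change used here:

```yaml
# Added to the pod template of each Deployment in the demo (sketch).
# Check the actual taint with: kubectl describe node <virtual-kubelet-node>
tolerations:
- key: virtual-kubelet.io/provider
  operator: Exists
```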
When I check for svc:
So how to "access" to this pod, I already setup security group for port 22 and 80 but connection refuse |
The simplest way is to use a public-facing neutron network instead of the tenant network (i.e. 192.168.122.*). As a result, each pod has a public IP that is accessible from the kubernetes service. Alternatively, you can create a nova VM in the tenant network and install the kubernetes control plane there. As a result, it can access pods in the same tenant.
I use VMs, so the 192.168.122.* network works as the provider network. For example, to make it easier: I create an nginx deployment via VK:
I can ping and curl it like this:
But when I try wordpress:
I can't curl to the wordpress pod (192.168.122.213).
service:
netstat at k8s:
security group rules:
So what should I do in this situation?
I found out that firewall_driver = iptables_hybrid can still be used when this service is set:
run as
You might want to check the log of the wordpress container to confirm why the pod is not up. The container is running in CRI, so you will want to download the crictl tool to do it.
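If it helps, a typical crictl session on the compute node looks roughly like this; the containerd socket path is the usual default and may differ on your install:

```shell
# Point crictl at containerd's CRI socket (default path on most installs).
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl pods                    # find the wordpress pod sandbox
crictl ps -a                   # list its containers, including exited ones
crictl logs <container-id>     # read the wordpress container's log
```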
Hmmm, ok, it was my fault; the example was old and buggy. I tried a new one and it is available now. So if the VK pods and the k8s pods are in separate networks, that causes problems and we can't use a hybrid k8s (half VK, half normal nodes), right? I was searching around and found this: https://docs.openstack.org/kuryr-kubernetes/latest/readme.html It says: "With Kuryr-Kubernetes it's now possible to choose to run both OpenStack VMs and Kubernetes Pods on the same Neutron network if your workloads require it or to use different segments and, for example, route between them." So is it possible to put both the VK pods and the node pods in the same OpenStack network?
Given my limited knowledge of kuryr-kubernetes, it sounds like it is possible. VK with the openstack provider allows you to create pods in a neutron tenant or provider network. If you have other tools (e.g. kuryr-kubernetes, calico) that can connect normal pods to neutron, it is possible to achieve what you said.
I am having quite a funny situation. I have these networks:
Well, to describe it briefly: provider = external; the pod and service networks are set to dhcp=no (I installed kuryr-kubernetes and it works for normal pods). But when I create a VK pod in that network, it is running but I can't ping its IP address, although I can still ping normal pod IP addresses. Any advice? Here are examples:
When I curl these:
But when I use a self-service network to create a VK pod:
I can only ping it, but when it comes to the pod network:
I can't even ping it, but I can still curl a (worker) pod on the same network:
My best guess is the security group is blocking the traffic. I would trace down the security group/port/subnet of the "normal" pod and the VK pod and check if there are any differences.
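One way to do that comparison from the OpenStack side, with placeholder names:

```shell
# Find the Neutron ports of the normal pod and the VK pod by their IPs,
# then compare their security groups and subnets.
openstack port list --network <pod-network>
openstack port show <port-id> -c fixed_ips -c security_group_ids
openstack security group rule list <security-group-id>
```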
Well, after tracing it down, they use the same security group and subnet. Btw, do you have any overview model of the virtual-kubelet openstack-zun provider? Is it possible to run 2 virtual-kubelets with the same provider? And what exactly does the openstack-zun provider do in VK? Where does the pod actually run (on the virtual-kubelet (k8s-master) node or on the controller node)?
Could you describe how to reproduce the issue?
In theory, it is possible to run 2 VK with the same provider but I haven't tried that.
The Zun provider will call Zun's API to create the pod (capsule). Eventually, the pod will be scheduled to a compute node.
On the first problem: I have 4 nodes: k8s-master
The k8s-master and k8s-worker use kuryr-kubernetes, so they use networks in neutron. The 1st is 10.1.0.0/16 for the cluster network
Those networks belong to the (k8s) user. The virtual-kubelet on the k8s-master node also uses the (k8s) user, and its pods are also scheduled into the 10.1.0.0/16 network. They show (running) and have an IP when I use both kubectl get pod -o wide and openstack capsule list (as the k8s user). I can't ping them even though I already allow an icmp security group rule for the 10.1.0.0/16 and 10.2.0.0/16 networks. And because it's virtual-kubelet, I can't show the logs or console of those pods. But when I create normal pods (which I don't give the toleration), they are scheduled onto the k8s-worker node; they use the 10.1.0.0/16 network like I said, and I have no problems pinging or curling them. I followed the kuryr-kubernetes install here, combined with the openstack docs: https://github.com/zufardhiyaulhaq/kuryr-kubernetes/blob/master/Installation/kuryr.md
Second question: yesterday I tried connecting 2 VKs to the same provider, but it only has 1 compute node; if the 2nd connects, it pushes the 1st out if it is the same user. I will add more compute nodes and test today.
3rd question: if the pod is scheduled onto a compute node, is there any way to know which compute node the pod is on when there are multiple compute nodes? I didn't find any docker containers on those compute nodes.
With the new model k8s-master I'm trying to achieve that setup, a wordpress kubernetes which:
But when I curl the IP of the wordpress it returns nothing, not even something like a database connection error, so is there any possible way to view VK-pod logs?
There is a 'host' field in the capsule. Normal users cannot see this field, but users with admin privilege can, so they know which compute host runs the capsule. The capsule is not created in docker; it is created in containerd. You can download the tool 'crictl' (https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md) to list those pods.
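Concretely, that check might look like the following; the capsule commands assume python-zunclient is installed and an admin-scoped session, so treat this as a sketch:

```shell
# As an admin user, the capsule's host field shows the compute node.
openstack capsule list
openstack capsule show <capsule-uuid>

# Then, on that compute node, list the CRI pods and containers directly.
crictl pods
crictl ps -a
```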
Ok, thank you, I can access the pod from the compute node now. Isn't there any way to set up VK to use docker instead of containerd?
Docker doesn't support the concept of a "pod", so VK won't work with docker very well. I would highly recommend containerd for VK.
I have installed VK and run it, but it keeps skipping the pod like this:
I installed the k8s master on the OpenStack controller node:
So how do I deal with this problem?