feat: support cilium cni #287
https://opendev.org/openstack/magnum/src/commit/6bb2c107ffe0b9c7a278856927d64bebde3d5c36/magnum/api/validation.py#L324
@okozachenko1203 Maybe we can simplify the upstream change to just this:
- supported_network_drivers = ['flannel', 'calico']
+ supported_network_drivers = ['flannel', 'calico', 'cilium']
Sure, I made another patch alongside.
@okozachenko1203 - good news, it seems upstream has merged the fix to allow it.
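For reference, once that upstream validation change lands, selecting Cilium should be a matter of the standard Magnum CLI flags. A minimal sketch (the image name, flavors, and network are placeholders, not values from this PR):

# Create a cluster template that selects Cilium as the network driver,
# then spawn a cluster from it.
openstack coe cluster template create k8s-cilium-template \
  --image <fedora-coreos-image> \
  --external-network public \
  --flavor m1.medium \
  --master-flavor m1.medium \
  --coe kubernetes \
  --network-driver cilium
openstack coe cluster create k8s-cilium --cluster-template k8s-cilium-template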
root@kube-csqp0-rmggl-5jjss:/home/ubuntu# kubectl get po -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system cilium-c47bm 1/1 Running 0 77s 10.0.0.139 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system cilium-operator-68d4bbdf56-6jqh6 1/1 Running 0 91s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system cilium-operator-68d4bbdf56-fjrxw 1/1 Running 0 91s 10.0.0.139 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system cilium-qjdb7 1/1 Running 0 91s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system coredns-5d78c9869d-8876d 1/1 Running 0 2m20s 10.100.0.158 kube-csqp0-rmggl-5jjss <none> <none>
kube-system coredns-5d78c9869d-nr8m7 1/1 Running 0 2m20s 10.100.0.57 kube-csqp0-rmggl-5jjss <none> <none>
kube-system csi-cinder-controllerplugin-cd7ffbdf9-rl2fx 6/6 Running 0 91s 10.100.1.185 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system csi-cinder-nodeplugin-8fv7r 3/3 Running 0 77s 10.0.0.139 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system csi-cinder-nodeplugin-llvr2 3/3 Running 0 91s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system csi-nfs-controller-54fb58b59f-mlbvp 3/3 Running 0 89s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system csi-nfs-node-bpwxr 3/3 Running 0 77s 10.0.0.139 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system csi-nfs-node-g9sql 3/3 Running 0 89s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system etcd-kube-csqp0-rmggl-5jjss 1/1 Running 0 2m20s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system k8s-keystone-auth-v8m5c 1/1 Running 0 52s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system kube-apiserver-kube-csqp0-rmggl-5jjss 1/1 Running 0 2m20s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system kube-controller-manager-kube-csqp0-rmggl-5jjss 1/1 Running 0 2m20s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system kube-proxy-4sxm9 1/1 Running 0 77s 10.0.0.139 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system kube-proxy-shsfz 1/1 Running 0 2m20s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system kube-scheduler-kube-csqp0-rmggl-5jjss 1/1 Running 0 2m20s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system openstack-cloud-controller-manager-clqxr 1/1 Running 0 62s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
kube-system openstack-manila-csi-controllerplugin-0 4/4 Running 0 90s 10.100.1.149 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system openstack-manila-csi-nodeplugin-5fwpt 2/2 Running 0 77s 10.0.0.139 kube-csqp0-default-worker-bhfkv-qhg8l <none> <none>
kube-system openstack-manila-csi-nodeplugin-p9mgp 2/2 Running 0 90s 10.0.0.247 kube-csqp0-rmggl-5jjss <none> <none>
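Beyond the pods being Running, a quick health check can be run through the Cilium CLI baked into the agent pods. A sketch (kubectl picks an arbitrary pod from the DaemonSet):

# Ask one Cilium agent for its own health summary.
kubectl -n kube-system exec ds/cilium -- cilium status --brief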
Can you rebase this but also add Zuul jobs to test for both Calico and Cilium please?
@okozachenko1203 can you look into the failures?
Failed tests:
[It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
[It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
[It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
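To iterate on just those cases instead of the full conformance run, Sonobuoy's focus regex can narrow the e2e suite. A sketch (the regex only needs to match the test names above):

# Re-run only the session-affinity and HostPort conformance cases.
sonobuoy run --e2e-focus "session affinity|HostPort" --wait
sonobuoy results $(sonobuoy retrieve)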
@mnaser now one test case is failing in the conformance test, and it is an upstream issue: cilium/cilium#14287
@okozachenko1203 Can you try and see if enabling kube-proxy replacement mode will pass all the tests?
@mnaser this means we have to provision the CAPI cluster without kube-proxy first, then enable kube-proxy replacement mode in the Cilium chart's values.
Following this guide, we need to specify the kube-apiserver IP and port as Helm values, but I don't think we can get them in advance.
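For context, the kube-proxy-free install in that guide expects chart values along these lines; the host and port are exactly the values that are unknown before the control plane exists (a sketch, not this PR's actual configuration):

# Cilium chart values for kube-proxy replacement.
kubeProxyReplacement: strict      # Cilium takes over all service load balancing
k8sServiceHost: <apiserver-ip>    # unknown until the CAPI control plane is up
k8sServicePort: 6443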
@okozachenko1203 hmm, that would be quite the challenge in that case, because I think this is a little more involved than "just a small change of Helm values" to enable this option.
@okozachenko1203 Can you see how difficult it would be to enable Cilium with kube-proxy replacement out of the box? I think we'll have to look into some sort of 'reconciliation' loop..
yeah, provisioning the cluster without kube-proxy will be simple by leveraging …
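(The comment above is cut off. One plausible mechanism, assuming kubeadm-based control planes, is kubeadm's skipPhases knob; this is an illustration, not necessarily what was meant.)

# Sketch: tell kubeadm, via the Cluster API control-plane object, not to
# install the kube-proxy addon. Other required fields are omitted.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: kube-csqp0   # placeholder name
spec:
  kubeadmConfigSpec:
    initConfiguration:
      skipPhases:
        - addon/kube-proxy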
cni-chaining-mode: portmap
enable-session-affinity: 'true'
@okozachenko1203 can you just use these instead and keep the rest of the system as it normally is?
reference: cilium/cilium#14287 (comment)
For users who run with kube-proxy (i.e. with Cilium's kube-proxy replacement disabled), the ClusterIP service loadbalancing when a request is sent from a pod running in a non-host network namespace is still performed at the pod network interface (until cilium/cilium#16197 is fixed). For this case the session affinity support is disabled by default.
Failure of `[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]` is expected until cilium/cilium#14287 is fixed
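If those two keys are set through the Helm chart rather than directly in the cilium-config ConfigMap, the equivalent values look roughly like this (a sketch; treating sessionAffinity as a top-level chart value is an assumption about the chart version in use):

# Helm-values equivalent of the suggested ConfigMap keys.
cni:
  chainingMode: portmap   # chain Cilium behind the portmap plugin for hostPort support
sessionAffinity: true     # maps to enable-session-affinity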
With this portmap mode, Sonobuoy passed for the Cilium CNI.
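As a manual spot check that hostPort really works under portmap chaining, a pod like the following should become reachable on its node's IP (a sketch; the name and image are placeholders):

# A pod exposing a hostPort, which relies on the portmap CNI plugin
# when Cilium runs in chaining mode.
apiVersion: v1
kind: Pod
metadata:
  name: hostport-check
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8080   # reachable at <node-ip>:8080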
Fixes #124