diff --git a/CHANGELOG.md b/CHANGELOG.md index 3a97fc086f..3bca631063 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,23 @@ +# Release v3.1.0 (2019-05-17) + +### VPP + - version **v19.01** (latest stable/1901) + +### New Features & Enhancements + - update to vpp-agent `v2.1.0` + - [multi-master](docs/setup/MULTI_MASTER.md) and [external ETCD](docs/setup/EXTERNAL_ETCD.md) support + - experimental [SRv6 implementation of k8s services](docs/setup/SRV6.md) + - experimental support for [multiple pod interfaces](docs/operation/CUSTOM_POD_INTERFACES.md), + including memif interfaces + - load-balancing between backends of a service now supports unlimited backend pod count + +### Known Issues + - (IPv6 only): service load-balancing in IPv6 setup is not equal, node-local backend pods are always + preferred and a request is never load-balanced to a remote node's pod if there is a local backend + - (IPv6 only): network Policies are implemented using ip6tables rules in individual pods. Because of + this, the policy programming is a bit slower (compared to policy programming on VPP for IPv4) + + # Release v3.0.1 (2019-04-08) ### VPP diff --git a/docs/dev-guide/SERVICES.md b/docs/dev-guide/SERVICES.md index 0aeee2833e..0bbacef019 100644 --- a/docs/dev-guide/SERVICES.md +++ b/docs/dev-guide/SERVICES.md @@ -505,139 +505,9 @@ periodically cleaning up inactive NAT sessions. ![NAT configuration example][nat-configuration-diagram] #### SRv6 Renderer - The SRv6 Renderer maps `ContivService` instances into the corresponding [SRv6 model][srv6-model] -instances that are then installed into VPP by the Ligato/vpp-agent. The SRv6 is -Segment routing on IPv6 protocol (see [srv6][srv6-ietf]), therefore you must enable -the [IPv6 support][ipv6-setup] in Contiv to be able to use the SRv6 renderer. Also you must enable -it in `manifest.yaml` together with noOverlay mode: -``` -apiVersion: v1 -kind: ConfigMap -metadata: - name: contiv-agent-cfg - namespace: kube-system -data: - contiv.conf: |- - useNoOverlay: true - useSRv6Interconnect: true - ... -``` -`manifest.yaml` file is generated by Helm, so you can alternatively change the helm template -input values in [values.yaml][values.yaml] (or [values-arm64.yaml][values-arm64.yaml]) -``` -... -contiv: - useNoOverlay: false - useSRv6Interconnect: false - ... -``` -and generate `manifest.yaml` with Helm. - - -The basic idea behind segment routing is to cut the packet route into smaller routes, called segments. -The SRv6 implementation of service will use these segments to properly route packet to service backends. -The packet flow looks like this: -![srv6-renderer-communication-from-pod][srv6-renderer-communication-from-pod] - -The user of service is pod 1. The packet with destination of the service IP address is steered into the SRv6 -policy. The policy contains 1 path (list of segments) to each backend. The weighted loadbalancing happens -(all routes have equal waight) and one segment list is used. -- if pod 2 on node 1 is chosen: -The route consists only of one segment, the segment with segment id starting with 6666 (segment id is an ipv6 address). -The IPv6 routing forwards the packet to the segment end (LocalSid-DX6) that decapsulates the packet (it was encapsulated in the policy, think of it as tunnel) -and crossconnect it to the interface to pod 2 (using IPv6 as next hop) -- if the host backend on node 1 is chosen: -Basically the same, but IPv6 routing forwards it to different place where LocalSid is located. 
-- if pod2 on node 2 is chosen: -The route consists of 2 segments. The first segment will transport the packet to correct node -(segment end in Localsid-End), but the segment end will not decapsulate packet but route it -to the next segment end. The second segment end will decapsulate the packet and route it to the correct -backend pod as in previous case. -- if the host on node 2 is chosen: -Similar to previous cases. - -Special case for SRv6 service is when service is used from host: -![srv6-renderer-communication-from-host][srv6-renderer-communication-from-host] - -The loadbalancing is not done by using SRv6, but is done in the k8s proxy. So basically we fallback -to the ipv6 routing to chosen backend. In case of local backend, the ipv6 routing will handle it without -using the srv6 components. In the case of remote backend, the srv6 is used to transport packet to the correct -node, but there the pure ipv6 takes routing to the correct backend. - -The path of the packet returning from the backend in all the SRv6 service cases looks basically as in the host special -case when it went to the remote backend: the srv6 handles only the node-to-node communication and -the rest is handled by the pure ipv6 routing. - -In case of problems, you can check the vswitch logs for the setting of steering, policy and localsids in the transactions: -``` - - key: config/vpp/srv6/v2/localsid/6666:0:0:1::5 - val: { sid:"6666:0:0:1::5" installation_vrf_id:1 end_function_DX6: } - - key: config/vpp/srv6/v2/policy/5555::d765 - val: { bsid:"5555::d765" srh_encapsulation:true segment_lists: } - - key: config/vpp/srv6/v2/steering/forK8sService-default-myservice - val: { name:"forK8sService-default-myservice" policy_bsid:"5555::d765" l3_traffic: } -``` -or look directly into the vpp using CLI and list the installed srv6 components: -``` -vpp# sh sr steering-policies -SR steering policies: -Traffic SR policy BSID -L3 2096::a/128 5555::a -L3 2096::5ce9/128 5555::5ce9 -L3 2096::1/128 5555::1 -L3 2096::6457/128 5555::6457 -L3 2096::d765/128 5555::d765 -``` -``` -vpp# sh sr policies -SR policies: -[0].- BSID: 5555::a - Behavior: Encapsulation - Type: Default - FIB table: 0 - Segment Lists: - [0].- < 6666:0:0:1::2 > weight: 1 - [1].- < 6666:0:0:1::3 > weight: 1 ------------ -[1].- BSID: 5555::5ce9 - Behavior: Encapsulation - Type: Default - FIB table: 0 - Segment Lists: - [2].- < 6655::1 > weight: 1 ------------ -... ------------ -[4].- BSID: 5555::d765 - Behavior: Encapsulation - Type: Default - FIB table: 0 - Segment Lists: - [5].- < 6666:0:0:1::5 > weight: 1 - [6].- < 6666:0:0:1::6 > weight: 1 ------------ -... -``` -``` -vpp# sh sr localsids -SRv6 - My LocalSID Table: -========================= - Address: 7766:f00d::1 - Behavior: End - Good traffic: [0 packets : 0 bytes] - Bad traffic: [0 packets : 0 bytes] --------------------- -... --------------------- - Address: 6666:0:0:1::5 - Behavior: DX6 (Endpoint with decapsulation and IPv6 cross-connect) - Iface: tap4 - Next hop: 2001:0:0:1::5 - Good traffic: [0 packets : 0 bytes] - Bad traffic: [0 packets : 0 bytes] --------------------- -``` +instances that are then installed into VPP by the Ligato vpp Agent. See the [SRv6 README](../setup/SRV6.md) +for more details on how SRv6 k8s service rendering works. 
 [layers-diagram]: services/service-plugin-layers.png "Layering of the Service plugin"
 [nat-configuration-diagram]: services/nat-configuration.png "NAT configuration example"
@@ -673,10 +543,3 @@ SRv6 - My LocalSID Table:
 [event-loop-guide]: EVENT_LOOP.md
 [event-handler]: EVENT_LOOP.md#event-handler
 [db-resources]: https://github.com/contiv/vpp/tree/master/dbresources
-[srv6-ietf]: https://tools.ietf.org/html/rfc8402#section-8.2
-[ipv6-setup]: https://github.com/contiv/vpp/blob/master/docs/setup/IPV6.md
-[values.yaml]: https://github.com/contiv/vpp/blob/master/k8s/contiv-vpp/values.yaml
-[values-arm64.yaml]: https://github.com/contiv/vpp/blob/master/k8s/contiv-vpp/values-arm64.yaml
-[srv6-renderer-communication-from-pod]: services/srv6-renderer-communication-from-pod.png "SRv6 service communication originating in pod"
-[srv6-renderer-communication-from-host]: services/srv6-renderer-communication-from-host.png "SRv6 service communication originating in host"
-
diff --git a/docs/operation/CUSTOM_POD_INTERFACES.md b/docs/operation/CUSTOM_POD_INTERFACES.md
index 21e07db0b3..82279b9fc4 100644
--- a/docs/operation/CUSTOM_POD_INTERFACES.md
+++ b/docs/operation/CUSTOM_POD_INTERFACES.md
@@ -13,7 +13,9 @@ be one of the 3 supported types:
 Custom interfaces can be requested using annotations in pod definition. The name of the
 annotation is `contivpp.io/custom-if` and its value can be a comma-separated list of
 custom interfaces in the `<name>/<type>/<network>`
-format (network part is optional).
+format. The `<network>` part is optional; leaving it unspecified connects the interface to the default
+pod network. Apart from the default pod network, only a special `stub` network is currently
+supported, which leaves the interface without any IP address or routes pointing to it.
 
 An example of a pod definition that connects a pod with a default interface plus one extra
 tap interface with the name `tap1` and one extra veth interface with the name `veth1`:
@@ -102,7 +104,7 @@ To request auto-configuration, the pod definition needs to be extended with the
 the vpp-agent running inside of the pod uses to identify its config subtree in ETCD.
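+
+For illustration, here is a minimal sketch of inspecting that subtree, assuming the usual Ligato key
+layout (`/vnf-agent/<microservice-label>/...`), the `vnf1` label used in the example below, and an
+`etcdctl` (API v3) that can already reach the cluster ETCD (endpoint and TLS options are
+deployment-specific and omitted here):
+```bash
+# list the keys under the subtree of the agent labelled "vnf1"
+# (assumed layout: /vnf-agent/<microservice-label>/...)
+export ETCDCTL_API=3
+etcdctl get --prefix --keys-only "/vnf-agent/vnf1/"
+```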
 
 The following is a complex example of a pod which runs VPP + agent inside of it, requests
-2 memif interfaces and their auto-configuration:
+1 memif interface and its auto-configuration:
 
 ```yaml
 apiVersion: v1
@@ -127,7 +129,7 @@ kind: Pod
 metadata:
   name: vnf-pod
   annotations:
-    contivpp.io/custom-if: memif1/memif, memif2/memif
+    contivpp.io/custom-if: memif1/memif
     contivpp.io/microservice-label: vnf1
 spec:
   containers:
@@ -140,7 +142,7 @@ spec:
           value: vnf1
       resources:
         limits:
-          contivpp.io/memif: 2
+          contivpp.io/memif: 1
       volumeMounts:
       - name: etcd-cfg
         mountPath: /etc/etcd
@@ -150,8 +152,8 @@ spec:
       name: etcd-cfg
 ```
 
-Exploring the VPP state of this pod using vppctl would show two auto-configured memif
-interfaces connected to the vswitch VPP:
+Exploring the VPP state of this pod using vppctl would show an auto-configured memif
+interface connected to the vswitch VPP:
 
 ```bash
 $ kubectl exec -it vnf-pod -- vppctl -s :5002
@@ -163,7 +165,6 @@ $ kubectl exec -it vnf-pod -- vppctl -s :5002
 vpp# sh inter
               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
 local0                            0     down          0/0/0/0
-memif0/0                          1      up          9000/0/0/0
-memif0/1                          2      up          9000/0/0/0
+memif0/0                          1      up          9000/0/0/0
 vpp#
 ```
\ No newline at end of file
diff --git a/docs/setup/IPV6.md b/docs/setup/IPV6.md
index 86c301ec1a..848d5aa1f3 100644
--- a/docs/setup/IPV6.md
+++ b/docs/setup/IPV6.md
@@ -13,6 +13,7 @@
 - Service load-balancing in IPv6 setup is not equal, node-local backend pods are always
 preferred and a request is never load-balanced to a remote node's pod if there is a local backend.
+This is addressed in the [experimental SRv6 implementation of k8s services](SRV6.md).
 
 - Network Policies are implemented using ip6tables rules in individual pods. Because of
 this, the policy programming is a bit slower (compared to policy programming on VPP for IPv4).
 
@@ -94,6 +95,8 @@ you can pass the the above mentioned IPAM setting as helm options, e.g.:
 --set contiv.ipamConfig.vxlanCIDR=2005::/112
 --set contiv.ipamConfig.serviceCIDR=2096::/110
 ```
+Note: for the experimental SRv6 implementation of k8s services, see the [SRv6 README](SRV6.md).
+
 ## Deployment Verification
 
 After some time, all PODs should enter the running state. All PODs should have an IPv6 address
diff --git a/docs/setup/SRV6.md b/docs/setup/SRV6.md
new file mode 100644
index 0000000000..3dd0b47be6
--- /dev/null
+++ b/docs/setup/SRV6.md
@@ -0,0 +1,141 @@
+# SRv6 (Segment Routing on IPv6) Implementation of K8s Services
+[SRv6][srv6-ietf] provides an experimental way of implementing k8s services in IPv6 deployments of Contiv.
+
+Since SRv6 is segment routing over the IPv6 protocol (see [RFC 8402][srv6-ietf]), you must enable
+[IPv6 support][ipv6-setup] in Contiv to be able to use the SRv6 service renderer.
+
+Additionally, you must enable `useSRv6Interconnect` in `manifest.yaml`, together with the `useNoOverlay` mode:
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: contiv-agent-cfg
+  namespace: kube-system
+data:
+  contiv.conf: |-
+    useNoOverlay: true
+    useSRv6Interconnect: true
+    ...
+```
+The `manifest.yaml` file is generated by Helm, so you can alternatively set these two Helm template
+input values in [values.yaml][values.yaml] (or [values-arm64.yaml][values-arm64.yaml]), where they default to `false`:
+```
+...
+contiv:
+  useNoOverlay: false
+  useSRv6Interconnect: false
+  ...
+```
+and re-generate `manifest.yaml` with Helm.
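+
+For example, a minimal sketch of re-generating the manifest, assuming a Helm 2-style `helm template`
+invocation run from the root of the Contiv-VPP repository (the chart path is inferred from the links
+above; adjust the paths and extra `--set` options to your own deployment workflow):
+```bash
+# render the chart with the noOverlay mode and SRv6 interconnect enabled
+helm template k8s/contiv-vpp \
+  --set contiv.useNoOverlay=true \
+  --set contiv.useSRv6Interconnect=true \
+  > manifest.yaml
+```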
+
+The basic idea behind segment routing is to cut the packet route into smaller routes, called segments.
+The SRv6 implementation of k8s services uses these segments to route packets to the service backends.
+The packet flow looks like this:
+![srv6-renderer-communication-from-pod][srv6-renderer-communication-from-pod]
+
+The service client is pod 1. A packet destined to the service IP address is steered into the SRv6
+policy. The policy contains one path (segment list) per backend. Weighted load-balancing takes place
+(all segment lists have an equal weight) and one of them is selected.
+- if pod 2 on node 1 is chosen:
+The route consists of a single segment, the segment whose segment ID starts with 6666 (a segment ID is an IPv6 address).
+IPv6 routing forwards the packet to the segment end (LocalSid-DX6), which decapsulates the packet (it was encapsulated by the policy, think of it as a tunnel)
+and cross-connects it to the interface of pod 2 (using IPv6 as the next hop).
+- if the host backend on node 1 is chosen:
+Basically the same, but IPv6 routing forwards the packet to a different place, where the corresponding LocalSid is located.
+- if pod 2 on node 2 is chosen:
+The route consists of 2 segments. The first segment transports the packet to the correct node
+(its segment end is a LocalSid-End), but this segment end does not decapsulate the packet; it routes it
+to the next segment end. The second segment end decapsulates the packet and routes it to the correct
+backend pod as in the previous case.
+- if the host on node 2 is chosen:
+Similar to the previous cases.
+
+A special case of the SRv6 service is when the service is accessed from the host:
+![srv6-renderer-communication-from-host][srv6-renderer-communication-from-host]
+
+Here the load-balancing is not done using SRv6, but in the k8s proxy, so we basically fall back
+to IPv6 routing to the chosen backend. In the case of a local backend, IPv6 routing handles it without
+using any SRv6 components. In the case of a remote backend, SRv6 is used to transport the packet to the
+correct node, where plain IPv6 routing then takes over and delivers it to the chosen backend.
+
+The path of a packet returning from the backend looks, in all the SRv6 service cases, basically as in the
+host special case with a remote backend: SRv6 handles only the node-to-node transport and
+the rest is handled by plain IPv6 routing.
+
+In case of problems, you can check the vswitch logs for the steering, policy and localsid settings in the
+configuration transactions (a kubectl sketch for fetching these logs is given at the end of this document):
+```
+ - key: config/vpp/srv6/v2/localsid/6666:0:0:1::5
+   val: { sid:"6666:0:0:1::5" installation_vrf_id:1 end_function_DX6: }
+ - key: config/vpp/srv6/v2/policy/5555::d765
+   val: { bsid:"5555::d765" srh_encapsulation:true segment_lists: }
+ - key: config/vpp/srv6/v2/steering/forK8sService-default-myservice
+   val: { name:"forK8sService-default-myservice" policy_bsid:"5555::d765" l3_traffic: }
+```
+or look directly into VPP using the CLI and list the installed SRv6 components:
+```
+vpp# sh sr steering-policies
+SR steering policies:
+Traffic               SR policy BSID
+L3 2096::a/128        5555::a
+L3 2096::5ce9/128     5555::5ce9
+L3 2096::1/128        5555::1
+L3 2096::6457/128     5555::6457
+L3 2096::d765/128     5555::d765
+```
+```
+vpp# sh sr policies
+SR policies:
+[0].-   BSID: 5555::a
+        Behavior: Encapsulation
+        Type: Default
+        FIB table: 0
+        Segment Lists:
+        [0].- < 6666:0:0:1::2 > weight: 1
+        [1].- < 6666:0:0:1::3 > weight: 1
+-----------
+[1].-   BSID: 5555::5ce9
+        Behavior: Encapsulation
+        Type: Default
+        FIB table: 0
+        Segment Lists:
+        [2].- < 6655::1 > weight: 1
+-----------
+...
+-----------
+[4].-   BSID: 5555::d765
+        Behavior: Encapsulation
+        Type: Default
+        FIB table: 0
+        Segment Lists:
+        [5].- < 6666:0:0:1::5 > weight: 1
+        [6].- < 6666:0:0:1::6 > weight: 1
+-----------
+...
+```
+```
+vpp# sh sr localsids
+SRv6 - My LocalSID Table:
+=========================
+        Address:        7766:f00d::1
+        Behavior:       End
+        Good traffic:   [0 packets : 0 bytes]
+        Bad traffic:    [0 packets : 0 bytes]
+--------------------
+...
+--------------------
+        Address:        6666:0:0:1::5
+        Behavior:       DX6 (Endpoint with decapsulation and IPv6 cross-connect)
+        Iface:          tap4
+        Next hop:       2001:0:0:1::5
+        Good traffic:   [0 packets : 0 bytes]
+        Bad traffic:    [0 packets : 0 bytes]
+--------------------
+```
+
+[srv6-ietf]: https://tools.ietf.org/html/rfc8402#section-8.2
+[ipv6-setup]: IPV6.md
+[values.yaml]: ../../k8s/contiv-vpp/values.yaml
+[values-arm64.yaml]: ../../k8s/contiv-vpp/values-arm64.yaml
+[srv6-renderer-communication-from-pod]: ../dev-guide/services/srv6-renderer-communication-from-pod.png "SRv6 service communication originating in pod"
+[srv6-renderer-communication-from-host]: ../dev-guide/services/srv6-renderer-communication-from-host.png "SRv6 service communication originating in host"
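+
+To actually pull the transaction logs referenced in the troubleshooting paragraph above, here is a
+minimal kubectl sketch, assuming the standard Contiv-VPP deployment layout (`contiv-vswitch` pods in the
+`kube-system` namespace); the concrete pod name is deployment-specific and shown here as a placeholder:
+```bash
+# find the vswitch pod running on the node you are debugging
+kubectl get pods -n kube-system -o wide | grep contiv-vswitch
+# search its logs for the SRv6 configuration items (localsids, policies, steerings)
+kubectl logs -n kube-system <contiv-vswitch-pod-name> | grep 'config/vpp/srv6'
+```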