Docs & changelog updates
Signed-off-by: Rastislav Szabo <[email protected]>
rastislavs committed May 16, 2019
1 parent e780102 commit 64ed86b
Showing 5 changed files with 175 additions and 147 deletions.
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,23 @@
# Release v3.1.0 (2019-05-17)

### VPP
- version **v19.01** (latest stable/1901)

### New Features & Enhancements
- update to vpp-agent `v2.1.0`
- [multi-master](docs/setup/MULTI_MASTER.md) and [external ETCD](docs/setup/EXTERNAL_ETCD.md) support
- experimental [SRv6 implementation of k8s services](docs/setup/SRV6.md)
- experimental support for [multiple pod interfaces](docs/operation/CUSTOM_POD_INTERFACES.md),
including memif interfaces
- load-balancing between backends of a service now supports unlimited backend pod count

### Known Issues
- (IPv6 only): service load-balancing in the IPv6 setup is not equal; node-local backend pods are always
  preferred, and a request is never load-balanced to a remote node's pod if a local backend exists
- (IPv6 only): Network Policies are implemented using ip6tables rules in individual pods. Because of
  this, policy programming is a bit slower (compared to policy programming on VPP for IPv4)


# Release v3.0.1 (2019-04-08)

### VPP
141 changes: 2 additions & 139 deletions docs/dev-guide/SERVICES.md
@@ -505,139 +505,9 @@ periodically cleaning up inactive NAT sessions.
![NAT configuration example][nat-configuration-diagram]

#### SRv6 Renderer

The SRv6 Renderer maps `ContivService` instances into the corresponding [SRv6 model][srv6-model]
instances that are then installed into VPP by the Ligato vpp-agent. See the [SRv6 README](../setup/SRV6.md)
for more details on how SRv6 k8s service rendering works.

[layers-diagram]: services/service-plugin-layers.png "Layering of the Service plugin"
[nat-configuration-diagram]: services/nat-configuration.png "NAT configuration example"
@@ -673,10 +543,3 @@ SRv6 - My LocalSID Table:
[event-loop-guide]: EVENT_LOOP.md
[event-handler]: EVENT_LOOP.md#event-handler
[db-resources]: https://github.com/contiv/vpp/tree/master/dbresources

17 changes: 9 additions & 8 deletions docs/operation/CUSTOM_POD_INTERFACES.md
Expand Up @@ -13,7 +13,9 @@ be one of the 3 supported types:
Custom interfaces can be requested using annotations in the pod definition. The name
of the annotation is `contivpp.io/custom-if` and its value can be a comma-separated
list of custom interfaces in the `<custom-interface-name>/<interface-type>/<network>`
format (network part is optional).
format. The `<network>` part is optional; leaving it unspecified means the default pod
network. Apart from the default pod network, only a special `stub` network is
currently supported, which leaves the interface without any IP address or routes pointing to it.
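For illustration, a pod definition along these lines (the pod and container names are made up for this sketch) would request one extra tap interface placed into the `stub` network:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stub-if-example               # hypothetical pod name
  annotations:
    # one extra tap interface named "tap1", placed into the special "stub" network,
    # i.e. left without any IP address or routes pointing to it
    contivpp.io/custom-if: tap1/tap/stub
spec:
  containers:
    - name: app                       # hypothetical container
      image: busybox
      command: ["sleep", "3600"]
```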

An example of a pod definition that connects a pod with a default interface plus one
extra tap interface with the name `tap1` and one extra veth interface with the name `veth1`:
@@ -102,7 +104,7 @@ To request auto-configuration, the pod definition needs to be extended with the
the vpp-agent running inside of the pod uses to identify its config subtree in ETCD.
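As a rough illustration (the exact key layout is an assumption and depends on the vpp-agent version in use), with the microservice label `vnf1` used in the example below, the agent inside the pod would watch for its configuration under an ETCD prefix along these lines:

```
# assumed layout, for illustration only
/vnf-agent/vnf1/config/                          <- config subtree of the agent labelled "vnf1"
/vnf-agent/vnf1/config/vpp/v2/interfaces/memif1  <- e.g. the auto-configured memif interface
```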

The following is a complex example of a pod that runs VPP + the vpp-agent inside it and requests
2 memif interfaces and their auto-configuration:
1 memif interface and its auto-configuration:

```yaml
apiVersion: v1
@@ -127,7 +129,7 @@ kind: Pod
metadata:
name: vnf-pod
annotations:
contivpp.io/custom-if: memif1/memif, memif2/memif
contivpp.io/custom-if: memif1/memif
contivpp.io/microservice-label: vnf1
spec:
containers:
@@ -140,7 +142,7 @@ spec:
value: vnf1
resources:
limits:
contivpp.io/memif: 2
contivpp.io/memif: 1
volumeMounts:
- name: etcd-cfg
mountPath: /etc/etcd
@@ -150,8 +152,8 @@ spec:
name: etcd-cfg
```

Exploring the VPP state of this pod using vppctl would show two auto-configured memif
interfaces connected to the vswitch VPP:
Exploring the VPP state of this pod using vppctl would show an auto-configured memif
interface connected to the vswitch VPP:

```bash
$ kubectl exec -it vnf-pod -- vppctl -s :5002
@@ -163,7 +165,6 @@ $ kubectl exec -it vnf-pod -- vppctl -s :5002
vpp# sh inter
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
local0 0 down 0/0/0/0
memif0/0 1 up 9000/0/0/0
memif0/1 2 up 9000/0/0/0
memif0/0 1 up 9000/0/0/0
vpp#
```
3 changes: 3 additions & 0 deletions docs/setup/IPV6.md
Expand Up @@ -13,6 +13,7 @@

- Service load-balancing in the IPv6 setup is not equal; node-local backend pods are always preferred,
  and a request is never load-balanced to a remote node's pod if a local backend exists.
This is addressed in the [experimental SRv6 implementation of k8s services](SRV6.md).
- Network Policies are implemented using ip6tables rules in individual pods. Because of
this, the policy programming is a bit slower (compared to policy programming on VPP for IPv4).

@@ -94,6 +95,8 @@ you can pass the above-mentioned IPAM settings as Helm options, e.g.:
--set contiv.ipamConfig.vxlanCIDR=2005::/112 --set contiv.ipamConfig.serviceCIDR=2096::/110
```

Note: for the experimental SRv6 implementation of k8s services, see the [SRv6 README](SRV6.md).


## Deployment Verification
After some time, all PODs should enter the running state. All PODs should have an IPv6 address
141 changes: 141 additions & 0 deletions docs/setup/SRV6.md
@@ -0,0 +1,141 @@
# SRv6 (Segment Routing on IPv6) Implementation of K8s Services
[SRv6][srv6-ietf] provides an experimental way of implementing k8s services in IPv6 deployments of Contiv.

Since SRv6 is Segment Routing over IPv6 (see [RFC8402][srv6-ietf]), you must enable
[IPv6][ipv6-setup] in Contiv to be able to use the SRv6 service renderer.

Additionally, you must enable `useSRv6Interconnect` in `manifest.yaml`, together with the `noOverlay` mode:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: contiv-agent-cfg
  namespace: kube-system
data:
  contiv.conf: |-
    useNoOverlay: true
    useSRv6Interconnect: true
    ...
```
The `manifest.yaml` file is generated by Helm, so you can alternatively change the Helm template
input values in [values.yaml][values.yaml] (or [values-arm64.yaml][values-arm64.yaml]) to enable both options:
```
...
contiv:
  useNoOverlay: true
  useSRv6Interconnect: true
...
```
and generate `manifest.yaml` with Helm.
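For example, a Helm (v2-style) invocation along these lines renders the manifest with both options turned on; the chart path, release name and output file are illustrative and may differ in your checkout:

```bash
helm template --name contiv-vpp k8s/contiv-vpp \
  --set contiv.useNoOverlay=true \
  --set contiv.useSRv6Interconnect=true \
  > contiv-vpp-manifest.yaml
```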


The basic idea behind segment routing is to split the packet's route into smaller routes, called segments.
The SRv6 implementation of k8s services uses these segments to route packets to the service backends.
The packet flow looks like this:
![srv6-renderer-communication-from-pod][srv6-renderer-communication-from-pod]

The consumer of the service is pod 1. A packet destined to the service IP address is steered into the SRv6
policy. The policy contains one path (a list of segments) to each backend. Weighted load-balancing takes place
(all paths have equal weight) and one segment list is selected (a concrete policy sketch follows the list below):
- If pod 2 on node 1 is chosen:
  the route consists of only one segment, the segment whose segment ID starts with 6666 (a segment ID is an IPv6 address).
  IPv6 routing forwards the packet to the segment end (LocalSid-DX6), which decapsulates the packet (it was encapsulated
  by the policy; think of it as a tunnel) and cross-connects it to the interface of pod 2 (using IPv6 as the next hop).
- If the host backend on node 1 is chosen:
  basically the same, but IPv6 routing forwards the packet to a different place, where the corresponding LocalSid is located.
- If pod 2 on node 2 is chosen:
  the route consists of 2 segments. The first segment transports the packet to the correct node
  (its segment end is a LocalSid-End), but this segment end does not decapsulate the packet; it routes it
  to the next segment end. The second segment end decapsulates the packet and routes it to the correct
  backend pod, as in the first case.
- If the host backend on node 2 is chosen:
  similar to the previous cases.
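To make the two-segment case concrete, a policy for a service with one node-local and one remote backend could carry two segment lists roughly like this (the SIDs are purely illustrative, loosely following the addressing scheme of the troubleshooting examples below):

```
- key: config/vpp/srv6/v2/policy/5555::d766
  val: { bsid:"5555::d766" srh_encapsulation:true
         segment_lists:<weight:1 segments:"6666:0:0:1::5" >                            # local backend: single DX6 segment
         segment_lists:<weight:1 segments:"7766:f00d::2" segments:"6666:0:0:2::5" > }  # remote backend: End SID on node 2, then DX6 SID of the backend
```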

A special case for an SRv6 service is when the service is accessed from the host:
![srv6-renderer-communication-from-host][srv6-renderer-communication-from-host]

The load-balancing is not done using SRv6, but in the k8s proxy, so we basically fall back
to IPv6 routing to the chosen backend. In the case of a local backend, IPv6 routing handles it without
using any SRv6 components. In the case of a remote backend, SRv6 is used to transport the packet to the correct
node, but from there pure IPv6 routing delivers it to the correct backend.

In all of the SRv6 service cases, the path of the packet returning from the backend looks basically like in the host
special case with a remote backend: SRv6 handles only the node-to-node communication and
the rest is handled by pure IPv6 routing.

In case of problems, you can check the vswitch logs for the steering, policy and localsid settings in the configuration transactions:
```
- key: config/vpp/srv6/v2/localsid/6666:0:0:1::5
val: { sid:"6666:0:0:1::5" installation_vrf_id:1 end_function_DX6:<outgoing_interface:"vpp-tap-d6087568be9f59aba028955c19f5684055a69d926ed89f720fe187a" next_hop:"2001:0:0:1::5" > }
- key: config/vpp/srv6/v2/policy/5555::d765
val: { bsid:"5555::d765" srh_encapsulation:true segment_lists:<weight:1 segments:"6666:0:0:1::5" > }
- key: config/vpp/srv6/v2/steering/forK8sService-default-myservice
val: { name:"forK8sService-default-myservice" policy_bsid:"5555::d765" l3_traffic:<installation_vrf_id:1 prefix_address:"2096::d765/128" > }
```
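The vswitch logs themselves can be pulled with kubectl, e.g. along these lines (the pod name is cluster-specific and the container name is an assumption):

```bash
kubectl logs -n kube-system <contiv-vswitch-pod-name> -c contiv-vswitch | grep -iE 'srv6|localsid|policy|steering'
```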
Alternatively, you can look directly into VPP using the CLI and list the installed SRv6 components:
```
vpp# sh sr steering-policies
SR steering policies:
Traffic SR policy BSID
L3 2096::a/128 5555::a
L3 2096::5ce9/128 5555::5ce9
L3 2096::1/128 5555::1
L3 2096::6457/128 5555::6457
L3 2096::d765/128 5555::d765
```
```
vpp# sh sr policies
SR policies:
[0].- BSID: 5555::a
Behavior: Encapsulation
Type: Default
FIB table: 0
Segment Lists:
[0].- < 6666:0:0:1::2 > weight: 1
[1].- < 6666:0:0:1::3 > weight: 1
-----------
[1].- BSID: 5555::5ce9
Behavior: Encapsulation
Type: Default
FIB table: 0
Segment Lists:
[2].- < 6655::1 > weight: 1
-----------
...
-----------
[4].- BSID: 5555::d765
Behavior: Encapsulation
Type: Default
FIB table: 0
Segment Lists:
[5].- < 6666:0:0:1::5 > weight: 1
[6].- < 6666:0:0:1::6 > weight: 1
-----------
...
```
```
vpp# sh sr localsids
SRv6 - My LocalSID Table:
=========================
Address: 7766:f00d::1
Behavior: End
Good traffic: [0 packets : 0 bytes]
Bad traffic: [0 packets : 0 bytes]
--------------------
...
--------------------
Address: 6666:0:0:1::5
Behavior: DX6 (Endpoint with decapsulation and IPv6 cross-connect)
Iface: tap4
Next hop: 2001:0:0:1::5
Good traffic: [0 packets : 0 bytes]
Bad traffic: [0 packets : 0 bytes]
--------------------
```

[srv6-ietf]: https://tools.ietf.org/html/rfc8402#section-8.2
[ipv6-setup]: IPV6.md
[values.yaml]: ../../k8s/contiv-vpp/values.yaml
[values-arm64.yaml]: https://github.com/contiv/vpp/blob/master/k8s/contiv-vpp/values-arm64.yaml
[srv6-renderer-communication-from-pod]: ../dev-guide/services/srv6-renderer-communication-from-pod.png "SRv6 service communication originating in pod"
[srv6-renderer-communication-from-host]: ../dev-guide/services/srv6-renderer-communication-from-host.png "SRv6 service communication originating in host"
