From cf640ef1f8be14b99bedf6db7e64687ff059160f Mon Sep 17 00:00:00 2001 From: grampelberg Date: Tue, 30 Apr 2019 11:43:10 -0700 Subject: [PATCH 1/5] Move APIs into separate documents --- README.md | 30 +- specification.md => traffic-metrics.md | 394 +------------------------ traffic-policy.md | 306 +++++++++++++++++++ traffic-split.md | 330 +++++++++++++++++++++ 4 files changed, 668 insertions(+), 392 deletions(-) rename specification.md => traffic-metrics.md (56%) create mode 100644 traffic-policy.md create mode 100644 traffic-split.md diff --git a/README.md b/README.md index 72aed0f..0bb1d40 100644 --- a/README.md +++ b/README.md @@ -6,6 +6,23 @@ of providers. This allows for both standardization for end-users and innovation by providers of Service Mesh Technology. It enables flexibility and interoperability. +This specification consists of three APIs: + +* [Traffic Policy](traffic-policy.md) - configure access to specific pods and + routes based on the identity of a client for locking down applications to only + allowed users and services. +* [Traffic Split](traffic-split.md) - incrementally direct percentages of + traffic between various services to assist in building out canary rollouts. +* [Traffic Metrics](traffic-metrics.md) - expose common traffic metrics for use + by tools such as dashboards and autoscalers. + +See the individual documents for the details. Each document outlines: + +* Specification +* Possible use cases +* Example implementations +* Tradeoffs + ### Technical Overview The SMI is specified as a collection of Kubernetes Custom Resource Definitions @@ -38,16 +55,3 @@ useful subset. If SMI providers want to add provider specific extensions and APIs beyond the SMI spec, they are welcome to do so We expect that, over time, as more functionality becomes commonly accepted as part of what it means to be a Service Mesh, those definitions will migrate into the SMI specification. - -### Specification - -The SMI specification outlines three basic resource types: - -* MutualTLS - a resource for managing and configuring encryption between services -* TrafficSplit - a resource for splitting traffic between different backends. - The primary use case is executing canary rollouts of new applications - versions. -* TrafficMetrics - a resource that normalizes the metrics surfaced by - implementations. - -The details of the APIs can be founded in [Specification.md](specification.md) diff --git a/specification.md b/traffic-metrics.md similarity index 56% rename from specification.md rename to traffic-metrics.md index 43fc188..7d09a20 100644 --- a/specification.md +++ b/traffic-metrics.md @@ -1,368 +1,4 @@ -## SMI Specification - -### Overview - -The SMI specification consists of three APIs: - -* Mutual TLS - Used for implementing encryption between Pods -* Canary - Used for traffic shapping between versions of a service -* Sidecar - Used for injecting sidecars (eg Monitoring) onto applications. - -### Mutual TLS - -This resource is used to inject a sidecar that performs Mutual TLS. - -This results in every Pod that matches the selector driving their traffic -through TLS with the subject names specified in the spec. 
- -```yaml -apiVersion: v1beta1 -kind: TLSConfig -name: my-tls-config -spec: - # Subject names in the Certificate - subjectNames: - - foo - - bar - # Selector to identify Pods that have this TLS - selector: - matchLabels: - role: frontend - stage: production -``` - -### Traffic Split - -This resource allows users to incrementally direct percentages of traffic -between various services. It will be used by *clients* such as ingress -controllers or service mesh sidecars to split the outgoing traffic to different -destinations. - -Integrations can use this resource to orchestrate canary releases for new -versions of software. The resource itself is not a complete solution as there -must be some kind of controller managing the traffic shifting over time. -Weighting traffic between various services is also more generally useful than -driving canary releases. - -The resource is associated with a *root* service. This is referenced via -`spec.service`. The `spec.service` name is the FQDN that applications will use -to communicate. For any *clients* that are not forwarding their traffic through -a proxy that implements this proposal, the standard Kubernetes service -configuration would continue to operate. - -Implementations will weight outgoing traffic between the services referenced by -`spec.backends`. Each backend is a Kubernetes service that potentially has a -different selector and type. - -#### Specification - -```yaml -apiVersion: v1beta1 -kind: TrafficSplit -metadata: - name: my-weights -spec: - # The root service that clients use to connect to the destination application. - service: numbers - # Services inside the namespace with their own selectors, endpoints and configuration. - backends: - - service: one - # Identical to resources, 1 = 1000m - weight: 10m - - service: two - weight: 100m - - service: three - weight: 1500m -``` - -##### Ports - -Kubernetes services can have multiple ports. This specification does *not* -include ports. Services define these themselves and the duplication becomes -extra overhead for users with the potential of misconfiguration. There are some -edge cases to be aware of. - -There *must* be a match between a port on the *root* service and a port on every -destination backend service. If they do not match, the backend service is not -included and will not receive traffic. - -Mapping between `port` and `targetPort` occurs on each backend service -individually. This allows for new versions of applications to change the ports -they listen on and matches the existing implementation of Services. - -It is recommended that implementations issue an event when the configuration is -incorrect. This mis-configuration can be detected as part of an admission -controller. - -```yaml -kind: Service -apiVersion: v1 -metadata: - name: birds -spec: - selector: - app: birds - ports: - - name: grpc - port: 8080 - - name: rest - port: 9090 ---- -kind: Service -apiVersion: v1 -metadata: - name: blue-birds -spec: - selector: - app: birds - color: blue - ports: - - name: grpc - port: 8080 - - name: rest - port: 9090 ---- -kind: Service -apiVersion: v1 -metadata: - name: green-birds -spec: - selector: - app: birds - color: green - ports: - - name: grpc - port: 8080 - targetPort: 8081 - - name: rest - port: 9090 -``` - -This is a valid configuration. Traffic destined for `birds:8080` will select -between 8080 on either `blue-birds` or `green-birds`. When the eventual -destination of traffic is destined for `green-birds`, the `targetPort` is used -and goes to 8081 on the destination pod. 
- -Note: traffic destined for `birds:9090` follows the same guidelines and is in -this example to highlight how multiple ports can work. - -```yaml -kind: Service -apiVersion: v1 -metadata: - name: birds -spec: - selector: - app: birds - ports: - - name: grpc - port: 8080 ---- -kind: Service -apiVersion: v1 -metadata: - name: blue-birds -spec: - selector: - app: birds - color: blue - ports: - - name: grpc - port: 1024 ---- -kind: Service -apiVersion: v1 -metadata: - name: green-birds -spec: - selector: - app: birds - color: green - ports: - - name: grpc - port: 8080 -``` - -This is an invalid configuration. Traffic destined for `birds:8080` will only -ever be forwarded to port 8080 on `green-birds`. As the port is 1024 for -`blue-birds`, there is no way for an implementation to know where the traffic is -eventually destined on a port basis. When configuration such as this is -observed, implementations are recommended to issue an event that notifies users -traffic will not be split for `blue-birds` and visible with -`kubectl describe trafficsplit`. - -#### Workflow - -An example workflow, given existing: - -* Deployment named `foobar-v1`, with labels: `app: foobar` and `version: v1`. -* Service named `foobar`, with a selector of `app: foobar`. -* Service named `foobar-v1`, with selectors: `app:foobar` and `version: v1`. -* Clients use the FQDN of `foobar` to communicate. - -For updating an application to a new version: - -* Create a new deployment named `foobar-v2`, with labels: `app: foobar`, - `version: v2`. -* Create a new service named `foobar-v2`, with a selector of: `app: foobar`, - `version: v2`. -* Create a new traffic split named `foobar-rollout`, it will look like: - - ```yaml - apiVersion: v1beta1 - kind: TrafficSplit - metadata: - name: foobar-rollout - spec: - service: foobar - backends: - - service: foobar-v1 - weight: 1 - - service: foobar-v2 - weight: 0m - ``` - - At this point, there is no traffic being sent to `foobar-v2`. - -* Once the deployment is healthy, spot check by sending manual requests to the - `foobar-v2` service. This could be achieved via ingress, port forwarding or - spinning up integration tests from separate pods. -* When ready, increase the weight of `foobar-v2`: - - ```yaml - apiVersion: v1beta1 - kind: TrafficSplit - metadata: - name: foobar-rollout - spec: - service: foobar - backends: - - service: foobar-v1 - weight: 1 - - service: foobar-v2 - weight: 500m - ``` - - At this point, approximately 50% of traffic will be sent to `foobar-v2`. - Note that this is on a per-client basis and not global across all requests - destined for these backends. - -* Verify health metrics and become comfortable with the new version. -* Send all traffic to the new version: - - ```yaml - apiVersion: v1beta1 - kind: TrafficSplit - metadata: - name: foobar-rollout - spec: - service: foobar - backends: - - service: foobar-v2 - weight: 1 - ``` - -* Delete the old `foobar-v1` deployment. -* Delete the old `foobar-v1` service. -* Delete `foobar-rollout` as it is no longer needed. - -#### Tradeoffs - -* Weights vs percentages - the primary reason for weights is in failure - situations. For example, if 50% of traffic is being sent to a service that has - no healthy endpoints - what happens? Weights are simpler to reason about when - the underlying applications are changing. - -* Selectors vs services - it would be possible to have selectors for the - backends at the TrafficSplit level instead of referential services. The - referential services are a little bit more flexible. 
Users will have a - convenient way to manually test their new versions and implementations will - have the opportunity to rely on Kuberentes native concepts instead of - implementing them independently such as endpoints. - -* TrafficSplits are not hierarchical - it would be possible to have - `spec.backends[0].service` refer to a new split. The implementation would - then be required to resolve this link and reason about weights. By - making splits non-hierarchical, implementations become simpler and loose the - possibility of having circular references. It is still possible to build an - architecture that has nested split definitions, users would need to have a - secondary proxy to manage that. - -* TrafficSplits cannot be self-referential - consider the following definition: - - ```yaml - apiVersion: v1beta1 - kind: TrafficSplit - metadata: - name: my-split - spec: - service: foobar - backends: - - service: foobar-next - weight: 100m - - service: foobar - weight: 900m - ``` - - In this example, 90% of traffic would be sent to the `foobar` service. As - this is a superset that contains multiple versions of an application, it - becomes challenging for users to reason about where traffic is going. - -* Port definitions - this spec uses TrafficSplit to reference services. Services - have already defined ports, mappings via targetPorts and selectors for the - destination pods. For this reason, ports are delegated to Services. There are - some edge cases that arise from this decision. See [ports](#ports) for a more - in-depth discussion. - -#### Open Questions - -* How should this interact with namespaces? One of the downsides to the current - workflow is that deployment names end up changing and require a tool such as - helm or kustomize. By allowing traffic to be split *between* namespaces, it - would be possible to keep names identical and simply clean up namespaces as - new versions come out. - -#### Example implementation - -This example implementation is included to illustrate how the `Canary` object -operates. It is not intended to prescribe a particular implementation. - -Assume a `Canary` object that looks like: - -```yaml - apiVersion: v1beta1 - kind: Canary - metadata: - name: my-canary - spec: - service: web - backends: - - service: web-next - weight: 100m - - service: web-current - weight: 900m -``` - -When a new `Canary` object is created, it instantiates the following Kubernetes -objects: - - * Service who's name is the same as `spec.service` in the Canary (`web`) - * A Deployment running `nginx` which has labels that match the Service - -The nginx layer serves as an HTTP(s) layer which implements the canary. In -particular the nginx config looks like: - -```plain -upstream backend { - server web-next weight=1; - server web-current weight=9; -} -``` - -Thus the new `web` service when accessed from a client in Kubernetes will send -10% of it's traffic to `web-next` and 90% of it's traffic to `web`. - -### Traffic Metrics +## Traffic Metrics This resource provides a common integration point for tools that can benefit by consuming metrics related to HTTP traffic. It follows the pattern of @@ -409,7 +45,7 @@ are two main ways to query the API for metrics: * A sub-resource allows querying for all the edges associated with a specific resource. -#### Specification +## Specification The core resource is `TrafficMetrics`. It references a `resource`, has an `edge` and surfaces latency percentiles and request volume. 
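The full example object falls between this diff's hunks. A rough sketch of the shape described here, with illustrative metric names and values (the pod names are taken from the Edge discussion below, and the API group from the `APIService` described later):

```yaml
apiVersion: traffic.metrics.k8s.io/v1alpha1  # assumed group/version, matching the APIService below
kind: TrafficMetrics
# resource: where the metrics were observed
resource:
  name: foo-775b9cbd88-ntxsl
  namespace: default
  kind: Pod
# edge: the other side of the observed traffic
edge:
  direction: to
  resource:
    name: baz-577db7d977-lsk2q
    namespace: default
    kind: Pod
window: 30s  # illustrative observation window
# metric names and values below are illustrative
metrics:
- name: p99_response_latency
  unit: ms
  value: 10
- name: p50_response_latency
  unit: ms
  value: 2
- name: success_count
  value: 100
- name: failure_count
  value: 0
```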
@@ -446,7 +82,7 @@ metrics: value: 100 ``` -##### Edge +### Edge In this example, the metrics are observed *at* the `foo-775b9cbd88-ntxsl` pod and represent all the traffic *to* the `baz-577db7d977-lsk2q` pod. This @@ -507,7 +143,7 @@ Note: there is no requirement that metrics are actually being collected for resources selected by edges. As metrics are always observed *at* `resource`, it is possible to construct these entirely from the resource. -##### TrafficMetricsList +### TrafficMetricsList There are three different ways to get a TrafficMetricsList: @@ -561,7 +197,7 @@ There are three different ways to get a TrafficMetricsList: Note: this specific list is a sub-resource of `foo-775b9cbd88-ntxsl` from an API perspective. -##### Kubernetes API +### Kubernetes API The `traffic.metrics.k8s.io` API will be exposed via a `APIService`: @@ -623,9 +259,9 @@ The full list of resources for this list would be: For resource types that contain `pods`, such as `namespaces` and `deployments`, the metrics are aggregates of the `pods` contained within. -#### Use Cases +## Use Cases -##### Top +### Top Like `kubectl top`, a plugin could be written such as `kubectl traffic top` that shows the traffic metrics for resources. @@ -642,7 +278,7 @@ Implementation of this command would be a simple conversion of the API's response of a `TrafficMetricsList` into a table for display on the command line or a dashboard. -##### Canary +### Canary In combination with the Canary specification, a controller can: @@ -653,7 +289,7 @@ In combination with the Canary specification, a controller can: * Update the canary definition to route more traffic. * Loop until all traffic is on `v2`. -##### Topologies +### Topologies Following the concept of `kubectl traffic top`, there could also be a `kubectl traffic topology` command. This could provide ascii graphs of the @@ -674,7 +310,7 @@ list of all deployments and another to get the edges for each of those deployments. While this example shows command line usage, it should be possible dashboards such as Kiali entirely on top of this API. -#### RBAC +## RBAC * View metrics for all resources and edges. @@ -704,7 +340,7 @@ possible dashboards such as Kiali entirely on top of this API. verbs: ["*"] ``` -#### Example implementation +## Example implementation This example implementation is included to illustrate how `TrafficMetrics` are surfaced. It does *not* prescribe a particular implementation. This example also @@ -745,7 +381,7 @@ Shim`. 1. On receiving the responses from Prometheus, the shim converts the values into a `TrafficMesh` object for consumption by the end user. -#### Envoy Mesh +## Envoy Mesh ![Envoy Mesh](traffic-metrics-sample/mesh.png) @@ -754,7 +390,7 @@ see that piece of the architecture as well. Prometheus has a scrape config that targets pods with an Envoy sidecar and periodically requests `/stats?format=prometheus`. -#### Tradeoffs +## Tradeoffs * APIService - it would be possible to simply be proscriptive of metrics and label names for Prometheus, configure many of these responses as recording @@ -787,7 +423,7 @@ targets pods with an Envoy sidecar and periodically requests counts. As these are trivial to calculate from success/failure counts and cover up some important data, counts are being used. -#### Out of scope +## Out of scope * Edge aggregation - it would be valuable to get a resource such as a pod and see the edges for other aggregates such as deployments. 
For now, the queries @@ -801,7 +437,7 @@ targets pods with an Envoy sidecar and periodically requests immediate requirements: how is the canary rollout going? what is my topology? what is happening to my application right now? -#### Open Questions +## Open Questions * stddev - the best integration for canary deployments or things like HPA would be surfacing the stddev of metrics. Then, monitoring could be +/- outside of diff --git a/traffic-policy.md b/traffic-policy.md new file mode 100644 index 0000000..cf541e8 --- /dev/null +++ b/traffic-policy.md @@ -0,0 +1,306 @@ +## Traffic Policy + +This resource allows users to define the authorization policy between +applications and clients. + +There are two high level concepts, `TrafficRole` and `TrafficRoleBinding`. A +`TrafficRole` describes a resource that will be receiving requests and what +traffic can be issued to that resource. As an example, a `TrafficRole` could be +configured to reference a pod and only allow `GET` requests to a specific +endpoint on that service. + +The `TrafficRoleBinding` associates a client's identity with a `TrafficRole`. +This client can then issue requests to the configured resource using the allowed +paths and methods. The design of Kubernetes RBAC has been heavily borrowed from. + +TrafficRoles are designed to work in an additive fashion and all traffic is +denied by default. See [tradeoffs](#policy-tradeoffs) for more details on that +decision. + +## Specification + +### TrafficRole + +A `TrafficRole` contains a list of rules that have: resources, methods and +paths. These work in concert to define what an authorized client can request. + +Resources reference Kubernetes objects. These can be anything such as +deployments, services and pods. The resource concretely defines the resource +that will be serving the traffic itself either as a group (ex. deployment) or a +pod itself. + +The full list of resources that can be referenced is: + +* pods +* replicationscontrollers +* services +* daemonsets +* deployments +* replicasets +* statefulets +* jobs + +Methods describe the HTTP method that is allowed for the referenced resource. +This is limited to existing HTTP methods. It is possible to use `*` to allow any +value instead of enumerating them all. + +Paths describe the HTTP path that a referenced resource serves. The path is a +regex and is anchored (`^`) to the start of the URI. See [use cases] +(#policy-use-cases) for some examples about how these can be generated +automatically without user interaction. + +Bringing everything together, an example specification: + +```yaml +kind: TrafficRole +apiVersion: v1beta1 +metadata: + name: path-specific + namespace: default +rules: +- resources: + - name: foo + kind: Deployment + methods: + - GET + paths: + - '/authors/\d+' +``` + +This specification can be used to grant access to the `foo` deployment for `GET` +requests to paths like `/authors/1234`. An accompanying `TrafficRoleBinding` +would allow authorized clients to request this path. + +Note that `namespace` is *not* part of the resource references. The resources +must be within the same namespace that a `TrafficRole` resides in. See +[tradeoffs](#policy-tradeoffs) for a discussion on why this is. + +While it is convenient to be extremely specific, most policies will be a little +bit more general. The following `TrafficRole` can be used to grant access to the +`foo` service and by association the endpoints that service selects. Any method +and path would be accessible. 
+ +```yaml +kind: TrafficRole +apiVersion: v1beta1 +metadata: + name: service-specific + namespace: default +rules: +- resources: + - name: foo + kind: Service + methods: ["*"] + paths: ["*"] +``` + +As roles are additive, policies that can allow all traffic are helpful for +testing environments or super users. To grant access to anything in a namespace, +`*` can be used in combination with `kind`. The other keys are optional in this +example. + +```yaml +kind: TrafficRole +apiVersion: v1beta1 +metadata: + name: service-specific + namespace: default +rules: +- resources: + - name: "*" + kind: Pod + methods: ["*"] + paths: ["*"] +``` + +### TrafficRoleBinding + +A `TrafficRoleBinding` grants the permissions defined in a `TrafficRole` to a +specified identity. It holds a list of subjects (service accounts, deployments) +and a reference to the role being granted. + +The following binding grants the `path-specific` role to pods contained within +the deployment `bar` in the `default` namespace. Assuming the `path-specific` +definition from above, these pods will be authorized to issue `GET` requests to +the `foo` deployment with paths matching the `/authors/\d+` regex. + +```yaml +kind: TrafficRoleBinding +apiVersion: v1beta1 +metadata: + name: account-specific + namespace: default +subjects: +- kind: Deployment + name: bar + namespace: default +roleRef: + kind: TrafficRole + name: path-specific +``` + +Note: this specification defines that policy is *always* enforced on the +*server* side of a connection. It is up to implementations to decide whether +they would also like to enforce this policy on the *client* side of the +connection as well. + +Bindings reference subjects that can be defined based on identity. This allows +for references to be more than just groups of pods. In this example, a binding +is against a service account. + +```yaml +kind: TrafficRoleBinding +apiVersion: v1beta1 +metadata: + name: account-specific + namespace: default +subjects: +- kind: ServiceAccount + name: foo-account + namespace: default +roleRef: + kind: TrafficRole + name: path-specific +``` + +Implementations and cluster administrators can provide some default policy as +[ClusterTrafficRole](#clustertrafficrole). These can be bound to specific +namespaces and will only be valid for that specific namespace. In this example, +a `ClusterTrafficRole` that grants access to `/health` is bound only inside the +`default` namespace for the pod `foobar`. + +```yaml +kind: TrafficRoleBinding +apiVersion: v1beta1 +metadata: + name: health-check + namespace: default +subjects: +- kind: Pod + name: foobar +roleRef: + kind: ClusterTrafficRole + name: health-check +``` + +### ClusterTrafficRole + +Roles can be defined cluster-wide with `ClusterTrafficRole`. These roles can +grant access to applications in specific namespaces or across all namespaces and +are particularly useful for providing sane default policy. + +The following `ClusterTrafficRole` can be used to grant access to `/health` +endpoints. + +```yaml +kind: ClusterTrafficRole +apiVersion: v1beta1 +metadata: + name: health-check +rules: +- resources: + - name: "*" + kind: Pod + methods: + - GET + paths: + - '/health' +``` + +A `ClusterTrafficRole` can also be used as a default allow policy. 
+ +```yaml +kind: ClusterTrafficRole +apiVersion: v1beta1 +metadata: + name: default-allow +rules: +- resources: + - name: "*" + kind: Pod + methods: ["*"] + paths: ["*"] +``` + +### ClusterTrafficRoleBinding + +The following `ClusterTrafficRoleBinding` will grant access every node in the +cluster to request the `/health` endpoint served by pods in any namespace. + +```yaml +kind: ClusterTrafficRoleBinding +apiVersion: v1beta1 +metadata: + name: default-allow +subjects: +- name: * + kind: Node +roleRef: + kind: ClusterTrafficRole + name: health-check +``` + +To grant the pod `root` running in any namespace access to every pod serving any +endpoint in any namespace, it would be possible to do: + +```yaml +kind: ClusterTrafficRoleBinding +apiVersion: v1beta1 +metadata: + name: default-allow +subjects: +- name: root + kind: Pod +roleRef: + kind: ClusterTrafficRole + name: default-allow +``` + +## Workflow + +## Use Cases {#policy-use-cases} + +### OpenAPI/Swagger + +Many REST applications provide OpenAPI definitions for their endpoints, +operations and authentication methods. Consuming these definitions and +generating policy automatically allows developers and organizations keep the +definition of their policy in a single location. A given specification can be +used to create `TrafficRole` and `TrafficRoleBinding` objects as part of a CI/CD +workflow. + +TODO - specification mapping + +## Tradeoffs {#policy-tradeoffs} + +* Additive policy - policy that denies instead of only allows is valuable + sometimes. Unfortunately, it makes it extremely difficult to reason about what + is allowed or denied in a configuration. +* It would be possible to support `kind: Namespace`. This ends up having some + special casing as to references and doesn't cover all cases. Instead, there + has been a conscious decision to allow `*` kinds instead of supporting + non-namespaced resources (for example, namespaces). This solves the same user + goal and is slightly more flexible. +* Namespaces are explicitly left out of the resource reference (`namespace: + foobar`). This is because the reference could be used to point to a resource + in a different namespace that a user might not have permissions to access or + define.It also ends up being somewhat redundant as `namespace: foobar` is + defined by the location of the resource itself. + +## Out of scope + +* Egress policy - while this specification allows for the possibility of + defining egress configuration, this functionality is currently out of scope. +* Non-HTTP traffic - this specification will need to be expanded to support + traffic such as Kafka or MySQL. + +## Open Questions + +* Why include namespace in the reference *and* role? Is there any reason a user + would create a role in one namespace that references another? +* Namespaces should *not* be possible to define on `TrafficRole` for resource + references but are required for `ClusterTrafficRole`. Is it okay only allow + this key in `ClusterTrafficRole` references? +* I'm not sure `kind: pod` and `name: *` is the best solution for generic allow + policies. Is there a better way to do it? `kind: *` feels wrong as well. + diff --git a/traffic-split.md b/traffic-split.md new file mode 100644 index 0000000..0e57c67 --- /dev/null +++ b/traffic-split.md @@ -0,0 +1,330 @@ +## Traffic Split + +This resource allows users to incrementally direct percentages of traffic +between various services. 
It will be used by *clients* such as ingress +controllers or service mesh sidecars to split the outgoing traffic to different +destinations. + +Integrations can use this resource to orchestrate canary releases for new +versions of software. The resource itself is not a complete solution as there +must be some kind of controller managing the traffic shifting over time. +Weighting traffic between various services is also more generally useful than +driving canary releases. + +The resource is associated with a *root* service. This is referenced via +`spec.service`. The `spec.service` name is the FQDN that applications will use +to communicate. For any *clients* that are not forwarding their traffic through +a proxy that implements this proposal, the standard Kubernetes service +configuration would continue to operate. + +Implementations will weight outgoing traffic between the services referenced by +`spec.backends`. Each backend is a Kubernetes service that potentially has a +different selector and type. + +## Specification + +```yaml +apiVersion: v1beta1 +kind: TrafficSplit +metadata: + name: my-weights +spec: + # The root service that clients use to connect to the destination application. + service: numbers + # Services inside the namespace with their own selectors, endpoints and configuration. + backends: + - service: one + # Identical to resources, 1 = 1000m + weight: 10m + - service: two + weight: 100m + - service: three + weight: 1500m +``` + +### Ports + +Kubernetes services can have multiple ports. This specification does *not* +include ports. Services define these themselves and the duplication becomes +extra overhead for users with the potential of misconfiguration. There are some +edge cases to be aware of. + +There *must* be a match between a port on the *root* service and a port on every +destination backend service. If they do not match, the backend service is not +included and will not receive traffic. + +Mapping between `port` and `targetPort` occurs on each backend service +individually. This allows for new versions of applications to change the ports +they listen on and matches the existing implementation of Services. + +It is recommended that implementations issue an event when the configuration is +incorrect. This mis-configuration can be detected as part of an admission +controller. + +```yaml +kind: Service +apiVersion: v1 +metadata: + name: birds +spec: + selector: + app: birds + ports: + - name: grpc + port: 8080 + - name: rest + port: 9090 +--- +kind: Service +apiVersion: v1 +metadata: + name: blue-birds +spec: + selector: + app: birds + color: blue + ports: + - name: grpc + port: 8080 + - name: rest + port: 9090 +--- +kind: Service +apiVersion: v1 +metadata: + name: green-birds +spec: + selector: + app: birds + color: green + ports: + - name: grpc + port: 8080 + targetPort: 8081 + - name: rest + port: 9090 +``` + +This is a valid configuration. Traffic destined for `birds:8080` will select +between 8080 on either `blue-birds` or `green-birds`. When the eventual +destination of traffic is destined for `green-birds`, the `targetPort` is used +and goes to 8081 on the destination pod. + +Note: traffic destined for `birds:9090` follows the same guidelines and is in +this example to highlight how multiple ports can work. 
+ +```yaml +kind: Service +apiVersion: v1 +metadata: + name: birds +spec: + selector: + app: birds + ports: + - name: grpc + port: 8080 +--- +kind: Service +apiVersion: v1 +metadata: + name: blue-birds +spec: + selector: + app: birds + color: blue + ports: + - name: grpc + port: 1024 +--- +kind: Service +apiVersion: v1 +metadata: + name: green-birds +spec: + selector: + app: birds + color: green + ports: + - name: grpc + port: 8080 +``` + +This is an invalid configuration. Traffic destined for `birds:8080` will only +ever be forwarded to port 8080 on `green-birds`. As the port is 1024 for +`blue-birds`, there is no way for an implementation to know where the traffic is +eventually destined on a port basis. When configuration such as this is +observed, implementations are recommended to issue an event that notifies users +traffic will not be split for `blue-birds` and visible with +`kubectl describe trafficsplit`. + +## Workflow + +An example workflow, given existing: + +* Deployment named `foobar-v1`, with labels: `app: foobar` and `version: v1`. +* Service named `foobar`, with a selector of `app: foobar`. +* Service named `foobar-v1`, with selectors: `app:foobar` and `version: v1`. +* Clients use the FQDN of `foobar` to communicate. + +For updating an application to a new version: + +* Create a new deployment named `foobar-v2`, with labels: `app: foobar`, + `version: v2`. +* Create a new service named `foobar-v2`, with a selector of: `app: foobar`, + `version: v2`. +* Create a new traffic split named `foobar-rollout`, it will look like: + + ```yaml + apiVersion: v1beta1 + kind: TrafficSplit + metadata: + name: foobar-rollout + spec: + service: foobar + backends: + - service: foobar-v1 + weight: 1 + - service: foobar-v2 + weight: 0m + ``` + + At this point, there is no traffic being sent to `foobar-v2`. + +* Once the deployment is healthy, spot check by sending manual requests to the + `foobar-v2` service. This could be achieved via ingress, port forwarding or + spinning up integration tests from separate pods. +* When ready, increase the weight of `foobar-v2`: + + ```yaml + apiVersion: v1beta1 + kind: TrafficSplit + metadata: + name: foobar-rollout + spec: + service: foobar + backends: + - service: foobar-v1 + weight: 1 + - service: foobar-v2 + weight: 500m + ``` + + At this point, approximately 50% of traffic will be sent to `foobar-v2`. + Note that this is on a per-client basis and not global across all requests + destined for these backends. + +* Verify health metrics and become comfortable with the new version. +* Send all traffic to the new version: + + ```yaml + apiVersion: v1beta1 + kind: TrafficSplit + metadata: + name: foobar-rollout + spec: + service: foobar + backends: + - service: foobar-v2 + weight: 1 + ``` + +* Delete the old `foobar-v1` deployment. +* Delete the old `foobar-v1` service. +* Delete `foobar-rollout` as it is no longer needed. + +## Tradeoffs + +* Weights vs percentages - the primary reason for weights is in failure + situations. For example, if 50% of traffic is being sent to a service that has + no healthy endpoints - what happens? Weights are simpler to reason about when + the underlying applications are changing. + +* Selectors vs services - it would be possible to have selectors for the + backends at the TrafficSplit level instead of referential services. The + referential services are a little bit more flexible. 
Users will have a
+  convenient way to manually test their new versions and implementations will
+  have the opportunity to rely on Kubernetes native concepts instead of
+  implementing them independently, such as endpoints.
+
+* TrafficSplits are not hierarchical - it would be possible to have
+  `spec.backends[0].service` refer to a new split. The implementation would
+  then be required to resolve this link and reason about weights. By
+  making splits non-hierarchical, implementations become simpler and lose the
+  possibility of having circular references. It is still possible to build an
+  architecture that has nested split definitions; users would need to have a
+  secondary proxy to manage that.
+
+* TrafficSplits cannot be self-referential - consider the following definition:
+
+  ```yaml
+  apiVersion: v1beta1
+  kind: TrafficSplit
+  metadata:
+    name: my-split
+  spec:
+    service: foobar
+    backends:
+    - service: foobar-next
+      weight: 100m
+    - service: foobar
+      weight: 900m
+  ```
+
+  In this example, 90% of traffic would be sent to the `foobar` service. As
+  this is a superset that contains multiple versions of an application, it
+  becomes challenging for users to reason about where traffic is going.
+
+* Port definitions - this spec uses TrafficSplit to reference services. Services
+  have already defined ports, mappings via targetPorts and selectors for the
+  destination pods. For this reason, ports are delegated to Services. There are
+  some edge cases that arise from this decision. See [ports](#ports) for a more
+  in-depth discussion.
+
+## Open Questions
+
+* How should this interact with namespaces? One of the downsides to the current
+  workflow is that deployment names end up changing and require a tool such as
+  helm or kustomize. By allowing traffic to be split *between* namespaces, it
+  would be possible to keep names identical and simply clean up namespaces as
+  new versions come out.
+
+## Example implementation
+
+This example implementation is included to illustrate how the `Canary` object
+operates. It is not intended to prescribe a particular implementation.
+
+Assume a `Canary` object that looks like:
+
+```yaml
+  apiVersion: v1beta1
+  kind: Canary
+  metadata:
+    name: my-canary
+  spec:
+    service: web
+    backends:
+    - service: web-next
+      weight: 100m
+    - service: web-current
+      weight: 900m
+```
+
+When a new `Canary` object is created, it instantiates the following Kubernetes
+objects:
+
+  * Service whose name is the same as `spec.service` in the Canary (`web`)
+  * A Deployment running `nginx` which has labels that match the Service
+
+The nginx layer serves as an HTTP(S) layer which implements the canary. In
+particular the nginx config looks like:
+
+```plain
+upstream backend {
+    server web-next weight=1;
+    server web-current weight=9;
+}
+```
+
+Thus the new `web` service, when accessed from a client in Kubernetes, will send
+10% of its traffic to `web-next` and 90% of its traffic to `web-current`.
From 7bcb33c5e0c9f3be78da5f879315e88cae67c286 Mon Sep 17 00:00:00 2001
From: grampelberg
Date: Tue, 30 Apr 2019 18:43:42 -0700
Subject: [PATCH 2/5] Reduce policy back to the specification for now
---
 traffic-metrics.md |   2 +-
 traffic-policy.md  | 268 +++++++++------------------------------------
 2 files changed, 55 insertions(+), 215 deletions(-)

diff --git a/traffic-metrics.md b/traffic-metrics.md
index 7d09a20..2d79697 100644
--- a/traffic-metrics.md
+++ b/traffic-metrics.md
@@ -381,7 +381,7 @@ Shim`.
 1.
On receiving the responses from Prometheus, the shim converts the values into a `TrafficMesh` object for consumption by the end user. -## Envoy Mesh +### Envoy Mesh ![Envoy Mesh](traffic-metrics-sample/mesh.png) diff --git a/traffic-policy.md b/traffic-policy.md index cf541e8..47cf683 100644 --- a/traffic-policy.md +++ b/traffic-policy.md @@ -3,150 +3,77 @@ This resource allows users to define the authorization policy between applications and clients. -There are two high level concepts, `TrafficRole` and `TrafficRoleBinding`. A -`TrafficRole` describes a resource that will be receiving requests and what -traffic can be issued to that resource. As an example, a `TrafficRole` could be -configured to reference a pod and only allow `GET` requests to a specific -endpoint on that service. - -The `TrafficRoleBinding` associates a client's identity with a `TrafficRole`. -This client can then issue requests to the configured resource using the allowed -paths and methods. The design of Kubernetes RBAC has been heavily borrowed from. - -TrafficRoles are designed to work in an additive fashion and all traffic is -denied by default. See [tradeoffs](#policy-tradeoffs) for more details on that -decision. - ## Specification -### TrafficRole - -A `TrafficRole` contains a list of rules that have: resources, methods and -paths. These work in concert to define what an authorized client can request. - -Resources reference Kubernetes objects. These can be anything such as -deployments, services and pods. The resource concretely defines the resource -that will be serving the traffic itself either as a group (ex. deployment) or a -pod itself. - -The full list of resources that can be referenced is: - -* pods -* replicationscontrollers -* services -* daemonsets -* deployments -* replicasets -* statefulets -* jobs - -Methods describe the HTTP method that is allowed for the referenced resource. -This is limited to existing HTTP methods. It is possible to use `*` to allow any -value instead of enumerating them all. - -Paths describe the HTTP path that a referenced resource serves. The path is a -regex and is anchored (`^`) to the start of the URI. See [use cases] -(#policy-use-cases) for some examples about how these can be generated -automatically without user interaction. - -Bringing everything together, an example specification: +### HTTPService ```yaml -kind: TrafficRole +kind: HTTPService apiVersion: v1beta1 metadata: - name: path-specific + name: foo namespace: default -rules: -- resources: - - name: foo - kind: Deployment +resources: +# v1.ObjectReference +- kind: Service + name: foo +routes: +- name: admin methods: - GET - paths: - - '/authors/\d+' + pathRegex: "/admin/.*" +- name: default + methods: ["*"] + pathRegex: ".*" ``` -This specification can be used to grant access to the `foo` deployment for `GET` -requests to paths like `/authors/1234`. An accompanying `TrafficRoleBinding` -would allow authorized clients to request this path. - -Note that `namespace` is *not* part of the resource references. The resources -must be within the same namespace that a `TrafficRole` resides in. See -[tradeoffs](#policy-tradeoffs) for a discussion on why this is. - -While it is convenient to be extremely specific, most policies will be a little -bit more general. The following `TrafficRole` can be used to grant access to the -`foo` service and by association the endpoints that service selects. Any method -and path would be accessible. 
+### gRPCService ```yaml -kind: TrafficRole +kind: gRPCService apiVersion: v1beta1 metadata: - name: service-specific + name: foo namespace: default -rules: -- resources: - - name: foo - kind: Service - methods: ["*"] - paths: ["*"] +resources: +- kind: Service + name: foo +package: foo.v1 +service: SearchService +rpc: +- name: Search ``` -As roles are additive, policies that can allow all traffic are helpful for -testing environments or super users. To grant access to anything in a namespace, -`*` can be used in combination with `kind`. The other keys are optional in this -example. +### TCPService ```yaml -kind: TrafficRole +kind: TCPService apiVersion: v1beta1 metadata: - name: service-specific + name: foo namespace: default -rules: -- resources: - - name: "*" - kind: Pod - methods: ["*"] - paths: ["*"] +resources: +- kind: Service + name: foo ``` -### TrafficRoleBinding - -A `TrafficRoleBinding` grants the permissions defined in a `TrafficRole` to a -specified identity. It holds a list of subjects (service accounts, deployments) -and a reference to the role being granted. - -The following binding grants the `path-specific` role to pods contained within -the deployment `bar` in the `default` namespace. Assuming the `path-specific` -definition from above, these pods will be authorized to issue `GET` requests to -the `foo` deployment with paths matching the `/authors/\d+` regex. +### TrafficRole ```yaml -kind: TrafficRoleBinding +kind: TrafficRole apiVersion: v1beta1 metadata: - name: account-specific + name: path-specific namespace: default +resource: + name: foo + kind: Deployment subjects: -- kind: Deployment - name: bar - namespace: default -roleRef: - kind: TrafficRole - name: path-specific +- kind: HTTPService + name: admin ``` -Note: this specification defines that policy is *always* enforced on the -*server* side of a connection. It is up to implementations to decide whether -they would also like to enforce this policy on the *client* side of the -connection as well. - -Bindings reference subjects that can be defined based on identity. This allows -for references to be more than just groups of pods. In this example, a binding -is against a service account. +### TrafficRoleBinding ```yaml kind: TrafficRoleBinding @@ -156,131 +83,42 @@ metadata: namespace: default subjects: - kind: ServiceAccount - name: foo-account + name: bar namespace: default roleRef: kind: TrafficRole name: path-specific ``` -Implementations and cluster administrators can provide some default policy as -[ClusterTrafficRole](#clustertrafficrole). These can be bound to specific -namespaces and will only be valid for that specific namespace. In this example, -a `ClusterTrafficRole` that grants access to `/health` is bound only inside the -`default` namespace for the pod `foobar`. - -```yaml -kind: TrafficRoleBinding -apiVersion: v1beta1 -metadata: - name: health-check - namespace: default -subjects: -- kind: Pod - name: foobar -roleRef: - kind: ClusterTrafficRole - name: health-check -``` - -### ClusterTrafficRole - -Roles can be defined cluster-wide with `ClusterTrafficRole`. These roles can -grant access to applications in specific namespaces or across all namespaces and -are particularly useful for providing sane default policy. - -The following `ClusterTrafficRole` can be used to grant access to `/health` -endpoints. 
- -```yaml -kind: ClusterTrafficRole -apiVersion: v1beta1 -metadata: - name: health-check -rules: -- resources: - - name: "*" - kind: Pod - methods: - - GET - paths: - - '/health' -``` - -A `ClusterTrafficRole` can also be used as a default allow policy. - -```yaml -kind: ClusterTrafficRole -apiVersion: v1beta1 -metadata: - name: default-allow -rules: -- resources: - - name: "*" - kind: Pod - methods: ["*"] - paths: ["*"] -``` - -### ClusterTrafficRoleBinding - -The following `ClusterTrafficRoleBinding` will grant access every node in the -cluster to request the `/health` endpoint served by pods in any namespace. - -```yaml -kind: ClusterTrafficRoleBinding -apiVersion: v1beta1 -metadata: - name: default-allow -subjects: -- name: * - kind: Node -roleRef: - kind: ClusterTrafficRole - name: health-check -``` - -To grant the pod `root` running in any namespace access to every pod serving any -endpoint in any namespace, it would be possible to do: +Note: this specification defines that policy is *always* enforced on the +*server* side of a connection. It is up to implementations to decide whether +they would also like to enforce this policy on the *client* side of the +connection as well. -```yaml -kind: ClusterTrafficRoleBinding -apiVersion: v1beta1 -metadata: - name: default-allow -subjects: -- name: root - kind: Pod -roleRef: - kind: ClusterTrafficRole - name: default-allow -``` +## Use Cases -## Workflow +## Admission Control -## Use Cases {#policy-use-cases} +TODO ... -### OpenAPI/Swagger +## RBAC -Many REST applications provide OpenAPI definitions for their endpoints, -operations and authentication methods. Consuming these definitions and -generating policy automatically allows developers and organizations keep the -definition of their policy in a single location. A given specification can be -used to create `TrafficRole` and `TrafficRoleBinding` objects as part of a CI/CD -workflow. +TODO ... -TODO - specification mapping +## Example Implementation ## Tradeoffs {#policy-tradeoffs} * Additive policy - policy that denies instead of only allows is valuable sometimes. Unfortunately, it makes it extremely difficult to reason about what is allowed or denied in a configuration. + * It would be possible to support `kind: Namespace`. This ends up having some special casing as to references and doesn't cover all cases. Instead, there has been a conscious decision to allow `*` kinds instead of supporting non-namespaced resources (for example, namespaces). This solves the same user goal and is slightly more flexible. + * Namespaces are explicitly left out of the resource reference (`namespace: foobar`). This is because the reference could be used to point to a resource in a different namespace that a user might not have permissions to access or @@ -291,6 +129,7 @@ TODO - specification mapping * Egress policy - while this specification allows for the possibility of defining egress configuration, this functionality is currently out of scope. + * Non-HTTP traffic - this specification will need to be expanded to support traffic such as Kafka or MySQL. @@ -298,9 +137,10 @@ TODO - specification mapping * Why include namespace in the reference *and* role? Is there any reason a user would create a role in one namespace that references another? + * Namespaces should *not* be possible to define on `TrafficRole` for resource references but are required for `ClusterTrafficRole`. Is it okay only allow this key in `ClusterTrafficRole` references? 
+ * I'm not sure `kind: pod` and `name: *` is the best solution for generic allow policies. Is there a better way to do it? `kind: *` feels wrong as well. - From b9d385d194f454fa780a786e099d4f7182f80fc1 Mon Sep 17 00:00:00 2001 From: grampelberg Date: Wed, 1 May 2019 18:02:24 -0700 Subject: [PATCH 3/5] Add specs definition --- ...fic-policy.md => traffic-access-control.md | 107 ++++++++---------- traffic-specs.md | 89 +++++++++++++++ 2 files changed, 134 insertions(+), 62 deletions(-) rename traffic-policy.md => traffic-access-control.md (61%) create mode 100644 traffic-specs.md diff --git a/traffic-policy.md b/traffic-access-control.md similarity index 61% rename from traffic-policy.md rename to traffic-access-control.md index 47cf683..d7bee86 100644 --- a/traffic-policy.md +++ b/traffic-access-control.md @@ -1,62 +1,10 @@ -## Traffic Policy +## Traffic Access Control -This resource allows users to define the authorization policy between -applications and clients. +This resource allows users to define policies that control access to resources +for clients. ## Specification -### HTTPService - -```yaml -kind: HTTPService -apiVersion: v1beta1 -metadata: - name: foo - namespace: default -resources: -# v1.ObjectReference -- kind: Service - name: foo -routes: -- name: admin - methods: - - GET - pathRegex: "/admin/.*" -- name: default - methods: ["*"] - pathRegex: ".*" -``` - -### gRPCService - -```yaml -kind: gRPCService -apiVersion: v1beta1 -metadata: - name: foo - namespace: default -resources: -- kind: Service - name: foo -package: foo.v1 -service: SearchService -rpc: -- name: Search -``` - -### TCPService - -```yaml -kind: TCPService -apiVersion: v1beta1 -metadata: - name: foo - namespace: default -resources: -- kind: Service - name: foo -``` - ### TrafficRole ```yaml @@ -68,11 +16,20 @@ metadata: resource: name: foo kind: Deployment -subjects: -- kind: HTTPService - name: admin +port: 8080 +rules: +- kind: HTTPRoutes + name: the-routes + specs: + - metrics ``` +This example associates a set of routes with a set of pods. It will match the +routes arriving at these pods on the specified port (8080). While `the-routes` +definition contains multiple elements, only a single element is referenced in +this role. This example could be used in conjunction with a TrafficRoleBinding +to provide Prometheus the access to scrape metrics on the `foo` deployment. + ### TrafficRoleBinding ```yaml @@ -90,10 +47,36 @@ roleRef: name: path-specific ``` -Note: this specification defines that policy is *always* enforced on the +This example grants the ability to access the routes in `path-specific` to any +client providing the identity `bar` based on a ServiceAccount. + +As access control is additive, it is important to provide definitions that allow +non-authenticated traffic access. Imagine rolling a service mesh out +incrementally. It is important to not immediately block any traffic that is not +from an authenticated client. In this world, groups are important as a source of +identity. + +```yaml +kind: TrafficRoleBinding +apiVersion: v1beta1 +metadata: + name: account-specific + namespace: default +subjects: +- kind: Group + name: system:unauthenticated +roleRef: + kind: TrafficRole + name: path-specific +``` + +This example allows any unauthenticated client access to the rules defined in +`path-specific`. + +Note: this specification defines that access control is *always* enforced on the *server* side of a connection. 
It is up to implementations to decide whether -they would also like to enforce this policy on the *client* side of the -connection as well. +they would also like to enforce access control on the *client* side +of the connection as well. ## Use Cases @@ -107,7 +90,7 @@ TODO ... ## Example Implementation -## Tradeoffs {#policy-tradeoffs} +## Tradeoffs * Additive policy - policy that denies instead of only allows is valuable sometimes. Unfortunately, it makes it extremely difficult to reason about what diff --git a/traffic-specs.md b/traffic-specs.md new file mode 100644 index 0000000..6e5b2e5 --- /dev/null +++ b/traffic-specs.md @@ -0,0 +1,89 @@ +## Traffic Spec + +This resource allows users to specify how their traffic looks. It is used in +concert with authorization and other policies to concretely define what should +happen to specific types of traffic as it flows through the mesh. + +There are many different protocols that users would like to have be part of a +mesh. Right now, this is primarily HTTP, but it is possible to imagine a world +where service meshes are aware of other protocols. Each resource in this +specification is meant to match 1:1 with a specific protocol. This allows users +to define the traffic in a protocol specific fashion. + +## Specification + +### HTTPRoutes + +This resource is used to describe HTTP/1 and HTTP/2 traffic. It enumerates the +routes that can be served by an application. + +```yaml +apiVersion: v1beta1 +kind: HTTPRoutes +metadata: + name: the-routes + labels: + app: foobar + class: admin +routes: +- name: metrics + pathRegex: "/metrics" + methods: + - GET +- name: health + pathRegex: "/ping" + methods: ["*"] +``` + +This example defines two routes, `metrics` and `health`. The name is the primary +key and all fields are required. A regex is used to match against the URI and is +anchored (`^`) to the beginning of the URI. Methods can either be specific +(`GET`) or `*` to match all methods. + +These routes have not yet been associated with any resources. See +[access control](traffic-access-control.md) for an example of how routes become +associated with applications serving traffic. + +In this example, there are labels. These are used to allow flexible binding. As +routes can be thought of as a bucket that defines traffic, it is valuable to +have different classifications and applications. Imagine an access control +binding across `class: admin` for specific clients such as Prometheus or +liveness and readiness probes. + +Another example defines an unauthenticated catch-all and a set of specific +routes that are sensitive and should have access controlled. + +```yaml +apiVersion: v1beta1 +kind: HTTPRoutes +metadata: + name: external-routes + labels: + app: foobar +routes: +- name: admin + pathRegex: "/admin/.*" + methods: ["*"] +- name: unauthenticated + pathRegex: "/.*" + methods: ["*"] +``` + +## Automatic Generation + +While it is possible for users to create these by hand, the recommended pattern +is for tools to do it for the users. OpenAPI specifications can be consumed to +generate the list of routes. gRPC protobufs can similarly be used to +automatically generate the list of routes from code. + +## Tradeoffs + +* These specifications are *not* directly associated with applications and other + resources. They're used to describe the type of traffic flowing through a mesh + and used by higher level policies such as access control or rate limiting. The + policies themselves bind these routes to the applications serving traffic. 
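As a concrete sketch of that binding, reusing the `TrafficRole` shape from [access control](traffic-access-control.md) (the role and workload names below are examples only):

```yaml
kind: TrafficRole
apiVersion: v1beta1
metadata:
  name: health-checks    # example role name
  namespace: default
resource:
  name: foobar           # example workload serving the-routes
  kind: Deployment
port: 8080
rules:
- kind: HTTPRoutes
  name: the-routes
  specs:
  - health
```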
+
+## Out of scope
+
+* gRPC - there should be a gRPC specific traffic spec. As part of the first
+  version, this has been left out as HTTPRoutes can be used in the interim.
From 7db97f3dc96676dd8184b53893981f308efad4bc Mon Sep 17 00:00:00 2001
From: grampelberg
Date: Thu, 2 May 2019 13:15:46 -0700
Subject: [PATCH 4/5] Update links to access control and traffic specs
---
 README.md | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index 0bb1d40..f2204b0 100644
--- a/README.md
+++ b/README.md
@@ -6,11 +6,14 @@ of providers. This allows for both standardization for end-users and innovation
 by providers of Service Mesh Technology. It enables flexibility and
 interoperability.
 
-This specification consists of three APIs:
-
-* [Traffic Policy](traffic-policy.md) - configure access to specific pods and
-  routes based on the identity of a client for locking down applications to only
-  allowed users and services.
+This specification consists of multiple APIs:
+
+* [Traffic Specs](traffic-specs.md) - define how traffic looks on a per-protocol
+  basis. These resources work in concert with access control and other types of
+  policy to manage traffic at a protocol level.
+* [Traffic Access Control](traffic-access-control.md) - configure access to
+  specific pods and routes based on the identity of a client for locking down
+  applications to only allowed users and services.
 * [Traffic Split](traffic-split.md) - incrementally direct percentages of
   traffic between various services to assist in building out canary rollouts.
 * [Traffic Metrics](traffic-metrics.md) - expose common traffic metrics for use
@@ -26,9 +29,9 @@ See the individual documents for the details. Each document outlines:
 ### Technical Overview
 
 The SMI is specified as a collection of Kubernetes Custom Resource Definitions
-(CRD) and Extension API Servers. These APIs (details below) can be installed
-onto any Kubernetes cluster and manipulated using standard tools. The APIs
-require an SMI provider to do something.
+(CRD) and Extension API Servers. These APIs can be installed onto any Kubernetes
+cluster and manipulated using standard tools. The APIs require an SMI provider
+to do something.
 
 To activate these APIs an SMI provider is run in the Kubernetes cluster. For the
 resources that enable configuration, the SMI provider reflects back on their
From a92506d37a4947cea57182aabcd5e37f45734a27 Mon Sep 17 00:00:00 2001
From: grampelberg
Date: Thu, 2 May 2019 13:23:28 -0700
Subject: [PATCH 5/5] Remove label example
---
 README.md        |  2 +-
 traffic-specs.md | 28 ----------------------------
 2 files changed, 1 insertion(+), 29 deletions(-)

diff --git a/README.md b/README.md
index f2204b0..8f4ecf5 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-## Service Mesh Interface (code name Smeagol)
+## Service Mesh Interface
 
 The Service Mesh Interface (SMI) is a specification for service meshes that run
 on Kubernetes. It defines a common standard that can be implemented by a variety
diff --git a/traffic-specs.md b/traffic-specs.md
index 6e5b2e5..7553ac1 100644
--- a/traffic-specs.md
+++ b/traffic-specs.md
@@ -22,9 +22,6 @@ apiVersion: v1beta1
 kind: HTTPRoutes
 metadata:
   name: the-routes
-  labels:
-    app: foobar
-    class: admin
 routes:
 - name: metrics
   pathRegex: "/metrics"
   methods:
   - GET
@@ -44,31 +41,6 @@ These routes have not yet been associated with any resources. See
 [access control](traffic-access-control.md) for an example of how routes become
 associated with applications serving traffic.
-In this example, there are labels. These are used to allow flexible binding. As -routes can be thought of as a bucket that defines traffic, it is valuable to -have different classifications and applications. Imagine an access control -binding across `class: admin` for specific clients such as Prometheus or -liveness and readiness probes. - -Another example defines an unauthenticated catch-all and a set of specific -routes that are sensitive and should have access controlled. - -```yaml -apiVersion: v1beta1 -kind: HTTPRoutes -metadata: - name: external-routes - labels: - app: foobar -routes: -- name: admin - pathRegex: "/admin/.*" - methods: ["*"] -- name: unauthenticated - pathRegex: "/.*" - methods: ["*"] -``` - ## Automatic Generation While it is possible for users to create these by hand, the recommended pattern