# Kubernetes Implementation
Maintainer: @lwander
Note - this is meant to serve as a technical guide to the Kubernetes implementation. A more general walkthrough can (soon) be found on spinnaker.io.
The provider specification is as follows:
```yaml
kubernetes:
  enabled:              # boolean indicating whether or not to use kubernetes as a provider
  accounts:             # list of kubernetes accounts
    - name:             # required unique name for this account
      kubeconfigFile:   # optional location of the kube config file
      namespaces:       # optional list of namespaces to manage
      user:             # optional user to authenticate as that must exist in the provided kube config file
      cluster:          # optional cluster to connect to that must exist in the provided kube config file
      dockerRegistries: # required (at least 1) docker registry accounts used as a source of images
        - accountName:  # required name of the docker registry account
          namespaces:   # optional list of namespaces this docker registry can deploy to
```
Authentication is handled by the Clouddriver microservice; it was introduced in #214 and refined in clouddriver/pull#335.
The Kubernetes provider authenticates with any valid Kubernetes cluster using details found in a provided kubeconfig file. By default, the kubeconfig file at `~/.kube/config` is used, unless the field `kubeconfigFile` is specified. The user, cluster, and singleton namespace are derived from the `current-context` field in the kubeconfig file, unless their respective fields are provided. If no namespace is found in either `namespaces` or the `current-context` field of the kubeconfig file, then the value `["default"]` is used. Any namespaces that do not exist will be created.
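For example, a fully specified account might look like the following sketch. All names here (account, user, cluster, registry, namespaces, file path) are hypothetical:

```yaml
kubernetes:
  enabled: true
  accounts:
    - name: my-k8s-account            # hypothetical account name
      kubeconfigFile: /opt/spinnaker/config/kubeconfig
      user: my-k8s-user               # must exist in the kubeconfig file
      cluster: my-k8s-cluster         # must exist in the kubeconfig file
      namespaces:
        - prod
        - staging
      dockerRegistries:
        - accountName: my-docker-registry
          namespaces:                 # restrict this registry to one namespace
            - prod
```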
The Docker Registry accounts referred to by the above configuration are also configured inside Clouddriver. The details of that implementation can be found here. The Docker authentication details (username, password, email, endpoint address) are read from each listed Docker Registry account and configured as an image pull secret, implemented in clouddriver/pull#285. The `namespaces` field of the `dockerRegistries` subblock defaults to the full list of namespaces, and is used by the Kubernetes provider to determine which namespaces to register the image pull secrets with. Every created pod is given the full list of image pull secrets available to its containing namespace.
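As a sketch of the resources this produces, the provider effectively maintains one registry secret per namespace and attaches it to each pod it creates there; the names below are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-docker-registry        # illustrative; mirrors the registry account name
  namespace: prod
type: kubernetes.io/dockercfg     # dockercfg-style image pull secret
data:
  .dockercfg: BASE64_ENCODED_DOCKER_CREDENTIALS
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: prod
spec:
  imagePullSecrets:
    - name: my-docker-registry    # every created pod lists its namespace's pull secrets
  containers:
    - name: app
      image: my-registry.example.com/app:1.0
```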
The Kubernetes provider will periodically (every 30 seconds) attempt to fetch every provided namespace to see if the cluster is still reachable.
Spinnaker Server Groups are Kubernetes Replication Controllers. This is a straightforward mapping since both represent sets of managed, identical, immutable computing resources. However, there are a few caveats:
- Replication Controllers manage Pods, which, unlike VMs, can house multiple container images with the promise that all images in a Pod will be collocated. Note that the intent here is not to place all of your application's containers into a single Pod, but to collocate containers that form a logical unit and benefit from sharing resources. Design patterns and a more thorough explanation can be found here.
- Each Pod is in charge of managing its own health checks, as opposed to the typical Spinnaker pattern of having health checks performed by load balancers. The ability to add these to Replication Controllers was added in clouddriver/pull#359; a sketch of what this looks like follows this list.
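For illustration, a pod-level health check takes the form of a container probe in the Replication Controller's pod template. A minimal sketch, with hypothetical names and values:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-prod-v000           # hypothetical server group name
  namespace: prod
spec:
  replicas: 3
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: my-registry.example.com/myapp:1.0
          livenessProbe:          # the pod itself runs this check,
            httpGet:              # not a Spinnaker load balancer
              path: /health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
```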
Below are the server group operations and their implementations.
## Create Server Group

- Clouddriver component: clouddriver/pull#227.
- Deck components:
  - Ad-hoc creation: deck/pull#1881.
  - Pipeline deploy stage: deck/pull#2015.

This operation creates a Replication Controller with the specified containers and their respective configurations.
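To make the shape of the request concrete, a deploy description conceptually carries information like the following. These field names are illustrative, not the exact Clouddriver schema:

```yaml
# Hypothetical sketch, not the literal Clouddriver request format.
account: my-k8s-account
application: myapp
stack: prod
namespace: prod
targetSize: 3                     # number of replicas in the Replication Controller
containers:
  - name: myapp
    imageDescription:             # resolved against a configured docker registry account
      registry: my-registry.example.com
      repository: myapp
      tag: "1.0"
```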
## Clone Server Group

- Clouddriver component: clouddriver/pull#245.
- Deck component: deck/pull#1950.

This operation takes a source Replication Controller as an argument, and creates a copy of it while overriding any attributes with the values provided in the request.
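Conceptually, a clone request names the source and only the attributes to change; everything else is copied from the source Replication Controller. Field names here are illustrative:

```yaml
# Hypothetical sketch, not the literal Clouddriver request format.
account: my-k8s-account
source:
  serverGroupName: myapp-prod-v003  # existing Replication Controller to copy
  namespace: prod
targetSize: 5                       # override; all other attributes are inherited
```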