Releases: caktus/ansible-role-k8s-web-cluster
v1.7.0
v1.6.0
- Set `allowSnippetAnnotations: true` to allow user snippets (see Disable user snippets per default).
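For reference, this corresponds to the following value in the ingress-nginx Helm chart; a minimal sketch of the rendered chart values, not a variable exposed by this role:

```yaml
# ingress-nginx Helm chart values (sketch) -- the setting this release turns on
controller:
  allowSnippetAnnotations: true
```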
v1.5.0
- Requires Ansible v6.0+
- Switch to `kubernetes.core` for Ansible 6.x+ support. The `community.kubernetes` collection was renamed to `kubernetes.core` in v2.0.0 of the kubernetes.core collection. Since Ansible v3.0.0, both the `kubernetes.core` and `community.kubernetes` namespaced collections were included for convenience. Ansible v6.0.0 removed the `community.kubernetes` convenience package.
- Use fully qualified collection names (FQCNs) to be explicit (see the task sketch after this list)
- Add `k8s_cert_manager_release_values` variable to allow per-project customization of Helm chart values (see the values sketch after this list)
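As an illustration of the FQCN change, tasks now reference modules by their full collection path; this sketch is not taken from the role itself:

```yaml
# Illustrative task using the fully qualified module name
- name: Ensure the ingress-nginx namespace exists
  kubernetes.core.k8s:   # previously reachable via the short name `k8s` (community.kubernetes)
    api_version: v1
    kind: Namespace
    name: ingress-nginx
    state: present
```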
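And `k8s_cert_manager_release_values` can be set per project to customize the cert-manager Helm release values; the specific values below are illustrative assumptions, not role defaults:

```yaml
# group_vars/all.yaml -- illustrative cert-manager chart values only
k8s_cert_manager_release_values:
  resources:
    requests:
      cpu: 10m
      memory: 64Mi
```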
v1.4.0
- Add optional Descheduler for Kubernetes support. Enable with `k8s_install_descheduler` and reference `defaults/main.yml` for configuration options.
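A minimal way to turn it on might look like the following; the boolean value is assumed, and `defaults/main.yml` documents the full set of options:

```yaml
# group_vars/all.yaml -- enable the optional Descheduler install
k8s_install_descheduler: yes
```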
v1.3.0
- Set a default nginx `podAntiAffinity` that prefers (but does not require) scheduling pods on different nodes. Override with `k8s_ingress_nginx_affinity`.
- Allow configuration of the nginx service `loadBalancerIP` via `k8s_ingress_nginx_load_balancer_ip`.
- Add extendable `k8s_cert_manager_solvers` variable to support configuring a DNS01 challenge provider (see the example after this list)
- Update API version for echotest ingress
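For example, the load balancer IP and a DNS01 solver might be configured like this; the IP is a placeholder, and the solver list is assumed to mirror cert-manager's ClusterIssuer `solvers` structure:

```yaml
# group_vars/all.yaml -- placeholder IP and an assumed DigitalOcean DNS01 solver
k8s_ingress_nginx_load_balancer_ip: 203.0.113.10
k8s_cert_manager_solvers:
  - dns01:
      digitalocean:
        tokenSecretRef:
          name: digitalocean-dns
          key: access-token
```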
v1.2.0
- Default to 2 replicas for ingress controller
- Fix typo in `k8s_digitalocean_load_balancer_hostname`
v1.1.0
- Move CI user creation to the caktus.django-k8s role, since it is project- or environment-specific.
v1.0.0
BACKWARDS INCOMPATIBLE CHANGES:
- Use Helm to install `ingress-nginx`
  - `ingress-nginx` controller upgraded from `0.26.1` to `0.44.0` (via `3.23.0` helm chart release)
- Use Helm to install `cert-manager`
  - `cert-manager` controller upgraded from `v0.10.1` to `v1.2.0`
  - The accompanying caktus.django-k8s role must also be updated to > `v0.0.11` to restore certificate validation.
- You must follow the Digital Ocean instructions and set a hostname via `k8s_digitalocean_loadbalancer_hostname` to keep PROXY protocol enabled on Digital Ocean (required to see real client IP addresses).
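For Digital Ocean clusters, that setting might look like the sketch below; the hostname is a placeholder and must be a DNS name that points at the load balancer:

```yaml
# group_vars/all.yaml -- placeholder hostname for the Digital Ocean load balancer
k8s_digitalocean_loadbalancer_hostname: lb.example.com
```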
Upgrade instructions:
1. First, purge the old cert-manager and create a new ingress controller in a new namespace:

   ```yaml
   # Install new ingress controller, but don't delete the old one yet
   k8s_install_ingress_controller: yes
   k8s_ingress_nginx_namespace: ingress-nginx-temp
   k8s_purge_ingress_controller: no

   # Don't install a new cert-manager, but do delete the old one
   k8s_install_cert_manager: no
   k8s_purge_cert_manager: yes
   ```

   If you don't wish to make two DNS changes, you may find it helpful to set
   `k8s_ingress_nginx_namespace` to a more permanent name.

   ```
   $ ansible-playbook -l <host/group> deploy.yaml -vv
   ```

2. Look up the IP or hostname for the new ingress controller:

   ```
   $ kubectl -n ingress-nginx-temp get svc
   ```

3. Change the DNS for all domains that point to this cluster to use the new IP or hostname.
   You may find it helpful to watch the logs of both ingress controllers during this time to
   see the traffic switch to the new ingress controller. The post Kubernetes: Nginx and Zero
   Downtime in Production has a more detailed overview of this approach.

4. Next, add `k8s_purge_ingress_controller: yes` to your variables file and re-run `deploy.yaml`.
   Note that you will now have both `k8s_install_ingress_controller: yes` and
   `k8s_purge_ingress_controller: yes`; however, the former refers to the new namespace and the
   latter refers only to the old namespace. This should clear out the old ingress controller.
   Note that you may need to run this a few times if Ansible times out attempting to delete
   everything the first time.

5. If you want to switch everything to use the original `ingress-nginx` namespace again, make the
   change in your variables file and re-run `deploy.yaml` with your final configuration. Otherwise,
   simply set `k8s_install_cert_manager: yes` and do not change the namespace.

   ```yaml
   # your variables file (e.g., group_vars/all.yaml)
   k8s_install_ingress_controller: yes
   k8s_ingress_nginx_namespace: ingress-nginx
   k8s_install_cert_manager: yes
   ```

   Make sure to remove the two `k8s_purge_*` variables, as they are no longer needed and will
   be removed in a future release.

6. If you elected to switch namespaces again:

   - Change the DNS to the new service address as in step 3 and wait for traffic to stop
     going to the temporary ingress controller.
   - Remove the `ingress-nginx-temp` namespace as follows:

     ```
     helm -n ingress-nginx-temp uninstall ingress-nginx
     kubectl delete ns ingress-nginx-temp
     ```

7. Test that cert-manager is working properly by deploying a new echotest pod as described in the README.

8. Update any projects that deploy to the cluster to use the corresponding 1.0 release of
   ansible-role-django-k8s.
Please note that the `k8s_purge_*` variables are intended only for removing the previously-installed versions of these resources. If you need to remove the newly installed cert-manager or ingress-nginx for any reason, you should use the `helm uninstall` method described above.
Other Changes:
- Move Papertrail and New Relic to caktus.k8s-hosting-services. The existing deployments will not be automatically removed, but they are no longer managed from this role. To take advantage of future changes to those deployments, add the `caktus.k8s-hosting-services` role to your requirements.yaml file (see the sketch below).
- Retire `k8s_cluster_name` variable.
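A requirements.yaml entry for the new role might look like this; the src and version fields are placeholders, so substitute your project's actual source and pin:

```yaml
# requirements.yaml -- sketch only; src and version are placeholders
roles:
  - name: caktus.k8s-hosting-services
    src: <git URL for the caktus.k8s-hosting-services role>
    version: <tag or branch>
```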