
Releases: caktus/ansible-role-k8s-web-cluster

v1.7.0

05 Dec 15:41
1218032
  • Use a boolean type for the cert-manager installCRDs value to support cert-manager v1.15.0+. Eventually, this value should be migrated to crds.keep: true and crds.enabled: true.
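    For reference, the two forms of the chart value look like this (illustrative Helm
    values only; the role currently passes installCRDs for you):

    # cert-manager Helm chart values (illustrative)
    installCRDs: true

    # eventual replacement keys mentioned above
    crds:
      enabled: true
      keep: true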

v1.6.0

08 May 16:58
0b23119

v1.5.0

25 Apr 12:39
d5a0d31
  • Requires Ansible v6.0+
  • Switch to kubernetes.core for Ansible 6.x+ support. The community.kubernetes collection was renamed to kubernetes.core in v2.0.0 of the kubernetes.core collection. Since Ansible v3.0.0, both the kubernetes.core and community.kubernetes namespaced collections were included for convenience. Ansible v6.0.0 removed the community.kubernetes convenience package.
  • Use fully qualified collection names (FQCNs) to be explicit
  • Add k8s_cert_manager_release_values variable to allow per-project customization of Helm chart values
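    As an illustration (the keys shown are standard cert-manager chart values, not
    defaults of this role), the new variable can be set in your variables file:

    # group_vars/all.yaml (example values; adjust for your project)
    k8s_cert_manager_release_values:
      resources:
        requests:
          cpu: 10m
          memory: 64Mi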

v1.4.0

08 Feb 17:22
  • Add optional Descheduler for Kubernetes support. Enable with k8s_install_descheduler and reference defaults/main.yml for configuration options.
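    A minimal opt-in looks like this (further tuning options are documented in
    defaults/main.yml):

    # variables file
    k8s_install_descheduler: yes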

v1.3.0

20 Apr 15:10
60eeaca
  • Set a default nginx podAntiAffinity that prefers (but does not require) scheduling pods on different nodes. Override with k8s_ingress_nginx_affinity.
  • Allow configuration of the nginx service loadBalancerIP via k8s_ingress_nginx_load_balancer_ip.
  • Add an extendable k8s_cert_manager_solvers variable to support configuring a DNS01 challenge provider.
  • Update the API version for the echotest ingress.
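    A sketch of how the new variables might be combined (the IP is a placeholder, and
    the solver entry follows cert-manager's DNS01 syntax; check defaults/main.yml for
    the exact structure this role expects):

    k8s_ingress_nginx_load_balancer_ip: 203.0.113.10

    # Example DNS01 solver; provider-specific fields vary
    k8s_cert_manager_solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: access-token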

v1.2.0

06 Dec 19:32
4d33d73
  • Default to 2 replicas for ingress controller
  • Fix typo in k8s_digitalocean_load_balancer_hostname

v1.1.0

12 Mar 14:04
51e8cca

Move CI user creation to the caktus.django-k8s role, since it is project- or environment-specific.

v1.0.0

18 Feb 13:54

BACKWARDS INCOMPATIBLE CHANGES:

  • Use Helm to install ingress-nginx
    • ingress-nginx controller upgraded from 0.26.1 to 0.44.0 (via the 3.23.0 Helm chart release)
  • Use Helm to install cert-manager
    • cert-manager controller upgraded from v0.10.1 to v1.2.0
    • The accompanying caktus.django-k8s role must also be updated to >v0.0.11 to restore certificate validation.
  • You must follow the Digital Ocean instructions and set a hostname via k8s_digitalocean_loadbalancer_hostname to keep PROXY protocol enabled on Digital Ocean (required to see real client IP addresses).
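    For the Digital Ocean case, the hostname is a single variable; for example (the
    domain is a placeholder):

    k8s_digitalocean_loadbalancer_hostname: lb.example.com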

Upgrade instructions:

  1. First, purge the old cert-manager and create a new ingress controller in a new namespace:

    # Install new ingress controller, but don't delete the old one yet
    k8s_install_ingress_controller: yes
    k8s_ingress_nginx_namespace: ingress-nginx-temp
    k8s_purge_ingress_controller: no
    
    # Don't install a new cert-manager, but do delete the old one
    k8s_install_cert_manager: no
    k8s_purge_cert_manager: yes
    

    If you don't wish to make two DNS changes, you may find it helpful to set
    k8s_ingress_nginx_namespace to a more permanent name.

    $ ansible-playbook -l <host/group> deploy.yaml -vv
  2. Look up the IP or hostname for the new ingress controller:

    $ kubectl -n ingress-nginx-temp get svc
  3. Change the DNS for all domains that point to this cluster to use the new IP or hostname.
    You may find it helpful to watch the logs of both ingress controllers during this time to
    see the traffic switch to the new ingress controller.

    The post Kubernetes: Nginx and Zero Downtime in Production has a more detailed overview
    of this approach.
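    For example, you might tail the controller logs in both namespaces (the deployment
    names below are assumptions; confirm them with kubectl -n <namespace> get deploy):

    $ kubectl -n ingress-nginx-temp logs -f deployment/ingress-nginx-controller
    $ kubectl -n <old namespace> logs -f deployment/<old controller deployment>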

  4. Next, add k8s_purge_ingress_controller: yes to your variables file and re-run deploy.yaml.
    Note that you will now have both k8s_install_ingress_controller: yes and
    k8s_purge_ingress_controller: yes; however, the former refers to the new namespace
    and the latter refers only to the old namespace. This should clear out the old
    ingress controller.

    Note that you may need to run this a few times if Ansible times out attempting to
    delete everything the first time.
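    At this point your variables file should look roughly like this (the namespace is
    whatever you chose in step 1; the cert-manager variables from step 1 stay as they were):

    # Install into the new namespace, purge the old installation
    k8s_install_ingress_controller: yes
    k8s_ingress_nginx_namespace: ingress-nginx-temp
    k8s_purge_ingress_controller: yes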

  5. If you want to switch everything to use the original ingress-nginx namespace again, make the
    change in your variables file and re-run deploy.yaml with your final configuration.

    Otherwise, simply set k8s_install_cert_manager: yes and do not change the namespace.

    # your variables file (e.g., group_vars/all.yaml)
    k8s_install_ingress_controller: yes
    k8s_ingress_nginx_namespace: ingress-nginx
    
    k8s_install_cert_manager: yes
    

    Make sure to remove the two k8s_purge_* variables as they are no longer needed and will
    be removed in a future release.

  6. If you elected to switch namespaces again:

    • Change the DNS to the new service address as in step 3 and wait for traffic to stop
      going to the temporary ingress controller.

    • Remove the ingress-nginx-temp namespace as follows:

      helm -n ingress-nginx-temp uninstall ingress-nginx
      kubectl delete ns ingress-nginx-temp
      
  7. Test that cert-manager is working properly by deploying a new echotest pod as described in the README.
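    For example (the cert-manager namespace assumes the chart default; the certificate
    name comes from your echotest manifest):

    $ kubectl -n cert-manager get pods
    $ kubectl get certificate --all-namespaces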

  8. Update any projects that deploy to the cluster to use the corresponding 1.0 release of
    ansible-role-django-k8s.

Please note that the k8s_purge_* variables are intended only for removing the previously-installed
versions of these resources.
If you need to remove the newly installed cert-manager or ingress-nginx
for any reason, you should use the helm uninstall method described above.

Other Changes:

  • Move Papertrail and New Relic to caktus.k8s-hosting-services. The existing
    deployments will not be automatically removed, but they are no longer managed from
    this role. To take advantage of future changes to those deployments, add the
    caktus.k8s-hosting-services role to your requirements.yaml file (see the example
    below).
  • Retire the k8s_cluster_name variable.
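    A sketch of the requirements.yaml entry (the src URL is an assumption; check the
    caktus.k8s-hosting-services repository for the correct source and current release):

    # requirements.yaml
    - src: https://github.com/caktus/ansible-role-k8s-hosting-services
      name: caktus.k8s-hosting-services
      # version: pin to a released tag of that role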

v0.0.7

06 Jul 13:00
  • Allow Papertrail memory resources to be configurable

v0.0.6

02 Jul 18:29
  • Support creation of an AWS IAM user with limited permissions that can be used on CI to push
    images and deploy.