Authors: Simone Aquilini (s5667729) - Luca Ferrari (s4784573) - Lorenzo La Corte (s4784539)
- update ansible_ssh_private_key_file for both nodes in inventory.yml
- create my_secrets/secrets.yml with your secrets
- an example of the required format is in example/secrets.yml
- for testing purposes, just copy it to my_secrets/secrets.yml
-
the ansible-lint check gives only one warning, related to the use of shell
- the shell module is used only once
- it is necessary in order to update the CA certificates trusted by our system (a sketch follows)
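For reference, a minimal sketch of that single shell task, assuming the standard Ubuntu update-ca-certificates command:

```yaml
# The only shell usage: refresh the system CA trust store
# (task name and become usage are assumptions)
- name: Update the CA certificates trusted by the system
  ansible.builtin.shell: update-ca-certificates
  become: true
```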
-
some adjustments were necessary to make the project run reliably in production:
- the Nextcloud and Keycloak images are erased before being rebuilt, in order to force an update of the entrypoint
- since our Docker version apparently leaves ports open even when the container is down, we introduced a retry condition in the registry-login task so that it also works when re-running the project (a sketch follows)
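A hedged sketch of that retry condition (task and variable names are assumptions):

```yaml
- name: Log in to the private registry (retried, since a stale open port can refuse the first attempts)
  community.docker.docker_login:
    registry_url: 10.255.255.10:5000
    username: "{{ registry_username }}"   # assumed variable names
    password: "{{ registry_password }}"
  register: login_result
  until: login_result is succeeded
  retries: 5    # assumed retry budget
  delay: 10
```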
- the created VMs are named VM1 and VM2, each with a 20GB hard disk and 4096MB of memory; we had to raise VM1's disk to 30GB to solve some disk-space issues
- configured ssh
- change hostnames: node1, node2
- create 2 network adapters:
- 192.168.50.0/24 for the external net (VMNet8 - adapter 1): Internet access via NAT and reachable by the host (it's the gateway)
- 10.255.255.0/24 for the storage net (VMNet3 - adapter 2): no Internet access and not reachable by the host
- configure static IP:
- x.x.x.10 for VM1
- x.x.x.20 for VM2
- configure names: update the hosts file of the host machine (c:\Windows\System32\Drivers\etc\hosts)
- 192.168.50.10 VM1
- 192.168.50.20 VM2
- set up inventory.yml file with:
- NFS configuration for 2 nodes: nfs-client and nfs-server
- set up of ssh key on both nodes
- create and configure the 2 roles: nfs-client and nfs-server
- change playbook.yml, adding the nfs-server and nfs-client hosts
- change requirements.yml
- change makefile
- change netplan to configure the storage network (see the sketch after this list):
- 10.255.255.10 for node1
- 10.255.255.20 for node2
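A minimal netplan sketch for node1, assuming the storage NIC shows up as ens34 (node2 is identical with .20):

```yaml
# /etc/netplan/60-storage.yaml (file and interface names are assumptions)
network:
  version: 2
  ethernets:
    ens34:
      dhcp4: false
      addresses:
        - 10.255.255.10/24
```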
generate and distribute the SSH keys (from node1):
- ssh-keygen # it generates the keys in /home/node1/.ssh
- from the folder /home/node1/.ssh:
- ssh-copy-id [email protected]
- ssh [email protected] # should not request a password
- ssh-copy-id [email protected]
- ssh [email protected] # should not request a password

then, in inventory.yml we can set: ansible_ssh_private_key_file: /home/node1/.ssh/id_rsa
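A hedged sketch of the resulting inventory.yml (the exact layout and variable placement are assumptions):

```yaml
all:
  children:
    nfs-server:
      hosts:
        node1:
          ansible_host: 192.168.50.10
          ansible_user: node1
    nfs-client:
      hosts:
        node2:
          ansible_host: 192.168.50.20
          ansible_user: node2
  vars:
    ansible_ssh_private_key_file: /home/node1/.ssh/id_rsa
```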
-
in node1:
- sudo nano /etc/sudoers.d/devops # insert a line
- node1 ALL=(ALL) NOPASSWD: ALL
-
in node2:
- sudo nano /etc/sudoers.d/devops # insert a line
- node2 ALL=(ALL) NOPASSWD: ALL
-
4 tasks:
- Allow apt to use a repository over HTTPS (ansible.builtin.apt)
- Download the GPG signature key for the repository (ansible.builtin.apt_key)
- Add the Docker repository to the distribution (ansible.builtin.apt_repository)
- Update the apt cache for the available packages and install Docker (ansible.builtin.apt)
-
Notes:
- apt-get update --> "update_cache: true" in ansible.builtin.apt
- ansible_distribution and ansible_architecture are magic variables
- to use them in URLs we have to lowercase them with the | lower filter (see the sketch below)
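A sketch of how these pieces fit together (the exact repo line and package set are assumptions, adapted from Docker's documented apt source):

```yaml
- name: Add the Docker repository to the distribution
  ansible.builtin.apt_repository:
    repo: >-
      deb https://download.docker.com/linux/{{ ansible_distribution | lower }}
      {{ ansible_distribution_release }} stable
    state: present

- name: Update the apt cache and install Docker
  ansible.builtin.apt:
    name: docker-ce
    update_cache: true   # the Ansible equivalent of apt-get update
```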
- init a new swarm with default parameters
- the other node joins the swarm, using hostvars for the manager address and the join token (a sketch follows)
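A hedged sketch of the join task (the registered fact path and host name are assumptions):

```yaml
- name: Join the swarm as a worker
  community.docker.docker_swarm:
    state: join
    remote_addrs:
      - "{{ hostvars['node1']['ansible_host'] }}:2377"
    # assumes the init task on node1 registered its result as swarm_info
    join_token: "{{ hostvars['node1']['swarm_info']['swarm_facts']['JoinTokens']['Worker'] }}"
```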
- create 3 roles:
- it deploys the registry on the server/manager
- enabling TLS using certificates under the folder /data/docker-registry/certs
- setting up authentication through htpasswd
- using bcrypt as the scheme (the only one supported)
- generating a random username and password on each run (see the sketch after this list)
- it instructs every Docker host to trust the TLS certificate, copying the CA cert into /etc/docker/certs.d/10.255.255.10:5000
- each node logs in leveraging the docker_login module
- using the TLS certificates (the insecure-registry issue is solved)
- using the randomly generated username and password
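A sketch of the credential generation (paths, task and variable names are assumptions):

```yaml
- name: Generate random registry credentials
  ansible.builtin.set_fact:
    registry_username: "{{ lookup('password', '/dev/null length=12 chars=ascii_letters') }}"
    registry_password: "{{ lookup('password', '/dev/null length=24') }}"

- name: Write the htpasswd file using bcrypt
  community.general.htpasswd:
    path: /data/docker-registry/auth/htpasswd   # assumed path
    name: "{{ registry_username }}"
    password: "{{ registry_password }}"
    crypt_scheme: bcrypt   # the only scheme the registry accepts
```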
- deploy a single, shared Postgres instance through docker compose
- database role: update the entrypoint in order to call the SQL scripts that create the keycloak and nextcloud DBs (see the sketch below)
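The repo does this by updating the image's entrypoint; purely for illustration, the equivalent wiring with the stock image's /docker-entrypoint-initdb.d hook would look like this (image tag, paths and variables are assumptions):

```yaml
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: "{{ postgres_password }}"   # templated from the secrets
    volumes:
      - ./init:/docker-entrypoint-initdb.d   # SQL scripts creating the keycloak and nextcloud DBs
      - pg_data:/var/lib/postgresql/data
volumes:
  pg_data:
```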
- deploy the service through compose
- --import-realm is used to automatically import a config .json file from the default folder
- some other flags are used to enable logging and reverse-proxy integration (see the compose sketch after this list)
- the .json file is used to easily configure the realm, clients and users
- the Keycloak entrypoint is modified in order to wait for Postgres
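A compose sketch of those flags (image tag, log level and paths are assumptions):

```yaml
services:
  keycloak:
    image: 10.255.255.10:5000/keycloak:custom
    command:
      - start
      - --import-realm   # imports the realm .json from the default import folder
      - --proxy=edge     # reverse-proxy (Traefik) integration
      - --log-level=INFO
    volumes:
      - ./realm.json:/opt/keycloak/data/import/realm.json
```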
- a custom image is built starting from the given one:
- the entrypoint is a new one, whose aim is to call the default one and then apply the modifications required for the integration
- in particular, our entrypoint has to run its commands with sudo in order to have www-data permissions, with -E set to preserve the existing environment variables
- extra-hosts are set in order to enable the OIDC auto-redirection (see the compose sketch below)
- all containers are attached to the same overlay network
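A compose sketch of the extra-hosts trick (service, network and host names are assumptions):

```yaml
services:
  nextcloud:
    image: 10.255.255.10:5000/nextcloud:custom
    extra_hosts:
      # assumed Keycloak hostname; lets the container resolve the OIDC redirect target
      - "keycloak.local:192.168.50.10"
    networks:
      - app_net
networks:
  app_net:
    driver: overlay   # all containers share this overlay network
```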
- Traefik is configured through commands in the compose file; ports 80 and 443 are exposed
- all services have the necessary labels for configuring the websecure redirection (see the label sketch below)
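A hedged sketch of the labels on one service (router name, hostname and port are assumptions):

```yaml
services:
  nextcloud:
    deploy:
      labels:   # under deploy:, since the stack runs on swarm
        - "traefik.enable=true"
        - "traefik.http.routers.nextcloud.rule=Host(`nextcloud.local`)"
        - "traefik.http.routers.nextcloud.entrypoints=websecure"
        - "traefik.http.routers.nextcloud.tls=true"
        - "traefik.http.services.nextcloud.loadbalancer.server.port=80"
```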
- compose is used to deploy the service on both nodes and mount config files
- a configuration file is set to:
- collect logs from each container and send them to loki
- collect metrics with node_exporter and expose them through an HTTP endpoint
- the config is templated per node in order to correctly label each log with its source node (a config sketch follows)
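The agent is not named above; assuming it is Fluent Bit (consistent with the tail/systemd plugins and node_exporter metrics mentioned here and in the Grafana section), a YAML-format sketch of the templated config could be:

```yaml
pipeline:
  inputs:
    - name: tail
      tag: containers
      path: /var/lib/docker/containers/*/*.log   # assumed container log path
    - name: node_exporter_metrics
      tag: node_metrics
  outputs:
    - name: loki
      match: containers
      host: 10.255.255.10                        # assumed Loki address
      labels: node={{ inventory_hostname }}      # labels each log with its source node
    - name: prometheus_exporter                  # exposes the metrics on an HTTP endpoint
      match: node_metrics
```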
- Loki is deployed through compose with the default config and simply stores logs
- deploy cadvisor on the 2 machines
- map volumes in order to make it collect metrics of containers
- Prometheus is deployed through compose, with flags that set the retention time and the config file
- its config file exploits the Docker socket to discover and scrape the cAdvisor metrics (see the sketch below)
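A sketch of that discovery (job name is an assumption; the retention flag would be --storage.tsdb.retention.time in the compose command):

```yaml
scrape_configs:
  - job_name: cadvisor
    docker_sd_configs:
      - host: unix:///var/run/docker.sock   # discover targets via the Docker socket
    # relabel_configs (omitted here) would keep only the cadvisor containers
```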
- in Grafana, the datasources are configured (Loki and Prometheus)
- OpenID is enabled in order to log in with Keycloak (see the sketch after this list)
- dashboards are implemented:
- metrics dashboard uses all the data coming from prometheus exporter
- logs dashboard uses all the data coming from tail and systemd plugins
- containers dashboard uses all the data coming from cadvisor
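A compose sketch of the OpenID login via Grafana's generic OAuth environment variables (realm, client and hostnames are assumptions):

```yaml
services:
  grafana:
    image: grafana/grafana
    environment:
      GF_AUTH_GENERIC_OAUTH_ENABLED: "true"
      GF_AUTH_GENERIC_OAUTH_CLIENT_ID: grafana
      GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET: "{{ grafana_oauth_secret }}"   # templated secret
      GF_AUTH_GENERIC_OAUTH_AUTH_URL: https://keycloak.local/realms/devops/protocol/openid-connect/auth
      GF_AUTH_GENERIC_OAUTH_TOKEN_URL: https://keycloak.local/realms/devops/protocol/openid-connect/token
```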
- secrets are templated using Jinja2 source files and the template module in Ansible (see the sketch below)
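A minimal sketch of that templating (file names are assumptions):

```yaml
- name: Render the secrets into the deployed config
  ansible.builtin.template:
    src: secrets.yml.j2              # Jinja2 source file
    dest: /data/stack/secrets.yml    # assumed destination
    mode: "0600"
```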
- make run-ansible: runs the Ansible playbook
- make run-ansible-lint: runs the Ansible playbook linter
- an example of the integration between Nextcloud and Keycloak is inside examples