# Kubernetes CronJob for periodic Cloudflare backups in Terraform format
This project builds on top of [cf-terraforming](https://github.com/cloudflare/cf-terraforming), leveraging all of its features.
What it contains:
- Backup artifacts, generated automatically on a recurring schedule
- The automation Python script
- A Dockerfile for building the Docker container image
- A Helm chart for deploying the CronJob to a Kubernetes cluster
This project uses:
- Terraform 1.8.5
- Golang 1.22.4
- Python 3 (latest version at image build time)
- cf-terraforming -dev+79ac2e6f4d66 (latest version at image build time)
This project supports notifications!
Use the available hook to send messages to a Google Chat space, or customize it for your preferred tool!
Configure the Google Chat webhook URL in `values.yaml`.
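As a rough sketch (not the project's exact hook, which lives in the Python script), a Google Chat incoming webhook accepts a simple JSON payload, so a test notification can be sent with a single `curl`:

```sh
# Send a plain-text message to a Google Chat space through its incoming webhook.
# $GCHAT_NOTIFICATION holds the webhook URL configured in values.yaml.
curl -sS -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Cloudflare backup finished successfully"}' \
  "$GCHAT_NOTIFICATION"
```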
## Generated files

The following structure is generated for the current resources:
```
.
└── 2024
    └── 7
        └── 3
            ├── DOMAIN1.COM
            ├── DOMAIN2.COM
            ├── DOMAIN3.COM
            └── ...
```
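The dated layout is year/month/day without zero padding. As an illustrative sketch (the real path is built by the Python script), the equivalent shell command would be:

```sh
# Create today's backup directory; %-m and %-d (GNU date) drop the zero padding.
mkdir -p "generated/$(date +%Y/%-m/%-d)"   # e.g. generated/2024/7/3
```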
The resources are saved this way:
```
└── DOMAIN1.COM
    ├── cloudflare_access_group.tf
    ├── cloudflare_access_identity_provider.tf
    ├── cloudflare_access_mutual_tls_certificate.tf
    ├── cloudflare_access_rule.tf
    ├── cloudflare_account_member.tf
    ├── cloudflare_argo.tf
    ├── cloudflare_bot_management.tf
    ├── cloudflare_load_balancer_monitor.tf
    ├── cloudflare_load_balancer_pool.tf
    ├── cloudflare_record.tf
    ├── cloudflare_tiered_cache.tf
    ├── cloudflare_url_normalization_settings.tf
    ├── cloudflare_zone_settings_override.tf
    ├── cloudflare_zone.tf
    └── ...
```
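Each `.tf` file maps to one cf-terraforming resource type. As a hedged sketch of what the automation does per zone (the exact invocation lives in the Python script), cf-terraforming can emit one resource type at a time:

```sh
# Generate the DNS records of a single zone as Terraform configuration.
# ZONE_ID is a placeholder; CLOUDFLARE_API_TOKEN must be set in the environment.
export CLOUDFLARE_API_TOKEN="your-api-token"
cf-terraforming generate \
  --resource-type cloudflare_record \
  --zone "$ZONE_ID" > cloudflare_record.tf
```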
## Directory structure

```
.
├── cloudcron-script.py
├── config.tf
├── Dockerfile
├── generated
│   └── 2024 ...
├── k8s
│   └── Cloudcron-app
├── requirements.txt
└── ssh
    ├── id_ed25519
    └── id_ed25519.pub
```
**cloudcron**

- `cloudcron-script.py` - Python script responsible for talking to Cloudflare and driving the Terraform tooling
- `config.tf` - Terraform provider configuration for Cloudflare (do not modify)
- `Dockerfile` - Dockerfile for building the Docker container image
- `generated/` - Terraform resource files produced by the backups
- `k8s/` - Helm template for provisioning in Kubernetes
- `requirements.txt` - Python requirements installed at container build time
- `ssh/` - credentials used by the application to connect to GitHub
| Artifact | Description | Used in |
|---|---|---|
| `account_resources.tsv` | List of Terraform resources that are in the scope of the Account | `config-mirroring.py` |
| `cf_resources.tsv` | Resources (of the Zone and Account types) processed in the query that generates the configurations | `config-mirroring.py` |
| `zoneids.tsv` | List of Zone IDs and Account IDs that the scanner will inspect | `config-mirroring.py` |
| `ssh/id_ed25519` | SSH private key of the account that is Admin of the GitHub repository | `Dockerfile` |
| `ssh/id_ed25519.pub` | SSH public key of the account that is Admin of the GitHub repository | `Dockerfile` |
| `YOUR-SA-CREDENTIALS.json` | Service Account used by the automation to connect to Cloud Object Storage | `config-mirroring.py` |
| `API_TOKEN` | Environment variable containing the Cloudflare API Token | `k8s/cloudcron-app/values.yaml` |
| `GCHAT_NOTIFICATION` | Webhook used for notifications in a Google Chat space | `k8s/cloudcron-app/values.yaml` |
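The `.tsv` inputs are read from Cloud Object Storage. As a hedged example (the bucket name is a placeholder), uploading an updated `zoneids.tsv` looks like this:

```sh
# Upload the scanner's input list to the bucket the automation reads from.
gcloud storage cp zoneids.tsv gs://YOUR-BUCKET/zoneids.tsv
```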
> [!IMPORTANT]
> We are aware that providing SSH credentials this way is not ideal, but we are trading some security for convenience, on the assumption that you are aware of the risks and are running in a self-contained environment.
It will hardly be necessary to update the code, unless the structure of the cf-terraforming project changes.
To configure the project on a new cluster, for example, you will need to:

- Make sure the Kubernetes cluster has access to Google Artifact Registry (GAR, preferably) or any other registry of your choice
- Check that the resource limits are supported by the values passed in `values.yaml`
- Grant the Service Account used on the nodes access to the Google Object Storage bucket where the `.tsv` files are stored (see the example below)
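A hedged sketch of that last step (the bucket name, Service Account email, and role are placeholders; adjust them to your environment):

```sh
# Grant the nodes' Service Account read access to the bucket holding the .tsv files.
gcloud storage buckets add-iam-policy-binding gs://YOUR-BUCKET \
  --member="serviceAccount:SA-NAME@PROJECT-ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```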
You will also need to replace the files in the `ssh/` folder with the keys of the account being used. And, if a new image is generated, change the account email configured in `values.yaml` under `cronjob > script`.
> [!IMPORTANT]
> Be careful not to commit the credentials passed at this stage!
After inserting the keys, set their file permissions:

```sh
chmod 644 YOUR-PUBLIC-KEY.pub    # e.g. id_ed25519.pub
chmod 0600 YOUR-PRIVATE-KEY      # e.g. id_ed25519
```
When the new image is generated, the keys will be available at this path:

```
/app # cat ~/.ssh/id_ed25519.pub
/app # cat ~/.ssh/id_ed25519
```
To generate a new image, first make sure one does not already exist on your machine:

```sh
docker image ls -a
```

Then run this in the project folder:

```sh
docker build -t cloudcron .
```
It is important to check that the artifacts were generated correctly. To run the container locally:

```sh
docker run -it cloudcron
```
Now we will tag the container image and upload it to our registry. To do that:

- Authenticate Docker against your registry in gcloud, e.g.:

  ```sh
  gcloud auth configure-docker us-east4-docker.pkg.dev
  ```

- If installing in a new, recently created cluster, grant the cluster's Service Account access to the repository:

  ```sh
  gcloud artifacts repositories add-iam-policy-binding REGISTRY-REPO-NAME \
    --location=REGISTRY-LOCATION \
    --member="serviceAccount:SA-NAME@PROJECT-ID.iam.gserviceaccount.com" \
    --role="roles/artifactregistry.reader"
  ```
Now apply the new tag to your image and push it; this example uses the registry configured above:

```sh
docker tag cloudcron us-docker.pkg.dev/PROJECTNAME/REPONAME/cloudcron:<NEW-VERSION>
docker push us-docker.pkg.dev/PROJECTNAME/REPONAME/cloudcron:<NEW-VERSION>
```
The `cloudcron` namespace is used throughout; change it to the desired one.
Update the image value in your `values.yaml` to the new image tag (if you generated one), then render the chart to review the output:

```sh
cd k8s/cloudcron-app
helm template cloudcron-app ./. --values values.yaml
```
Validate that the changed information is as desired; then apply the upgrade/installation:

```sh
helm <upgrade|install> cloudcron-app ./. --namespace cloudcron
```
If you have changed the Service Account, or this is a new installation, you will need to add it as a secret in the cluster. If you changed it, delete the old secret first:

```sh
kubectl delete secret/gcs-sa-key -n cloudcron
```

Once this is done, apply the generated SA key file to the cluster as a secret:

```sh
kubectl create secret generic gcs-sa-key --from-file=SA-JSON-FILE.json=SA-JSON-FILE.json -n cloudcron
```
Preferably, upload a `zoneids.tsv` with only a few entries into your bucket (two are enough), so that the test doesn't take too long.
If you haven't changed any names in the Helm charts, run the command below to create a Job and run the flow:

```sh
kubectl create job --from=cronjob/cloudcron-app-job cloudcron-jobtrial -n cloudcron
```
> [!TIP]
> Logs can be monitored in the current cluster/namespace with `kubectl logs -f <container-name>`. You could also try Coroot for metrics and log collection.
Once the run is successfully validated, a folder with the current date will have been created under `generated/`. After that, perform a `git pull` in your local project, delete the generated test artifacts, and commit to main again, to ensure that there are no incomplete backups in the repository!
> [!WARNING]
> The resource list above is kept up to date; any resources not listed are currently not supported.