A cert-manager ACME DNS01 solver webhook for DNSimple.
- cert-manager >= 1.0.0 (The Helm chart uses the new API versions)
- Kubernetes >= 1.17.x
- Helm 3 (otherwise adjust the example below accordingly)
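You can quickly confirm these prerequisites against your cluster; the commands below are a minimal sketch and assume cert-manager is installed in the `cert-manager` namespace:

```bash
# client versions
helm version --short
kubectl version

# cert-manager must already be running (namespace assumed to be cert-manager)
kubectl get pods -n cert-manager
```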
- Take note of your DNSimple API token from the account settings in the Automation tab.
- Add the helm repo published under the GitHub Pages deployment of this repository:

  ```bash
  $ helm repo add certmanager-webhook https://puzzle.github.io/cert-manager-webhook-dnsimple
  ```
- Install the application, replacing the API token and email placeholders:

  ```bash
  $ helm install cert-manager-webhook-dnsimple \
      --dry-run \
      --namespace cert-manager \
      --set dnsimple.token='<DNSIMPLE_API_TOKEN>' \
      --set clusterIssuer.production.enabled=true \
      --set clusterIssuer.staging.enabled=true \
      --set clusterIssuer.email=<ISSUER_MAIL> \
      certmanager-webhook/cert-manager-webhook-dnsimple
  ```

  Remove `--dry-run` once you are sure the values are correct. Alternatively, you can check out this repository and substitute the chart source in the install command with `./charts/cert-manager-webhook-dnsimple`.
- Afterwards you can issue a certificate:
  ```bash
  $ cat << EOF | kubectl apply -f -
  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: dnsimple-test
  spec:
    dnsNames:
      - test.example.com
    issuerRef:
      name: cert-manager-webhook-dnsimple-staging
      kind: ClusterIssuer
    secretName: dnsimple-test-tls
  EOF
  ```
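Once the DNS01 challenge completes, the certificate should become ready. You can follow the progress with standard cert-manager resources (the name matches the example above):

```bash
kubectl get certificate dnsimple-test
kubectl describe certificate dnsimple-test

# the ACME challenge objects created while the certificate is being issued
kubectl get challenges
```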
The Helm chart accepts the following values:
| name | required | description | default value |
|---|---|---|---|
| `dnsimple.token` | ✔️ | DNSimple API token | empty |
| `dnsimple.accountID` | | DNSimple account ID (required when `dnsimple.token` is a user token) | empty |
| `clusterIssuer.email` | | Let's Encrypt admin email | empty |
| `clusterIssuer.production.enabled` | | Create a production ClusterIssuer | `false` |
| `clusterIssuer.staging.enabled` | | Create a staging ClusterIssuer | `false` |
| `image.repository` | ✔️ | Docker image for the webhook solver | `ghcr.io/puzzle/cert-manager-webhook-dnsimple` |
| `image.tag` | ✔️ | Docker image tag of the solver | latest tagged docker build |
| `image.pullPolicy` | ✔️ | Image pull policy of the solver | `IfNotPresent` |
| `logLevel` | | Set the verbosity of the solver | empty |
| `useUnprivilegedPort` | | Use an unprivileged container port for the webhook | `true` |
| `groupName` | ✔️ | Name of the API group the webhook API service is registered as | `acme.dnsimple.com` |
| `certManager.namespace` | ✔️ | The namespace cert-manager was installed to | `cert-manager` |
| `certManager.serviceAccountName` | ✔️ | The service account cert-manager runs under | `cert-manager` |
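If you prefer not to pass everything on the command line, the same settings can be collected in a values file; the sketch below simply mirrors the `--set` flags and the table above, with placeholder values and a hypothetical file name:

```yaml
# values.dnsimple.yaml (hypothetical file name)
dnsimple:
  token: "<DNSIMPLE_API_TOKEN>"
  # accountID: "<DNSIMPLE_ACCOUNT_ID>"   # only required when the token is a user token

clusterIssuer:
  email: <ISSUER_MAIL>
  staging:
    enabled: true
  production:
    enabled: false
```

It can then be applied with `helm install cert-manager-webhook-dnsimple --namespace cert-manager -f values.dnsimple.yaml certmanager-webhook/cert-manager-webhook-dnsimple`.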
All cert-manager webhooks have to pass the DNS01 provider conformance testing suite.
Prerequisites for PRs are implemented as GitHub Actions. All tests should pass before a PR is merged:
- the `cert-manager` conformance suite is run with the provided kubebuilder fixtures
- a custom test suite running on a working k8s cluster (using `minikube`) is executed as well
You can also run the tests locally, as specified in the `Makefile`:
- Set up `testdata/` according to its README:
  - `dnsimple-token.yaml` should be filled with a valid token (for either the sandbox or the production environment)
  - `dnsimple.env` should contain the remaining, non-sensitive environment variables
- Execute the test suite: `make test`
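For orientation only: `dnsimple-token.yaml` is most likely a Kubernetes Secret manifest holding the token. The layout below is a sketch, and the secret and key names are assumptions, so follow the `testdata/` README for the authoritative format:

```yaml
# dnsimple-token.yaml (illustrative sketch; names and keys are assumptions, see the testdata README)
apiVersion: v1
kind: Secret
metadata:
  name: dnsimple-token
type: Opaque
stringData:
  token: "<DNSIMPLE_API_TOKEN>"   # sandbox or production token
```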
- Install cert-manager:

  ```bash
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
  ```
- Install the webhook:

  ```bash
  helm install cert-manager-webhook-dnsimple \
    --namespace cert-manager \
    --set dnsimple.token='<DNSIMPLE TOKEN>' \
    --set clusterIssuer.staging.enabled=true \
    ./charts/cert-manager-webhook-dnsimple
  ```
- Test away... You can create a sample certificate to ensure the webhook is working correctly:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: dnsimple-test
  spec:
    dnsNames:
      - test.example.com
    issuerRef:
      name: cert-manager-webhook-dnsimple-staging
      kind: ClusterIssuer
    secretName: dnsimple-test-tls
  EOF
  ```
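If the sample certificate does not become ready, the challenge status and the solver logs are the first things to check (a sketch; the deployment name is assumed to match the Helm release name used above):

```bash
# status of the pending DNS01 challenge
kubectl describe challenges

# webhook solver logs (deployment name assumed from the release name above)
kubectl -n cert-manager logs deployment/cert-manager-webhook-dnsimple
```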
Every push to `master` or to a pull request triggers the upload of a new docker image to the GitHub Container Registry (this is configured through GitHub Actions). These images should not be considered stable and are tagged with `commit-<hash>`. We recommend using a specific version tag for production deployments instead.
Tagged images are considered stable; these are the ones referenced by the default Helm values.
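For example, an existing installation can be pinned to a specific release tag by overriding the image values from the table above (a sketch; `v0.1.0` stands in for whatever tag you want to deploy):

```bash
helm upgrade cert-manager-webhook-dnsimple \
  --namespace cert-manager \
  --reuse-values \
  --set image.tag=v0.1.0 \
  certmanager-webhook/cert-manager-webhook-dnsimple
```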
Create a new tag and push it to the repository. This will trigger a new container build:
```bash
git tag -a v0.1.0 -m "Release v0.1.0"
git push origin v0.1.0
```
We recommend the following versioning scheme: `vX.Y.Z`, where `X` is the major version, `Y` the minor version, and `Z` the patch version.
Helm charts are only released when significant changes occur; we encourage users to update the underlying image versions on their own. A new release can be triggered manually from the Actions tab by running the `helm-release` workflow. This only works if a new version was specified in `Chart.yaml`. The new release will be appended to the GitHub Pages deployment.
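Such a release therefore starts with bumping the chart version in the chart metadata. The excerpt below is a sketch; the `appVersion` value is only an assumed example:

```yaml
# charts/cert-manager-webhook-dnsimple/Chart.yaml (excerpt)
apiVersion: v2
name: cert-manager-webhook-dnsimple
version: 0.2.0        # bump this field so the helm-release workflow publishes a new chart
appVersion: "v0.1.0"  # assumed example: the image version the chart deploys by default
```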
We welcome contributions. Please open an issue or a pull request.