What happened:
When running multiple instances of external-dns with different providers, an annotation such as external-dns.alpha.kubernetes.io/cloudflare-proxied: "false" causes the instance running the cloudflare provider to set that key on the record. The same annotation is also passed to a webhook provider that does not support it; on the next reconciliation the record is missing these provider-specific options, which forces a delete and re-create of the record, infinitely.
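For illustration, a minimal Ingress carrying the annotation might look like the following (resource name and hostname are hypothetical). Both external-dns instances watching this Ingress pick up the same annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app   # hypothetical
  annotations:
    # Intended only for the cloudflare instance, but also handed to the
    # webhook instance as provider-specific configuration:
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
  rules:
    - host: app.example.com   # hypothetical
```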
What you expected to happen:
External-DNS should correctly filter in-tree provider-specific annotations so that external-dns.alpha.kubernetes.io/cloudflare-* is not passed to the webhook provider, only external-dns.kubernetes.io/webhook-* is, and vice versa (see this)
How to reproduce it (as minimally and precisely as possible):
Cloudflare HelmRelease: https://github.com/kashalls/home-cluster/blob/54c571b018fa51d632ec4cd9ad4486b7edc9c858/kubernetes/fenrys/apps/networking/external-dns/cloudflare/helmrelease.yaml
UniFi HelmRelease: https://github.com/kashalls/home-cluster/blob/54c571b018fa51d632ec4cd9ad4486b7edc9c858/kubernetes/fenrys/apps/networking/external-dns/unifi/helmrelease.yaml
Anything else we need to know?:
I believe the main place to implement changes would be external-dns/source/source.go, as this is where the provider annotations are fetched; filtering could be applied there based on the currently selected in-tree provider.
I am able to make these changes in a PR, but would like direction on how this should be solved.
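To sketch what such filtering might look like: the function below keeps annotations owned by the active provider's prefix and drops keys owned by other providers, while passing generic annotations through. This is only an illustration of the proposal, not the actual external-dns code; the function name, the prefix table, and its entries are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// providerSpecificPrefix maps a provider name to the annotation prefix
// it owns. Hypothetical table for illustration; the real set of in-tree
// providers and prefixes would come from external-dns itself.
var providerSpecificPrefix = map[string]string{
	"cloudflare": "external-dns.alpha.kubernetes.io/cloudflare-",
	"aws":        "external-dns.alpha.kubernetes.io/aws-",
	"webhook":    "external-dns.kubernetes.io/webhook-",
}

// filterProviderSpecific keeps annotations that either belong to the
// active provider's prefix or are not provider-specific at all, and
// drops keys owned by any other provider.
func filterProviderSpecific(annotations map[string]string, provider string) map[string]string {
	prefix := providerSpecificPrefix[provider]
	out := map[string]string{}
	for k, v := range annotations {
		owned := prefix != "" && strings.HasPrefix(k, prefix)
		foreign := false
		for _, p := range providerSpecificPrefix {
			if p != prefix && strings.HasPrefix(k, p) {
				foreign = true
			}
		}
		if owned || !foreign {
			out[k] = v
		}
	}
	return out
}

func main() {
	ann := map[string]string{
		"external-dns.alpha.kubernetes.io/cloudflare-proxied": "false",
		"external-dns.alpha.kubernetes.io/hostname":           "app.example.com",
	}
	// For the webhook instance, the cloudflare-specific key is dropped
	// and the generic hostname annotation is kept.
	fmt.Println(filterProviderSpecific(ann, "webhook"))
}
```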
Environment:
External-DNS version (use external-dns --version): v0.15.0
Helm Chart: 1.15.0
I have also just run into this. When deploying an Ingress for my mikrotik-webhook provider that also has cloudflare annotations, the cloudflare-specific configuration gets passed to my webhook as well.
I wouldn't have expected the cloudflare-proxied annotation to be passed in as provider-specific configuration to my mikrotik webhook instance.
My webhook has some custom logic that validates the provider-specific configuration fields and warns on misconfiguration, and this instance now throws endless warnings, since the annotation is passed through on every reconciliation loop.
Sample output from my webhook logs when creating that particular ingress:
{"level":"error","msg":"error converting ExternalDNS endpoint to Mikrotik DNS Record: unsupported provider specific configuration 'external-dns.alpha.kubernetes.io/cloudflare-proxied' for DNS Record of type A","time":"2024-12-14T11:36:51Z"}
We can see here that the entire annotation gets passed through as provider-specific information.
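The validation the comment describes could be sketched as follows: the webhook checks each incoming provider-specific key against an allow-list and rejects unknown ones, producing an error shaped like the log line above. This is a hypothetical sketch, not the actual mikrotik-webhook code; the function name and the set of supported keys are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// supportedProviderSpecific lists the annotation suffixes (the part
// after the last "/") this webhook understands. Hypothetical entries.
var supportedProviderSpecific = map[string]bool{
	"webhook-comment": true,
	"webhook-ttl":     true,
}

// validateProviderSpecific returns an error for the first key the
// webhook does not recognise, mirroring the quoted log message.
func validateProviderSpecific(keys []string, recordType string) error {
	for _, k := range keys {
		suffix := k[strings.LastIndex(k, "/")+1:]
		if !supportedProviderSpecific[suffix] {
			return fmt.Errorf("unsupported provider specific configuration '%s' for DNS Record of type %s", k, recordType)
		}
	}
	return nil
}

func main() {
	// The cloudflare key leaks through and triggers the error.
	err := validateProviderSpecific(
		[]string{"external-dns.alpha.kubernetes.io/cloudflare-proxied"}, "A")
	fmt.Println(err)
}
```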