NodeLocal DNS Cache intercepts all DNS queries #630

Closed
yahalomimaor opened this issue May 6, 2024 · 9 comments
Labels
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-sig: Indicates an issue or PR lacks a `sig/foo` label and requires one.

Comments

@yahalomimaor

I have deployed the NodeLocal DNS Cache DaemonSet in my cluster (k8s-dns-node-cache:1.22.28).
I'm running some DNS queries from different pods located on the same node as the DNS-cache DaemonSet, and when I check the logs of the DaemonSet
I can see the DNS queries from all of the pods running on that node,
even pods that are not configured to use the local DNS cache (169.254.20.10),
i.e. pods whose resolv.conf has the default nameserver (CoreDNS) configured, using the ClusterFirst dnsPolicy.

  1. Does it make sense that the DNS-cache DaemonSet intercepts all the DNS traffic on the node?
  2. Does each pod need to be configured explicitly to use the local DNS cache server via its DNS policy? (See the sketch below for a quick way to check what a pod is actually using.)

Thanks.
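
A minimal way to check what an individual pod is actually pointed at (a sketch; the pod name `my-test-pod` and namespace `default` are placeholders, not names from this issue):

```
# Show the nameserver(s) the pod's resolver is configured with.
kubectl exec -n default my-test-pod -- cat /etc/resolv.conf

# Show the pod's dnsPolicy (ClusterFirst is the default for most pods).
kubectl get pod -n default my-test-pod -o jsonpath='{.spec.dnsPolicy}{"\n"}'
```

In the standard iptables-mode setup, pods with ClusterFirst typically keep the kube-dns service IP in resolv.conf; they are not individually pointed at 169.254.20.10, and any interception happens at the node level (see the sketches further down in the thread).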

@k8s-ci-robot added the needs-sig label on May 6, 2024
@k8s-ci-robot
Contributor

There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:

  • /sig <group-name>
  • /wg <group-name>
  • /committee <group-name>

Please see the group list for a listing of the SIGs, working groups, and committees available.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@aojea
Member

aojea commented May 6, 2024

It is explained in more detail here: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/. Can you explain what you expected to work differently?
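
For context, the Corefile shipped with that setup typically binds both the link-local address (169.254.20.10) and the kube-dns service IP. A hedged way to check what a given deployment actually binds, assuming the stock ConfigMap name `node-local-dns` in `kube-system`:

```
# Print the cache's Corefile and look at the addresses it binds.
kubectl -n kube-system get configmap node-local-dns \
  -o jsonpath='{.data.Corefile}' | grep bind

# With the default manifest (and the service IP from this thread), each
# server block is expected to show something like:
#   bind 169.254.20.10 172.20.0.10
```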

@yahalomimaor
Author

Sure,
I have the following config in my testing pod:
[screenshot of the pod's /etc/resolv.conf]

As you can see, the configured nameserver is 172.20.0.10 (the CoreDNS service IP),
and I'm using the ClusterFirst DNS policy, which means: "use cluster DNS (CoreDNS) first; if the DNS query does not match any domains in cluster DNS, forward it to upstream DNS servers".

So basically I'm expecting the query to be forwarded to the CoreDNS service.
But when I look at the logs of the Local-DNS-Cache pod (which runs on the same node as the testing pod),
I can see the queries that the testing pod sends to the CoreDNS service.

Now the question is:
why do I see traffic from my testing pod to the CoreDNS service in the Local-DNS-Cache pod,
when the traffic is not even destined to the local cache?

Does the Local-DNS-Cache DaemonSet intercept all the DNS traffic on the node, even if it is destined to the CoreDNS service?
If so, how is that done? (A sketch of how to inspect this on a node follows after this comment.)

Thanks,
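
A hedged sketch of how to confirm the interception on a node (the 172.20.0.10 service IP is taken from this thread; the interface name assumes the standard node-local-dns manifest):

```
# Run on the node itself (or from a privileged debug pod with host networking).

# The standard DaemonSet creates a dummy interface that holds both the
# link-local cache IP and the kube-dns service IP.
ip addr show nodelocaldns

# NOTRACK rules in the raw table keep DNS traffic to those IPs out of
# conntrack, so it is answered by the local cache instead of being
# DNAT'ed to a CoreDNS endpoint by kube-proxy.
iptables -t raw -S | grep -E '169\.254\.20\.10|172\.20\.0\.10'
```

If the dummy interface and those rules are present, queries addressed to 172.20.0.10 from pods on that node would be served by node-local-dns, which would explain the log entries described above.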

@aojea
Member

aojea commented May 12, 2024

/transfer kubernetes/dns
/kind support

@k8s-ci-robot added the kind/support label on May 12, 2024
@k8s-ci-robot
Contributor

@aojea: Something went wrong or the destination repo kubernetes/kubernetes/dns does not exist.

In response to this:

/transfer kubernetes/dns
/kind support

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@aojea
Member

aojea commented May 12, 2024

/transfer dns

@k8s-ci-robot transferred this issue from kubernetes/enhancements on May 12, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 10, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 9, 2024
@yahalomimaor
Author

/close
