
Unable to scrape NodeExporter metrics via GrafanaAgent in mimir-distributed helm chart #10640

piano-man opened this issue Feb 13, 2025 · 0 comments


As the title states, I deployed the mimir-distributed Helm chart in my EKS Kubernetes cluster (with metamonitoring enabled) and realised that there was no ServiceMonitor for node-exporter metrics.

I currently have a node-exporter Service defined in my cluster as follows (`kubectl describe` output):

```
Name:                     node-exporter
Namespace:                default
Labels:                   app=node-exporter
                          app.kubernetes.io/name=node-exporter
                          name=node-exporter
Annotations:              <none>
Selector:                 app=node-exporter
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.20.16.120
IPs:                      172.20.16.120
Port:                     <unset>  9100/TCP
TargetPort:               9100/TCP
Endpoints:                ..... <IPs of endpoints>
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
```
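For reference, the describe output above corresponds roughly to the following Service manifest (a reconstruction, not my exact YAML; note that the single port carries no `name` field, which is why describe reports it as `<unset>`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: default
  labels:
    app: node-exporter
    app.kubernetes.io/name: node-exporter
    name: node-exporter
spec:
  type: ClusterIP
  selector:
    app: node-exporter
  ports:
    - port: 9100        # unnamed port -> shows as <unset> in `kubectl describe`
      targetPort: 9100
      protocol: TCP
```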

I then created a ServiceMonitor as follows (`kubectl describe` output):

```
Name:         mimir-prod-node-exporter-servicemonitor
Namespace:    mimir
Labels:       app.kubernetes.io/component=meta-monitoring
              app.kubernetes.io/instance=mimir-prod
              app.kubernetes.io/name=mimir
Annotations:  <none>
API Version:  monitoring.coreos.com/v1
Kind:         ServiceMonitor
Metadata:
  Creation Timestamp:  2025-02-13T07:36:08Z
  Generation:          1
  Resource Version:    610512473
  UID:                 <>
Spec:
  Endpoints:
    Honor Labels:  true
    Path:          /metrics
    Port:          9100
    Relabelings:
      Action:        replace
      Replacement:   node-exporter
      Target Label:  source
      Action:        replace
      Source Labels:
        node
      Target Label:  instance
      Action:        replace
      Replacement:   mimir-prod
      Target Label:  cluster
      Action:        replace
      Replacement:   node-exporter
      Source Labels:
        job
      Target Label:  job
      Action:        replace
      Source Labels:
        pod
        kubernetes_pod_name
      Target Label:  pod
      Action:        replace
      Replacement:   mimir
      Target Label:  namespace
  Namespace Selector:
    Match Names:
      default
  Selector:
    Match Labels:
      app.kubernetes.io/name:  node-exporter
```
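In manifest form, the ServiceMonitor above is roughly this (reconstructed from the describe output; note that per the Prometheus Operator API, `endpoints[].port` is matched against a Service port *name*, while `targetPort` matches by name or number):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mimir-prod-node-exporter-servicemonitor
  namespace: mimir
  labels:
    app.kubernetes.io/component: meta-monitoring
    app.kubernetes.io/instance: mimir-prod
    app.kubernetes.io/name: mimir
spec:
  endpoints:
    - honorLabels: true
      path: /metrics
      port: "9100"   # matched against the Service port *name*; this Service's port is unnamed
      relabelings:
        - action: replace
          replacement: node-exporter
          targetLabel: source
        - action: replace
          sourceLabels: [node]
          targetLabel: instance
        - action: replace
          replacement: mimir-prod
          targetLabel: cluster
        - action: replace
          replacement: node-exporter
          sourceLabels: [job]
          targetLabel: job
        - action: replace
          sourceLabels: [pod, kubernetes_pod_name]
          targetLabel: pod
        - action: replace
          replacement: mimir
          targetLabel: namespace
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      app.kubernetes.io/name: node-exporter
```

I'm not sure whether the unnamed Service port interacts badly with `port: "9100"` here (`targetPort` is the field that matches by number), so I've left the config as it currently stands.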

The ServiceMonitor should be able to identify the node-exporter Service, and the Grafana Agent should then forward its metrics. The other ServiceMonitors that ship with the mimir-distributed Helm chart all appear to be working and exporting metrics.

Is there any reason why creating a new ServiceMonitor for node-exporter isn't working here?

I'd appreciate any insights or pointers, since I've tried everything I can think of and I'm unsure why this isn't working.

NOTE: The metaMonitoring section of the Helm chart's values.yaml looks as follows:

```yaml
metaMonitoring:
  serviceMonitor:
    enabled: true
  podMonitor:
    enabled: true
  grafanaAgent:
    enabled: true
    installOperator: true
    metrics:
      remote:
        url: "http://mimir-prod-nginx.mimir.svc:80/api/v1/push"
        headers:
          X-Scope-OrgID: metamonitoring
```