This repository has been archived by the owner on Mar 7, 2023. It is now read-only.

Metrics incorrectly being reported for pods that don't have any ports #13

Open
patelrit opened this issue Jun 21, 2020 · 0 comments
Labels
bug Something isn't working

Comments

patelrit commented Jun 21, 2020

I installed kube-netc in one of our clusters and checked the metrics. I noticed that bytes_recv is being reported for pods that don't have any ports.

For example, the nirmata-cni-installer-5fkmp pod is part of a DaemonSet and does not have any ports configured. Also, 10.10.1.210:2379 is actually an etcd container running on the node (outside Kubernetes). Note that the DaemonSet runs with hostNetwork: true (see the spec below), so the pod shares the node's IP address, which may be why node-local etcd traffic gets attributed to this pod.

Another observation: there are multiple time series between the same source and destination IP, differing only in the ephemeral source port. This inflates the size of the metrics data and creates scale issues with Prometheus.

```
bytes_recv{destination_address="10.10.1.210:2379",destination_pod_name="nirmata-cni-installer-5fkmp",source_address="10.10.1.210:49080",source_pod_name="nirmata-cni-installer-5fkmp"} 459
bytes_recv{destination_address="10.10.1.210:2379",destination_pod_name="nirmata-cni-installer-5fkmp",source_address="10.10.1.210:49082",source_pod_name="nirmata-cni-installer-5fkmp"} 1.791227e+06
bytes_recv{destination_address="10.10.1.210:2379",destination_pod_name="nirmata-cni-installer-5fkmp",source_address="10.10.1.210:49084",source_pod_name="nirmata-cni-installer-5fkmp"} 787
bytes_recv{destination_address="10.10.1.210:2379",destination_pod_name="nirmata-cni-installer-5fkmp",source_address="10.10.1.210:49090",source_pod_name="nirmata-cni-installer-5fkmp"} 29955
bytes_recv{destination_address="10.10.1.210:2379",destination_pod_name="nirmata-cni-installer-5fkmp",source_address="10.10.1.210:49092",source_pod_name="nirmata-cni-installer-5fkmp"} 2026
```
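To illustrate the cardinality problem: because the source port is ephemeral, every new TCP connection produces a distinct time series. A small sketch (helper names are mine, not part of kube-netc) showing how these five series collapse into one if the port is dropped from source_address:

```python
from collections import defaultdict

# The (source_address, bytes_recv) pairs from the sample output above.
series = [
    ("10.10.1.210:49080", 459),
    ("10.10.1.210:49082", 1_791_227),
    ("10.10.1.210:49084", 787),
    ("10.10.1.210:49090", 29_955),
    ("10.10.1.210:49092", 2_026),
]

def strip_port(address: str) -> str:
    """Drop the ephemeral port, keeping only the IP."""
    return address.rsplit(":", 1)[0]

# Aggregate bytes_recv per source IP instead of per connection:
# five series become one, regardless of how many connections are opened.
totals = defaultdict(int)
for addr, value in series:
    totals[strip_port(addr)] += value

print(dict(totals))
```

The same effect could likely be achieved at query time by aggregating in Prometheus, but that doesn't reduce the number of series stored.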

DaemonSet spec (partial):

```yaml
spec:
  containers:
  - image: index.docker.io/nirmata/nirmata-cni-installer:1.10
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - cat
        - /run.sh
      failureThreshold: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: install-cni
    readinessProbe:
      exec:
        command:
        - cat
        - /run.sh
      failureThreshold: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 100Mi
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /opt/cni/bin/
      name: cni-bin
  dnsPolicy: ClusterFirst
  hostNetwork: true
  imagePullSecrets:
  - name: default-registry-secret
  initContainers:
  - command:
    - chown
    - -R
    - 1000:1000
    - /opt/cni/bin/
    image: alpine:3.6
    imagePullPolicy: IfNotPresent
    name: take-data-dir-ownership
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /opt/cni/bin/
      name: cni-bin
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  volumes:
  - hostPath:
      path: /opt/cni/bin
      type: ""
    name: cni-bin
updateStrategy:
  type: OnDelete
```
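One possible way to express the "no ports" check programmatically: a pod whose containers declare no containerPorts arguably shouldn't have connection metrics attributed to it. A minimal sketch (the helper and the simplified spec dict are hypothetical, mirroring the DaemonSet spec above):

```python
# Simplified stand-in for the pod spec above: the install-cni container
# declares no ports, matching the DaemonSet in this report.
pod_spec = {
    "containers": [
        {"name": "install-cni",
         "image": "index.docker.io/nirmata/nirmata-cni-installer:1.10"},
    ],
}

def declares_ports(spec: dict) -> bool:
    """True if any container in the spec lists at least one containerPort."""
    return any(c.get("ports") for c in spec.get("containers", []))

print(declares_ports(pod_spec))  # the install-cni container declares no ports
```

Note that declared containerPorts are informational in Kubernetes; a container can still open connections (as the ephemeral-port traffic above shows), so filtering on this alone may be too aggressive.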

@drewrip drewrip added the bug Something isn't working label Jun 21, 2020
@patelrit patelrit changed the title Metrics incorrectly being reported for pods that node have any ports Metrics incorrectly being reported for pods that don't have any ports Jan 5, 2021