Describe the bug
When checking the metrics being exported by the pod, it will on occasion not pull through all the metrics, and probe_success is reported as 0.0.
The host itself is fine, and as soon as you refresh, all the metrics get pulled through.
This also happens when I curl localhost on the pod itself. It causes alerts on our monitoring systems even though the node itself is fine.
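As a possible workaround on the alerting side (an assumption on my part, not something the exporter documents), the alert can be required to hold for several scrape intervals so a single failed scrape does not page. A sketch of such a Prometheus rule, with the rule and alert names invented for illustration:

```yaml
# Hypothetical rule: only fire when citrixadc_probe_success has stayed 0
# for 5 minutes, i.e. across multiple scrapes, instead of on one bad scrape.
groups:
  - name: citrixadc
    rules:
      - alert: CitrixAdcProbeFailed
        expr: citrixadc_probe_success == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "probe_success has been 0 on {{ $labels.nsip }} for 5m"
```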
Example output when the metrics don't come through:

```
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 435.0
python_gc_objects_collected_total{generation="1"} 12.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 64.0
python_gc_collections_total{generation="1"} 5.0
python_gc_collections_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="8",patchlevel="10",version="3.8.10"} 1.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 3.264512e+07
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 2.6468352e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.71146154557e+09
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 802.16
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 9.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP citrixadc_probe_success probe_success
# TYPE citrixadc_probe_success gauge
citrixadc_probe_success{nsip="pl2-ns-dmz2"} 0.0
```
To Reproduce
Steps to reproduce the behavior:
Steps - curl localhost:8888 on the pod multiple times until you notice that the metrics are not being pulled through.
Version of the metrics exporter - 1.4.9
Version of the Citrix ADC MPX/VPX/CPX - NS13.0 92.21.nc
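The repro above can be scripted. A minimal sketch, assuming the exporter listens on localhost:8888 as described, with the helper function name `scrape_ok` invented for illustration:

```shell
#!/bin/sh
# Hypothetical helper: treat a scrape as complete only if
# citrixadc_probe_success (the metric from the example output) reports 1.
scrape_ok() {
  # $1 = raw exporter output
  printf '%s\n' "$1" | grep -q 'citrixadc_probe_success{[^}]*} 1'
}

# Run on the pod itself: poll until an incomplete scrape is observed.
#   for i in $(seq 1 50); do
#     out=$(curl -s localhost:8888)
#     scrape_ok "$out" || echo "scrape $i: probe_success=0 (metrics missing)"
#     sleep 1
#   done
```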
Logs from the metrics exporter
Expected behavior
All metrics are pulled through on every scrape.
Additional context
Add any other context about the problem here.