diff --git a/charts/k8s-monitoring/docs/examples/scalability/sharded-kube-state-metrics/README.md b/charts/k8s-monitoring/docs/examples/scalability/sharded-kube-state-metrics/README.md
index e314b5573..284207361 100644
--- a/charts/k8s-monitoring/docs/examples/scalability/sharded-kube-state-metrics/README.md
+++ b/charts/k8s-monitoring/docs/examples/scalability/sharded-kube-state-metrics/README.md
@@ -2,7 +2,34 @@
 (NOTE: Do not edit README.md directly. It is a generated file!)
 (      To make changes, please modify values.yaml or description.txt and run `make examples`)
 -->
-# Example: scalability/sharded-kube-state-metrics/values.yaml
+# Sharded kube-state-metrics
+
+This example demonstrates how to [shard kube-state-metrics](https://github.com/kubernetes/kube-state-metrics#scaling-kube-state-metrics)
+to improve scalability. This is useful when your Kubernetes cluster has a large number of objects and kube-state-metrics
+is struggling to keep up. Symptoms of this might include:
+
+* Scraping kube-state-metrics takes longer than 60 seconds, exceeding the scrape interval.
+* The sheer volume of metric data coming from kube-state-metrics causes Alloy's resource usage to spike.
+* kube-state-metrics itself might not be able to keep up with the number of objects in the cluster.
+
+By increasing the number of replicas and enabling [automatic sharding](https://github.com/kubernetes/kube-state-metrics#automated-sharding),
+kube-state-metrics will automatically distribute the Kubernetes objects in the cluster across the shards.
+
+## Changing replicas
+
+Whenever the number of replicas changes, there are two scenarios to consider. Your requirements will dictate which one
+is best for you.
+
+### RollingUpdate
+
+If the deployment strategy is set to `RollingUpdate`, two instances of kube-state-metrics may run at the same time for
+a short period during an update. This means there shouldn't be a gap in metrics, but it could produce duplicate metrics
+while both instances are running.
+
+### Recreate
+
+If the deployment strategy is set to `Recreate`, the old kube-state-metrics pod is terminated before the new one is
+started. This means there will be a gap in metrics while the new pod is starting.
 
 ## Values
 
diff --git a/charts/k8s-monitoring/docs/examples/scalability/sharded-kube-state-metrics/description.txt b/charts/k8s-monitoring/docs/examples/scalability/sharded-kube-state-metrics/description.txt
new file mode 100644
index 000000000..e964dec2c
--- /dev/null
+++ b/charts/k8s-monitoring/docs/examples/scalability/sharded-kube-state-metrics/description.txt
@@ -0,0 +1,28 @@
+# Sharded kube-state-metrics
+
+This example demonstrates how to [shard kube-state-metrics](https://github.com/kubernetes/kube-state-metrics#scaling-kube-state-metrics)
+to improve scalability. This is useful when your Kubernetes cluster has a large number of objects and kube-state-metrics
+is struggling to keep up. Symptoms of this might include:
+
+* Scraping kube-state-metrics takes longer than 60 seconds, exceeding the scrape interval.
+* The sheer volume of metric data coming from kube-state-metrics causes Alloy's resource usage to spike.
+* kube-state-metrics itself might not be able to keep up with the number of objects in the cluster.
+
+By increasing the number of replicas and enabling [automatic sharding](https://github.com/kubernetes/kube-state-metrics#automated-sharding),
+kube-state-metrics will automatically distribute the Kubernetes objects in the cluster across the shards.
+
+## Changing replicas
+
+Whenever the number of replicas changes, there are two scenarios to consider. Your requirements will dictate which one
+is best for you.
+
+### RollingUpdate
+
+If the deployment strategy is set to `RollingUpdate`, two instances of kube-state-metrics may run at the same time for
+a short period during an update. This means there shouldn't be a gap in metrics, but it could produce duplicate metrics
+while both instances are running.
+
+### Recreate
+
+If the deployment strategy is set to `Recreate`, the old kube-state-metrics pod is terminated before the new one is
+started. This means there will be a gap in metrics while the new pod is starting.
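
The actual configuration for this example lives in its values.yaml and is rendered into the README's `## Values`
section by `make examples`. As a rough sketch of the kind of settings involved, the upstream kube-state-metrics Helm
chart exposes `replicas` and `autosharding.enabled`; how these nest under the k8s-monitoring chart's keys is an
assumption here, so treat the generated `## Values` section as authoritative:

```yaml
# Sketch only: the nesting of these keys under the k8s-monitoring chart is an assumption;
# the example's values.yaml (rendered into the "## Values" section) is authoritative.
kube-state-metrics:
  # Run several kube-state-metrics pods so each one serves only a subset of objects.
  replicas: 3
  # With automatic sharding enabled, each pod exposes metrics only for objects whose
  # UID hashes to its shard, so no two shards report the same object.
  autosharding:
    enabled: true
```

The RollingUpdate versus Recreate trade-off described above is controlled by the workload's update strategy, which the
chart may also expose as a value; the exact key name is not shown here.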