chore(doc): admonition for API saturation risk #4955
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open

adrian-salas wants to merge 1 commit into grafana:main from adrian-salas:chore/added-admonition
+18
−0
Conversation
Added caution about Kubernetes API saturation risk when running Alloy as a DaemonSet, along with recommended solutions
Comment on lines +18 to 37
```markdown
{{< admonition type="caution" >}}
**Kubernetes API Saturation Risk**

When running Alloy as a DaemonSet with the default configuration, **each Alloy pod will watch logs for all pods in the cluster**.
This means if you have 20 nodes, each Alloy pod will watch every pod's logs, resulting in substantial load on the Kubernetes API and cluster resources.
On large or resource-constrained clusters, this can cause excessive API requests, memory usage, and may even prevent new objects from being created.

#### Recommended Solutions

- **Restrict Alloy pods to only collect logs for pods on their local node.**
  See the example in the [Limit to only Pods on the same node](#limit-to-only-pods-on-the-same-node) section below for a configuration snippet that uses label selectors and environment variables to achieve this.
- **Clustering mode:** For larger deployments, consider setting up Alloy in clustering mode.
- **Monitor resource consumption:** Regularly check API server throttling, memory usage, and inflight requests, especially on cloud-managed clusters (e.g., Azure AKS).

Failure to properly configure Alloy can result in degraded cluster performance, increased cloud costs, and operational risk.
Please review your configuration carefully and consult the examples below.
{{< /admonition >}}

If you supply no connection information, this component defaults to an in-cluster configuration.
A kubeconfig file or manual connection settings can be used to override the defaults.
```
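The node-local restriction the admonition points at can be sketched as a `discovery.kubernetes` configuration along these lines. This is a sketch, not the exact snippet the linked section contains: it assumes a recent Alloy standard library where `sys.env` and `coalesce` are available, and it assumes the `HOSTNAME` environment variable is populated with the node name (for example via the Kubernetes Downward API field `spec.nodeName` in the DaemonSet spec).

```alloy
// Sketch: restrict pod discovery to the pods scheduled on the local node,
// so each DaemonSet replica only watches its own node instead of the cluster.
// Assumes HOSTNAME holds the node name (e.g. injected via the Downward API).
discovery.kubernetes "pods" {
  role = "pod"

  selectors {
    role  = "pod"
    field = "spec.nodeName=" + coalesce(sys.env("HOSTNAME"), "")
  }
}
```

With a field selector like this, the API server filters the watch on its side, so each Alloy pod receives events only for co-located pods rather than for every pod in the cluster.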
Contributor
Suggested change
The suggestion removes the admonition shown above and, after the unchanged in-cluster paragraph, adds a `## Performance considerations` section instead:

```markdown
If you supply no connection information, this component defaults to an in-cluster configuration.
A kubeconfig file or manual connection settings can be used to override the defaults.

## Performance considerations

By default, `discovery.kubernetes` discovers resources across all namespaces in your cluster.
In DaemonSet deployments, this means every {{< param "PRODUCT_NAME" >}} Pod watches all resources, which can increase API server load.

For better performance and reduced API load:

- Use the [`namespaces`](#namespaces) block to limit discovery to specific namespaces.
- Use [`selectors`](#selectors) to filter resources by labels or fields.
- Consider the node-local example in [Limit to only Pods on the same node](#limit-to-only-pods-on-the-same-node).
- Use clustering mode for larger deployments to distribute the discovery load.
- Monitor API server metrics like request rate, throttling, and memory usage, especially on managed clusters.
```
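The `namespaces` and `selectors` options mentioned in that list can be sketched together as follows. The namespace name and label value are illustrative placeholders, not values from the PR; the block and attribute names follow the documented `discovery.kubernetes` schema.

```alloy
// Sketch: scope discovery to one namespace and one app label,
// instead of watching every resource in the cluster.
// "production" and "my-app" are hypothetical example values.
discovery.kubernetes "filtered" {
  role = "pod"

  namespaces {
    names = ["production"]
  }

  selectors {
    role  = "pod"
    label = "app.kubernetes.io/name=my-app"
  }
}
```

Both filters are applied server-side by the Kubernetes API, so narrowing them reduces the number of watch events each Alloy pod receives and the load on the API server.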
How about this? We simplify the information and give the reader some really clear things they can do to handle the performance issues. This is active (gives specific and clear steps) and I think says the same thing as the Caution did.
PR Description

Added caution about Kubernetes API saturation risk when running Alloy as a DaemonSet, along with recommended solutions.

Which issue(s) this PR fixes

Proposed fix for #4787 and #4793.

PR Checklist