Improve resource association #11
I would love to see this! I have a use case where I would like to scale based on the request count coming to an Ingress.

I could use a query like this to get the request counts for an Ingress:

```json
{
"query": {
"bool": {
"must": [
{
"exists": {
"field": "prometheus.metrics.nginx_ingress_controller_requests"
}
}
],
"filter": [
{
"range": {
"@timestamp": {
"gte": "now-3m",
"lte": "now"
}
}
}
]
}
},
"size": 0,
"aggs": {
"by_ingress": {
"terms": {
"field": "prometheus.labels.ingress",
"size": 100
},
"aggs": {
"by_date": {
"date_histogram": {
"field": "@timestamp",
"calendar_interval": "minute",
"order": {
"_key": "asc"
}
},
"aggs": {
"requests": {
"max": {
"field": "prometheus.metrics.nginx_ingress_controller_requests"
}
},
"request_rate": {
"derivative": {
"buckets_path": "requests"
}
}
}
}
}
}
}
}
```

The result:

```json
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 2,
"successful": 2,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 9,
"relation": "eq"
},
"max_score": null,
"hits": []
},
"aggregations": {
"by_ingress": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "prod-app",
"doc_count": 6,
"by_date": {
"buckets": [
{
"key_as_string": "2022-08-23T11:38:00.000Z",
"key": 1661254680000,
"doc_count": 2,
"requests": {
"value": 11189
}
},
{
"key_as_string": "2022-08-23T11:39:00.000Z",
"key": 1661254740000,
"doc_count": 2,
"requests": {
"value": 11189
},
"request_rate": {
"value": 0
}
},
{
"key_as_string": "2022-08-23T11:40:00.000Z",
"key": 1661254800000,
"doc_count": 2,
"requests": {
"value": 11189
},
"request_rate": {
"value": 0
}
}
]
}
},
{
"key": "stage-app",
"doc_count": 3,
"by_date": {
"buckets": [
{
"key_as_string": "2022-08-23T11:38:00.000Z",
"key": 1661254680000,
"doc_count": 1,
"requests": {
"value": 2
}
},
{
"key_as_string": "2022-08-23T11:39:00.000Z",
"key": 1661254740000,
"doc_count": 1,
"requests": {
"value": 2
},
"request_rate": {
"value": 0
}
},
{
"key_as_string": "2022-08-23T11:40:00.000Z",
"key": 1661254800000,
"doc_count": 1,
"requests": {
"value": 2
},
"request_rate": {
"value": 0
}
}
]
}
}
]
}
}
}
```

So the question is how to associate the fields in the ES query result with the K8s resources.
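For an Ingress-based use case like the one above, the adapter would ultimately have to expose each terms bucket (`prod-app`, `stage-app`) as a metric value described by an Ingress object. As a purely illustrative sketch (the metric name, the `default` namespace, and the Ingress apiVersion below are assumptions, not something the adapter produces today), a custom metrics API response could look like this:

```json
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta2",
  "metadata": {},
  "items": [
    {
      "describedObject": {
        "kind": "Ingress",
        "apiVersion": "networking.k8s.io/v1",
        "namespace": "default",
        "name": "prod-app"
      },
      "metric": { "name": "nginx_ingress_controller_requests_rate" },
      "timestamp": "2022-08-23T11:40:00Z",
      "value": "0"
    }
  ]
}
```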
For now metrics are only associated with Pod resources: see elasticsearch-k8s-metrics-adapter/pkg/client/elasticsearch/discovery.go, lines 195 to 200 in 39a36d9.

Also, the namespace and Pod name fields are hardcoded in the Elasticsearch query.
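The referenced snippet from discovery.go is not reproduced here. Purely as an illustration (not the actual code), a query that hardcodes those two fields could look roughly like this, where the namespace, Pod name, and metric field name are placeholders:

```json
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "kubernetes.namespace": "my-namespace" } },
        { "term": { "kubernetes.pod.name": "my-pod" } },
        { "exists": { "field": "prometheus.metrics.some_metric" } }
      ]
    }
  }
}
```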
Not all metrics make sense in the context of Pods. For example, it does not make sense to expose system.memory.free, as it is not possible to associate this metric with a specific Pod. The first approach so far has been to filter on the metric name and assume documents also have the kubernetes.namespace and kubernetes.pod.name fields. We then assume that we get that kind of document from Elasticsearch:
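The original document example is not reproduced here; the following sketch only illustrates the shape being assumed, with a placeholder namespace, Pod name, metric field, and value:

```json
{
  "@timestamp": "2022-08-23T11:40:00.000Z",
  "kubernetes": {
    "namespace": "default",
    "pod": {
      "name": "my-app-6d4cf56db6-abcde"
    }
  },
  "prometheus": {
    "metrics": {
      "some_pod_metric": 42
    }
  }
}
```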
We may want to improve this association mechanism, for example by allowing other resources to be associated with a given metric. Users may also want to use fields other than kubernetes.namespace or kubernetes.pod.name to query for a given metric. See here how associations are managed with the Prometheus adapter.
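One possible direction, sketched here only as an illustration (this is not an existing configuration format of this adapter), would be a per-metric mapping from document fields to Kubernetes resources, similar in spirit to the Prometheus adapter's label-to-resource overrides. Every key and field name below is hypothetical:

```json
{
  "metrics": [
    {
      "name": "nginx_ingress_controller_requests",
      "field": "prometheus.metrics.nginx_ingress_controller_requests",
      "resources": {
        "namespace": "kubernetes.namespace",
        "ingress": "prometheus.labels.ingress"
      }
    },
    {
      "name": "some_pod_metric",
      "field": "prometheus.metrics.some_pod_metric",
      "resources": {
        "namespace": "kubernetes.namespace",
        "pod": "kubernetes.pod.name"
      }
    }
  ]
}
```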