When we exclude an Elastic resource from being managed by the operator using the `eck.k8s.elastic.co/managed=false` annotation (documentation), the expectation is that the different controllers will ignore the cluster entirely.
What I have observed is that the operator still sends network requests to unmanaged clusters.
Why is it an issue in our use case?

We mark clusters as unmanaged when we want to scale down pods without data loss. Requests made by the operator to the cluster's Elasticsearch services generate a lot of ICMP denials at the network layer.
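For reference, this is how we set the annotation; a minimal sketch, assuming a cluster named `quickstart` (the name is illustrative):

```sh
# Mark the cluster as unmanaged; the operator should stop reconciling it.
# "quickstart" is a placeholder for the actual Elasticsearch resource name.
kubectl annotate elasticsearch quickstart eck.k8s.elastic.co/managed=false
```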
Could you explain why you need to pause reconciliations in order to scale down "without data loss"?
We need to scale down "without data loss". That part is possible without pausing reconciliation: we scale the Elasticsearch StatefulSet created by the `elasticsearch.k8s.elastic.co/v1` `Elasticsearch` resource down to zero, which is why Elasticsearch becomes unreachable.
We need to pause reconciliation because otherwise the cloud-on-k8s controllers keep making network requests to the cluster. A sketch of the full sequence follows below.
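This is roughly the sequence we run; a sketch assuming ECK's default StatefulSet naming of `<cluster>-es-<nodeSet>` (the `quickstart` cluster and `default` nodeSet names are hypothetical):

```sh
# 1. Pause reconciliation so the operator does not fight the manual change.
kubectl annotate elasticsearch quickstart eck.k8s.elastic.co/managed=false

# 2. Scale the operator-created StatefulSet down to zero replicas.
#    The StatefulSet name assumes the <cluster>-es-<nodeSet> convention.
kubectl scale statefulset quickstart-es-default --replicas=0

# Later, to hand the cluster back to the operator:
kubectl annotate elasticsearch quickstart eck.k8s.elastic.co/managed=true --overwrite
```

The annotation is only supposed to pause reconciliation; what we observe is that the operator nonetheless keeps probing the cluster's Elasticsearch service, which is the behavior reported above.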
We have also noticed the operator using more CPU, as it still considered the resource to be under its management (it was probably reconciling much more often, but this is a guess).