To view data from multiple clusters simultaneously, Kubecost cluster federation must be enabled. This document walks through the necessary steps for enabling this feature.
Note: This feature currently requires an Enterprise license.
- Follow steps here to enable long-term storage.
- Ensure `remoteWrite.postgres.installLocal` is set to `true` in values.yaml.
- Provide a unique identifier for your cluster in `prometheus.server.global.external_labels.cluster_id` (see the values.yaml sketch after this list).
- Create a service definition to make Postgres accessible to your other clusters. Below is a sample service definition.
Warning: this specific service definition may expose your database externally with only basic auth protecting it. Be sure to follow your organization's security guidelines.
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cost-analyzer
    app.kubernetes.io/instance: kubecost
    app.kubernetes.io/name: cost-analyzer
  name: pgprometheus-remote
  namespace: kubecost
spec:
  ports:
  - name: server
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: postgres
  type: LoadBalancer
```
- Helm upgrade with the new values.
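Putting the settings above together, a master-cluster values.yaml excerpt might look like the following sketch. The key paths are taken from the parameters named in the steps above, and `master-cluster` is only an example identifier; confirm both against the chart version you are running.

```yaml
# Sketch of a master-cluster values.yaml excerpt; "master-cluster" is an
# example identifier and the key paths should be confirmed against your chart.
remoteWrite:
  postgres:
    installLocal: true              # deploy Postgres locally in the master cluster
prometheus:
  server:
    global:
      external_labels:
        cluster_id: master-cluster  # any unique identifier for this cluster
```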
Follow these steps for clusters that send data to the master cluster:
- As you did for the master, follow steps here to enable long-term storage.
- Set `remoteWrite.postgres.installLocal` to `false` in values.yaml so you do not redeploy Postgres in this cluster.
- Set `prometheus.server.global.external_labels.cluster_id` to any unique identifier for your cluster, e.g. dev-cluster-7.
- Set `prometheus.remoteWrite.postgres.remotePostgresAddress` to the externally accessible IP of the Postgres service exposed by the master cluster.
- Ensure `postgres.auth.password` is updated to reflect the value set on the master.
- Helm upgrade with the new values (a values.yaml sketch for a secondary cluster follows this list).
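For reference, a secondary-cluster values.yaml excerpt covering the settings above might look like this sketch. The address, identifier, and password shown are placeholders to replace with your own values, and the key paths should be confirmed against your chart version.

```yaml
# Sketch of a secondary-cluster values.yaml excerpt; the IP, cluster_id, and
# password are placeholders, and key paths follow the parameters named above.
remoteWrite:
  postgres:
    installLocal: false                    # do not redeploy Postgres in this cluster
prometheus:
  server:
    global:
      external_labels:
        cluster_id: dev-cluster-7          # unique identifier for this cluster
  remoteWrite:
    postgres:
      remotePostgresAddress: 203.0.113.25  # externally accessible IP of the master's Postgres service
postgres:
  auth:
    password: examplePassword123           # must match the password set on the master
```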
Connect to the master cluster and complete the following:
Visit the endpoint http://<master-kubecost-address>/model/costDataModelRangeLarge
Here's an example: http://localhost:9090/model/costDataModelRangeLarge
You should see data with both `cluster_id` values in the response.
1. Follow steps here to enable Thanos durable storage on a master cluster.
2. Complete the process in Step 1 for each additional secondary cluster by reusing your existing storage bucket and access credentials; however, you should not deploy multiple instances of `thanos-compact`. You can optionally deploy `thanos-bucket` in each additional cluster, but it is not required. These modules can easily be disabled in thanos/values.yaml or by passing these parameters directly via helm install or upgrade (see the values.yaml sketch after this list):

   ```sh
   --set thanos.compact.enabled=false --set thanos.bucket.enabled=false
   ```

   You can also optionally disable `thanos.store` and `thanos.query` in thanos/values.yaml or with these flags:

   ```sh
   --set thanos.query.enabled=false --set thanos.store.enabled=false
   ```

   Clusters with store/query disabled will only have access to their own metrics but will still write to the global bucket.
3. Ensure you provide a unique identifier for `prometheus.server.global.external_labels.cluster_id` to have additional clusters be visible in the Kubecost product, e.g. cluster-two.
4. Follow the same verification steps available here.
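As referenced above, the following sketch shows the equivalent values.yaml settings for a secondary cluster in the Thanos setup. It simply mirrors the --set flags and the external label from the steps above; `cluster-two` is an example identifier, and the key paths should be confirmed against your chart version.

```yaml
# Sketch of a secondary-cluster values.yaml excerpt for the Thanos setup,
# mirroring the --set flags above; "cluster-two" is an example identifier.
prometheus:
  server:
    global:
      external_labels:
        cluster_id: cluster-two  # unique identifier shown in the Kubecost product
thanos:
  compact:
    enabled: false               # run thanos-compact on the master cluster only
  bucket:
    enabled: false               # optional; not required on secondary clusters
  store:
    enabled: false               # optional; this cluster then sees only its own metrics
  query:
    enabled: false               # optional; it still writes to the global bucket
```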