Let's take a look at an example that creates a cluster with 2 shards, 2 replicas, and persistent storage.
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "repl-05"
spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: default
      podTemplate: clickhouse:19.6
  configuration:
    zookeeper:
      nodes:
        - host: zookeeper.zoo1ns
    clusters:
      - name: replicated
        layout:
          shardsCount: 2
          replicasCount: 2
  templates:
    volumeClaimTemplates:
      - name: default
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 500Mi
    podTemplates:
      - name: clickhouse:19.6
        spec:
          containers:
            - name: clickhouse-pod
              image: clickhouse/clickhouse-server:23.8
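Once the operator has reconciled this manifest, the resulting topology can be inspected from any ClickHouse node. A quick check, assuming the cluster name replicated from the manifest above:

SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters
WHERE cluster = 'replicated';

With 2 shards and 2 replicas this should return 4 rows, one per pod.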
The operator provides a set of macros:

{installation} -- ClickHouse Installation name
{cluster}      -- primary cluster name
{replica}      -- replica name in the cluster, maps to pod service name
{shard}        -- shard id
ClickHouse also supports the internal macros {database} and {table}, which map to the current database and table, respectively.
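To see which values the operator assigned to a particular replica, query the system.macros table on that node (the values differ per pod):

SELECT macro, substitution FROM system.macros;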
Now we can create a replicated table using these macros:
CREATE TABLE events_local ON CLUSTER '{cluster}' (
event_date Date,
event_type Int32,
article_id Int32,
title String
) ENGINE = ReplicatedMergeTree('/clickhouse/{installation}/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_type, article_id);
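Since the DDL runs ON CLUSTER, every replica now hosts a copy of events_local. Replication health can be verified per node via the system.replicas table (a minimal sketch, assuming the table lives in the default database):

SELECT database, table, is_leader, total_replicas, active_replicas
FROM system.replicas
WHERE table = 'events_local';

On a healthy cluster both total_replicas and active_replicas should be 2, matching replicasCount in the manifest.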
CREATE TABLE events ON CLUSTER '{cluster}' AS events_local
ENGINE = Distributed('{cluster}', default, events_local, rand());
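Because the ReplicatedMergeTree path is assembled from the macros, each shard gets its own coordination subtree in ZooKeeper. This can be confirmed by listing the child nodes (a sketch; the path assumes the installation name repl-05 and cluster name replicated from the manifest above):

SELECT name
FROM system.zookeeper
WHERE path = '/clickhouse/repl-05/replicated/tables';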
We can generate some data:
INSERT INTO events SELECT today(), rand()%3, number, 'my title' FROM numbers(100);
And check how the data is distributed across the cluster:
SELECT count() FROM events;
SELECT count() FROM events_local;
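Querying events through the Distributed engine returns all 100 rows, while events_local on any single node returns only that shard's portion (roughly 50 rows with the rand() sharding key). The per-host breakdown can be seen by grouping on hostName(), which is evaluated on each remote node:

SELECT hostName() AS host, count()
FROM events
GROUP BY host
ORDER BY host;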