| title | summary | category |
|---|---|---|
| Deploy TiDB on General Kubernetes | Learn how to deploy a TiDB cluster on general Kubernetes. | how-to |
This document describes how to deploy a TiDB cluster on general Kubernetes.
- Complete the deployment of TiDB Operator.
Refer to the TidbCluster example and API documentation to complete the TidbCluster Custom Resource (CR), and save it to the `<cluster-name>/tidb-cluster.yaml` file. Make sure to switch the TidbCluster example and API documentation to the version of TiDB Operator that you are currently using.
Note that the TidbCluster CR has multiple parameters for the image configuration:

- `spec.version`: the format is `imageTag`, such as `v3.1.0`
- `spec.<pd/tidb/tikv/pump>.baseImage`: the format is `imageName`, such as `pingcap/tidb`
- `spec.<pd/tidb/tikv/pump>.version`: the format is `imageTag`, such as `v3.1.0`
- `spec.<pd/tidb/tikv/pump>.image`: the format is `imageName:imageTag`, such as `pingcap/tidb:v3.1.0`
The priority for acquiring the image configuration is as follows:

`spec.<pd/tidb/tikv/pump>.baseImage` + `spec.<pd/tidb/tikv/pump>.version` > `spec.<pd/tidb/tikv/pump>.baseImage` + `spec.version` > `spec.<pd/tidb/tikv/pump>.image`
Usually, components in a cluster are of the same version. It is recommended to configure `spec.<pd/tidb/tikv/pump>.baseImage` and `spec.version`.
By default, the modified configuration is not automatically applied to the TiDB cluster; the new configuration file is loaded only when the Pod restarts. It is recommended that you set `spec.configUpdateStrategy` to `RollingUpdate` to enable automatic updates of the configuration. This way, every time the configuration is updated, a rolling update is triggered for all components automatically, and the modified configuration is applied to the cluster.
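For reference, a minimal `<cluster-name>/tidb-cluster.yaml` might look like the following sketch. The cluster name, replica counts, and storage sizes are illustrative assumptions; check the API documentation of your TiDB Operator version for the full list of fields:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  # Cluster-level default image tag, used by components that do not set their own version
  version: v3.1.0
  # Apply configuration changes through an automatic rolling update
  configUpdateStrategy: RollingUpdate
  pd:
    baseImage: pingcap/pd
    replicas: 3
    requests:
      storage: "1Gi"
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
    requests:
      storage: "1Gi"
  tidb:
    baseImage: pingcap/tidb
    replicas: 2
```

With `baseImage` and the cluster-level `version` set this way, all components run the same release, which matches the recommendation above.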
To deploy the TiDB cluster monitor, refer to the TidbMonitor example and API documentation to complete the TidbMonitor CR, and save it to the `<cluster-name>/tidb-monitor.yaml` file. Make sure to switch the TidbMonitor example and API documentation to the version of TiDB Operator that you are currently using.
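As a reference, a minimal `<cluster-name>/tidb-monitor.yaml` might look like the following sketch. The monitor component versions shown are illustrative assumptions; the entry under `clusters` must match the name of your TidbCluster CR:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: basic
spec:
  clusters:
    - name: basic   # must match the TidbCluster name
  prometheus:
    baseImage: prom/prometheus
    version: v2.11.1
  grafana:
    baseImage: grafana/grafana
    version: 6.0.1
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v3.1.0
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  imagePullPolicy: IfNotPresent
```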
- For the production environment, local storage is recommended. The actual local storage in Kubernetes clusters might be sorted by disk types, such as `nvme-disks` and `sas-disks`.
- For the demonstration environment or functional verification, you can use network storage, such as `ebs` and `nfs`.
Different components of a TiDB cluster have different disk requirements. Before deploying a TiDB cluster, select the appropriate storage class for each component according to the storage classes supported by the current Kubernetes cluster and usage scenario.
You can set the storage class by modifying `storageClassName` of each component in `<cluster-name>/tidb-cluster.yaml` and `<cluster-name>/tidb-monitor.yaml`. For the storage classes supported by the Kubernetes cluster, check with your system administrator.
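For example, to place PD and TiKV data on local NVMe disks, you might set `storageClassName` in `<cluster-name>/tidb-cluster.yaml` as in the following sketch, assuming a storage class named `nvme-disks` exists in your Kubernetes cluster:

```yaml
spec:
  pd:
    storageClassName: nvme-disks  # assumed storage class; check with your administrator
  tikv:
    storageClassName: nvme-disks
```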
Note:

If you set a storage class that does not exist in the Kubernetes cluster when creating the TiDB cluster, the cluster creation goes to the Pending state. In this situation, you must destroy the TiDB cluster in Kubernetes.
The deployed cluster topology by default has 3 PD Pods, 3 TiKV Pods, and 2 TiDB Pods. In this deployment topology, the scheduler extender of TiDB Operator requires at least 3 nodes in the Kubernetes cluster to provide high availability.
If the number of Kubernetes cluster nodes is less than 3, 1 PD Pod goes to the Pending state, and neither TiKV Pods nor TiDB Pods are created.
When the number of nodes in the Kubernetes cluster is less than 3, to start the TiDB cluster, you can reduce both the number of PD Pods and the number of TiKV Pods in the default deployment to `1`.
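For example, the following sketch reduces the PD and TiKV replica counts in `<cluster-name>/tidb-cluster.yaml` for a single-node test cluster; note that this configuration provides no high availability:

```yaml
spec:
  pd:
    replicas: 1
  tikv:
    replicas: 1
```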
After you deploy and configure TiDB Operator, deploy the TiDB cluster by taking the following steps:
- Create the `Namespace`:

    {{< copyable "shell-regular" >}}

    ```shell
    kubectl create namespace <namespace>
    ```
    Note:

    A `namespace` is a virtual cluster backed by the same physical cluster. You can give it a name that is easy to memorize, such as the same name as `cluster-name`.

- Deploy the TiDB cluster:
    {{< copyable "shell-regular" >}}

    ```shell
    kubectl apply -f <cluster-name> -n <namespace>
    ```
- View the Pod status:

    {{< copyable "shell-regular" >}}

    ```shell
    kubectl get po -n <namespace> -l app.kubernetes.io/instance=<cluster-name>
    ```
You can use TiDB Operator to deploy and manage multiple TiDB clusters in a single Kubernetes cluster by repeating the above procedure and replacing `cluster-name` with a different name.

Different clusters can be in the same `namespace` or in different ones, depending on your actual needs.
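For example, the following shell sketch deploys two hypothetical clusters, `cluster1` and `cluster2`, into separate namespaces; the directory and namespace names are assumptions, and each directory contains the corresponding `tidb-cluster.yaml` and `tidb-monitor.yaml` files:

{{< copyable "shell-regular" >}}

```shell
# Hypothetical names: each directory holds the CR files for one cluster
kubectl create namespace cluster1
kubectl apply -f cluster1 -n cluster1

kubectl create namespace cluster2
kubectl apply -f cluster2 -n cluster2
```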
If you want to initialize your cluster after deployment, refer to Initialize a TiDB Cluster in Kubernetes.