en: update doc for deploy binlog (pingcap#53)
* en: update doc for deploy binlog

* Add update in pingcap#41

* Apply suggestions from Coco

Co-Authored-By: Keke Yi <[email protected]>

* update zh TOC

* Apply suggestions from Daniel

Co-Authored-By: DanielZhangQD <[email protected]>

* Update en/deploy-tidb-binlog.md

Co-Authored-By: DanielZhangQD <[email protected]>

* fix deadlink caused by file name change

* fix deadlink

* Apply suggestions from code review

Co-Authored-By: Lilian Lee <[email protected]>

Co-authored-by: Keke Yi <[email protected]>
Co-authored-by: DanielZhangQD <[email protected]>
Co-authored-by: Lilian Lee <[email protected]>
4 people authored Mar 31, 2020
1 parent 743b13f commit b659a0a
Showing 9 changed files with 76 additions and 90 deletions.
4 changes: 2 additions & 2 deletions en/TOC.md
@@ -18,13 +18,14 @@
- [TiDB in GCP GKE](deploy-on-gcp-gke.md)
- [TiDB in Alibaba Cloud ACK](deploy-on-alibaba-cloud.md)
- [Access TiDB in Kubernetes](access-tidb.md)
- [Deploy TiDB Binlog](deploy-tidb-binlog.md)
+ Configure
- [Initialize a Cluster](initialize-a-cluster.md)
- [Configure TiDB Using Helm](configure-a-tidb-cluster.md)
- [Configure TiDB Using TidbCluster](configure-cluster-using-tidbcluster.md)
- [Configure Backup](configure-backup.md)
- [Configure Storage Class](configure-storage-class.md)
- [Configure tidb-drainer Chart](configure-tidb-binlog-drainer.md)
- Monitor
- [Monitor TiDB Using Helm](monitor-a-tidb-cluster.md)
- [Monitor TiDB Using TidbMonitor](monitor-using-tidbmonitor.md)
@@ -41,7 +42,6 @@
- [Restore Data from S3-Compatible Storage](restore-from-s3.md)
- [Restore Data with TiDB Lightning](restore-data-using-tidb-lightning.md)
- [Collect TiDB Logs](collect-tidb-logs.md)
- [Maintain TiDB Binlog](maintain-tidb-binlog.md)
- [Enable Automatic Failover](use-auto-failover.md)
- [Enable Admission Controller](enable-admission-webhook.md)
+ Scale
2 changes: 1 addition & 1 deletion en/backup-and-restore-using-helm-charts.md
@@ -133,7 +133,7 @@ The `pingcap/tidb-backup` helm chart helps restore a TiDB cluster using backup d
Incremental backup uses [TiDB Binlog](https://pingcap.com/docs/v3.0/reference/tidb-binlog/overview) to collect binlog data from TiDB and provide near real-time backup and replication to downstream platforms.
For the detailed guide of maintaining TiDB Binlog in Kubernetes, refer to [TiDB Binlog](deploy-tidb-binlog.md).
### Scale in Pump
23 changes: 6 additions & 17 deletions en/collect-tidb-logs.md
@@ -63,26 +63,15 @@ For versions prior to 3.0, by default, TiDB prints slow query logs to standard o

In some cases, you may want to use some tools or automated systems to analyze and process the log content. The application log of each TiDB component uses [unified log format](https://github.com/tikv/rfcs/blob/master/text/2018-12-19-unified-log-format.md), which facilitates parsing with other programs. However, because slow query logs use a multi-line format that is compatible with MySQL, it might be difficult to parse slow query logs when they are mixed with application logs.

If you want to separate the slow query logs from the application logs, you can configure the `spec.tidb.separateSlowLog: true` parameter in the `TidbCluster` CR. This outputs the slow query log to a dedicated sidecar container so that it can be stored in a separate file on the host.
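As a rough sketch, the relevant fragment of the `TidbCluster` CR might look like the following (the cluster name is a placeholder, and other required fields of the CR are omitted):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: <cluster-name>
spec:
  tidb:
    # Output the slow query log to a dedicated sidecar container
    separateSlowLog: true
```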

Then you can view the slow query log through the sidecar container named `slowlog`:

{{< copyable "shell-regular" >}}

```shell
kubectl logs -n <namespace> <tidbPodName> -c slowlog
```

For 3.0 and later versions, TiDB outputs slow query logs to a separate `slowlog.log` file, and `separateSlowLog` is enabled by default, so you can view slow query logs directly from the sidecar container without additional settings.

3 changes: 3 additions & 0 deletions en/configure-tidb-binlog-drainer.md
@@ -14,6 +14,8 @@ The following table contains all configuration parameters available for the `tid

| Parameter | Description | Default Value |
| :----- | :---- | :----- |
| `timezone` | Timezone configuration | `UTC` |
| `drainerName` | The name of the drainer `StatefulSet` | `""` |
| `clusterName` | The name of the source TiDB cluster | `demo` |
| `clusterVersion` | The version of the source TiDB cluster | `v3.0.1` |
| `baseImage` | The base image of TiDB Binlog | `pingcap/tidb-binlog` |
@@ -23,6 +25,7 @@
| `storage` | The storage limit of the drainer Pod. Note that you should set a larger size if `db-type` is set to `pb` | `10Gi` |
| `disableDetect` | Determines whether to disable causality detection | `false` |
| `initialCommitTs` | Used to initialize a checkpoint if the drainer does not have one | `0` |
| `tlsCluster.enabled` | Whether or not to enable TLS between clusters | `false` |
| `config` | The configuration file passed to the drainer. Detailed reference: [drainer.toml](https://github.com/pingcap/tidb-binlog/blob/master/cmd/drainer/drainer.toml) | (see below) |
| `resources` | The resource limits and requests of the drainer Pod | `{}` |
| `nodeSelector` | Ensures that the drainer Pod is only scheduled to the node with the specific key-value pair as the label. Detailed reference: [nodeselector](https://kubernetes.io/docs/concepts/configuration/assign-Pod-node/#nodeselector) | `{}` |
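Combining some of the parameters above, a minimal custom `values.yaml` for the `tidb-drainer` chart might look as follows. This is a sketch, not a definitive configuration: the `drainerName` and cluster values are illustrative placeholders, and any parameter left out keeps its default from the table above.

```yaml
# Illustrative values only -- adjust to your cluster
timezone: UTC
drainerName: my-drainer        # hypothetical StatefulSet name
clusterName: <cluster-name>    # name of the source TiDB cluster
clusterVersion: v3.0.1
baseImage: pingcap/tidb-binlog
storage: 10Gi                  # use a larger size if db-type is pb
initialCommitTs: 0
tlsCluster:
  enabled: false
```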
128 changes: 61 additions & 67 deletions en/maintain-tidb-binlog.md → en/deploy-tidb-binlog.md
@@ -1,10 +1,10 @@
---
title: Deploy TiDB Binlog
summary: Learn how to deploy TiDB Binlog for a TiDB cluster in Kubernetes.
category: how-to
---

# Deploy TiDB Binlog

This document describes how to maintain [TiDB Binlog](https://pingcap.com/docs/v3.0/reference/tidb-binlog/overview) of a TiDB cluster in Kubernetes.

@@ -13,71 +13,78 @@ This document describes how to maintain [TiDB Binlog](https://pingcap.com/docs/v
- [Deploy TiDB Operator](deploy-tidb-operator.md);
- [Install Helm](tidb-toolkit.md#use-helm) and configure it with the official PingCAP chart.

## Deploy TiDB Binlog of a TiDB cluster

TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster with TiDB Binlog enabled, or enable TiDB Binlog in an existing TiDB cluster, take the following steps.

### Deploy Pump

1. Modify the `TidbCluster` CR to add the Pump configuration.

For example:
```yaml
spec:
  ...
  pump:
    baseImage: pingcap/tidb-binlog
    version: v3.0.11
    replicas: 1
    storageClassName: local-storage
    requests:
      storage: 30Gi
    schedulerName: default-scheduler
    config:
      addr: 0.0.0.0:8250
      gc: 7
      heartbeat-interval: 2
```
Edit `version`, `replicas`, `storageClassName`, and `requests.storage` according to your cluster.

2. Set affinity and anti-affinity for TiDB and Pump.

If you enable TiDB Binlog in the production environment, it is recommended to set affinity and anti-affinity for TiDB and the Pump component; if you enable TiDB Binlog in a test environment on the internal network, you can skip this step.

By default, the affinity of TiDB and Pump is set to `{}`. Currently, each TiDB instance does not have a corresponding Pump instance by default. When TiDB Binlog is enabled, if Pump and TiDB are separately deployed and network isolation occurs, and `ignore-error` is enabled in TiDB components, TiDB loses binlogs.

In this situation, it is recommended to deploy a TiDB instance and a Pump instance on the same node using the affinity feature, and to split Pump instances on different nodes using the anti-affinity feature. For each node, only one Pump instance is required. The steps are as follows:

* Configure `spec.tidb.affinity` as follows:

```yaml
spec:
  tidb:
    affinity:
      podAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: "app.kubernetes.io/component"
                operator: In
                values:
                - "pump"
              - key: "app.kubernetes.io/managed-by"
                operator: In
                values:
                - "tidb-operator"
              - key: "app.kubernetes.io/name"
                operator: In
                values:
                - "tidb-cluster"
              - key: "app.kubernetes.io/instance"
                operator: In
                values:
                - <cluster-name>
            topologyKey: kubernetes.io/hostname
```

* Configure `spec.pump.affinity` as follows:

```yaml
spec:
  pump:
    affinity:
      podAffinity:
@@ -101,7 +108,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
              - key: "app.kubernetes.io/instance"
                operator: In
                values:
                - <cluster-name>
            topologyKey: kubernetes.io/hostname
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
@@ -124,35 +131,16 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
              - key: "app.kubernetes.io/instance"
                operator: In
                values:
                - <cluster-name>
            topologyKey: kubernetes.io/hostname
```
> **Note:**
>
> If you update the affinity configuration of the TiDB components, it will cause rolling updates of the TiDB components in the cluster.

3. Create a new TiDB cluster or update an existing cluster:

* Create a new TiDB cluster with TiDB Binlog enabled:

{{< copyable "shell-regular" >}}

```shell
helm install pingcap/tidb-cluster --name=<release-name> --namespace=<namespace> --version=<chart-version> -f <values-file>
```

* Update an existing TiDB cluster to enable TiDB Binlog:

> **Note:**
>
> If you set the affinity for TiDB and its components, updating the existing TiDB cluster causes rolling updates of the TiDB components in the cluster.

{{< copyable "shell-regular" >}}

```shell
helm upgrade <release-name> pingcap/tidb-cluster --version=<chart-version> -f <values-file>
```

## Deploy drainer

To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB cluster, take the following steps:

1. Make sure that the PingCAP Helm repository is up to date:

@@ -170,6 +158,8 @@

2. Get the default `values.yaml` file to facilitate customization:

{{< copyable "shell-regular" >}}

```shell
helm inspect values pingcap/tidb-drainer --version=<chart-version> > values.yaml
```
@@ -206,9 +196,13 @@ By default, only one downstream drainer is created. You can install the `tidb-dr
{{< copyable "shell-regular" >}}

```shell
helm install pingcap/tidb-drainer --name=<cluster-name> --namespace=<namespace> --version=<chart-version> -f values.yaml
```

> **Note:**
>
> This chart must be installed to the same namespace as the source TiDB cluster.

## Enable TLS

If you want to enable TLS for the TiDB cluster and TiDB Binlog, refer to [Enable TLS between Components](enable-tls-between-components.md).
2 changes: 1 addition & 1 deletion zh/TOC.md
@@ -34,7 +34,7 @@
- [销毁 TiDB 集群](destroy-a-tidb-cluster.md)
- [重启 TiDB 集群](restart-a-tidb-cluster.md)
- [维护 TiDB 集群所在节点](maintain-a-kubernetes-node.md)
- [收集日志](collect-tidb-logs.md)
- [集群故障自动转移](use-auto-failover.md)
- [开启 TiDB Operator 准入控制器](enable-admission-webhook.md)
+ TiDB 集群伸缩
2 changes: 1 addition & 1 deletion zh/_index.md
@@ -41,7 +41,7 @@ TiDB Operator 提供了多种方式来部署 Kubernetes 上的 TiDB 集群:
+ [TiDB 集群备份恢复](backup-and-restore-using-helm-charts.md)
+ [配置 TiDB 集群故障自动转移](use-auto-failover.md)
+ [监控 TiDB 集群](monitor-a-tidb-cluster.md)
+ [TiDB 集群日志收集](collect-tidb-logs.md)
+ [维护 TiDB 所在的 Kubernetes 节点](maintain-a-kubernetes-node.md)

当集群出现问题需要进行诊断时,你可以:
File renamed without changes.
2 changes: 1 addition & 1 deletion zh/tidb-operator-overview.md
@@ -82,7 +82,7 @@ TiDB Operator 提供了多种方式来部署 Kubernetes 上的 TiDB 集群:
+ [TiDB 集群备份恢复](restore-from-aws-s3-using-br.md)
+ [配置 TiDB 集群故障自动转移](use-auto-failover.md)
+ [监控 TiDB 集群](monitor-a-tidb-cluster.md)
+ [TiDB 集群日志收集](collect-tidb-logs.md)
+ [维护 TiDB 所在的 Kubernetes 节点](maintain-a-kubernetes-node.md)

当集群出现问题需要进行诊断时,你可以:
