8 changes: 4 additions & 4 deletions br/br-checkpoint-restore.md
@@ -69,7 +69,7 @@ Cross-major-version checkpoint recovery is not recommended. For clusters where `

> **Note:**
>
-> Starting from v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter.
+> Starting from v8.5.5 and v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter.

Checkpoint restore operations are divided into two parts: snapshot restore and PITR restore.

@@ -93,13 +93,13 @@ Note that before entering the log restore phase during the initial restore, `br`

> **Note:**
>
-> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
+> To ensure compatibility with clusters of earlier versions, starting from v8.5.5 and v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
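For illustration only, with a hypothetical bucket, cluster ID, and restored TS, the generated file would sit under the log backup directory roughly as follows:

```shell
# List the pitr_id_map files written to the log backup directory
# (the bucket, prefix, cluster ID, and restored TS below are hypothetical).
aws s3 ls "s3://backup-bucket/log-backup-prefix/pitr_id_maps/"
# Example output:
# pitr_id_map.cluster_id:7289123456789012345.restored_ts:446745678901234567
```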

## Implementation details: store checkpoint data in the external storage

> **Note:**
>
-> Starting from v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
+> Starting from v8.5.5 and v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
>
> ```shell
> ./br restore full -s "s3://backup-bucket/backup-prefix" --checkpoint-storage "s3://temp-bucket/checkpoints"
@@ -159,4 +159,4 @@ Note that before entering the log restore phase during the initial restore, `br`

> **Note:**
>
-> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
+> To ensure compatibility with clusters of earlier versions, starting from v8.5.5 and v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
2 changes: 1 addition & 1 deletion br/br-compact-log-backup.md
@@ -15,7 +15,7 @@ Traditional log backups store write operations in a highly unstructured manner,
- **Write amplification**: all writes must be compacted from L0 to the bottommost level by level.
- **Dependency on full backups**: frequent full backups are required to control the amount of recovery data, which can impact application operations.

-Starting from v9.0.0, the compact log backup feature provides offline compaction capabilities, converting unstructured log backup data into structured SST files. This results in the following improvements:
+Starting from v8.5.5 and v9.0.0, the compact log backup feature provides offline compaction capabilities, converting unstructured log backup data into structured SST files. This results in the following improvements:

- SST files can be quickly imported into the cluster, **improving recovery performance**.
- Redundant data is removed during compaction, **reducing storage space consumption**.
10 changes: 5 additions & 5 deletions br/br-pitr-manual.md
@@ -505,7 +505,7 @@ tiup br restore point --pd="${PD_IP}:2379"

### Restore data using filters

-Starting from TiDB v9.0.0, you can use filters during PITR to restore specific databases or tables, enabling more fine-grained control over the data to be restored.
+Starting from TiDB v8.5.5 and v9.0.0, you can use filters during PITR to restore specific databases or tables, enabling more fine-grained control over the data to be restored.

The filter patterns follow the same [table filtering syntax](/table-filter.md) as other BR operations:

@@ -557,7 +557,7 @@ tiup br restore point --pd="${PD_IP}:2379" \

### Concurrent restore operations

-Starting from TiDB v9.0.0, you can run multiple PITR restore tasks concurrently. This feature allows you to restore different datasets in parallel, improving efficiency for large-scale restore scenarios.
+Starting from TiDB v8.5.5 and v9.0.0, you can run multiple PITR restore tasks concurrently. This feature allows you to restore different datasets in parallel, improving efficiency for large-scale restore scenarios.

Usage example for concurrent restores:

@@ -586,7 +586,7 @@ tiup br restore point --pd="${PD_IP}:2379" \

### Compatibility between ongoing log backup and snapshot restore

-Starting from v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):
+Starting from v8.5.5 and v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):

- The node performing backup and restore operations has the following necessary permissions:
- Read access to the external storage containing the backup source, for snapshot restore
@@ -604,11 +604,11 @@ If any of the above conditions are not met, you can restore the data by followin

> **Note:**
>
-> When restoring a log backup that contains records of snapshot (full) restore data, you must use BR v9.0.0 or later. Otherwise, restoring the recorded full restore data might fail.
+> When restoring a log backup that contains records of snapshot (full) restore data, you must use BR v8.5.5 or later. Otherwise, restoring the recorded full restore data might fail.

### Compatibility between ongoing log backup and PITR operations

-Starting from TiDB v9.0.0, you can perform PITR operations while a log backup task is running by default. The system automatically handles compatibility between these operations.
+Starting from TiDB v8.5.5 and v9.0.0, you can perform PITR operations while a log backup task is running by default. The system automatically handles compatibility between these operations.

#### Important limitation for PITR with ongoing log backup

2 changes: 1 addition & 1 deletion br/br-snapshot-guide.md
@@ -153,7 +153,7 @@ When you perform a snapshot backup, BR backs up system tables as tables with the
- Starting from BR v5.1.0, when you back up snapshots, BR automatically backs up the **system tables** in the `mysql` schema, but does not restore these system tables by default.
- Starting from v6.2.0, BR lets you specify `--with-sys-table` to restore **data in some system tables**.
- Starting from v7.6.0, BR enables `--with-sys-table` by default, which means that BR restores **data in some system tables** by default.
-- Starting from v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore system tables physically. This approach uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the system tables in the `mysql` database. Unlike the logical restoration of system tables using the `REPLACE INTO` SQL statement, physical restoration completely overwrites the existing data in the system tables.
+- Starting from v8.5.5 and v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore system tables physically. This approach uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the system tables in the `mysql` database. Unlike the logical restoration of system tables using the `REPLACE INTO` SQL statement, physical restoration completely overwrites the existing data in the system tables.
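Conceptually, the physical swap resembles the following SQL; the exact statements and temporary table names that BR issues may differ, and `stats_meta` is used here only as an illustration:

```sql
-- Conceptual sketch of the atomic swap performed with RENAME TABLE
-- (table names are illustrative; BR generates the actual statements).
RENAME TABLE
    mysql.stats_meta TO __TiDB_BR_Temporary_mysql.stats_meta_deleted,
    __TiDB_BR_Temporary_mysql.stats_meta TO mysql.stats_meta;
```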

**BR can restore data in the following system tables:**

6 changes: 3 additions & 3 deletions br/br-snapshot-manual.md
@@ -129,11 +129,11 @@ tiup br restore full \

> **Note:**
>
-> Starting from v9.0.0, when the `--load-stats` parameter is set to `false`, BR no longer writes statistics for the restored tables to the `mysql.stats_meta` table. After the restore is complete, you can manually execute the [`ANALYZE TABLE`](/sql-statements/sql-statement-analyze-table.md) SQL statement to update the relevant statistics.
+> Starting from v8.5.5 and v9.0.0, when the `--load-stats` parameter is set to `false`, BR no longer writes statistics for the restored tables to the `mysql.stats_meta` table. After the restore is complete, you can manually execute the [`ANALYZE TABLE`](/sql-statements/sql-statement-analyze-table.md) SQL statement to update the relevant statistics.

When the backup and restore feature backs up data, it stores statistics in JSON format within the `backupmeta` file. When restoring data, it loads statistics in JSON format into the cluster. For more information, see [LOAD STATS](/sql-statements/sql-statement-load-stats.md).

-Starting from 9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. When restoring data to a new cluster using the `br` command-line tool, and the IDs of tables and partitions between the upstream and downstream clusters can be reused (otherwise, BR will automatically fall back to logically load statistics), enabling `--fast-load-sys-tables` lets BR to first restore the statistics-related system tables to the temporary system database `__TiDB_BR_Temporary_mysql`, and then atomically swap these tables with the corresponding tables in the `mysql` database using the `RENAME TABLE` statement.
+Starting from v8.5.5 and v9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. When you restore data to a new cluster using the `br` command-line tool and the IDs of tables and partitions can be reused between the upstream and downstream clusters (otherwise, BR automatically falls back to loading statistics logically), enabling `--fast-load-sys-tables` lets BR first restore the statistics-related system tables to the temporary system database `__TiDB_BR_Temporary_mysql`, and then atomically swap these tables with the corresponding tables in the `mysql` database using the `RENAME TABLE` statement.

The following is an example:

@@ -194,7 +194,7 @@ Download&Ingest SST <-----------------------------------------------------------
Restore Pipeline <-------------------------/...............................................> 17.12%
```

-Starting from TiDB v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore statistics physically in a new cluster:
+Starting from TiDB v8.5.5 and v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore statistics physically in a new cluster:

```shell
tiup br restore full \
2 changes: 1 addition & 1 deletion configure-store-limit.md
@@ -54,7 +54,7 @@ tiup ctl:v<CLUSTER_VERSION> pd store limit all 5 add-peer // All stores
tiup ctl:v<CLUSTER_VERSION> pd store limit all 5 remove-peer // All stores can at most delete 5 peers per minute.
```

-Starting from v8.5.5 and v9.0.0, you can set the speed limit for removing-peers operations for all stores of a specific storage engine type, as shown in the following examples:
+Starting from v8.5.5 and v9.0.0, you can set the speed limit for remove-peer operations for all stores of a specific storage engine type, as shown in the following examples:

```bash
tiup ctl:v<CLUSTER_VERSION> pd store limit all engine tikv 5 remove-peer // All TiKV stores can at most remove 5 peers per minute.
6 changes: 3 additions & 3 deletions system-variables.md
@@ -1775,7 +1775,7 @@ mysql> SELECT job_info FROM mysql.analyze_jobs ORDER BY end_time DESC LIMIT 1;
- If `tidb_ddl_enable_fast_reorg` is set to `OFF`, `ADD INDEX` is executed as a transaction. If there are many update operations such as `UPDATE` and `REPLACE` in the target columns during the `ADD INDEX` execution, a larger batch size indicates a larger probability of transaction conflicts. In this case, it is recommended that you set the batch size to a smaller value. The minimum value is 32.
- If the transaction conflict does not exist, or if `tidb_ddl_enable_fast_reorg` is set to `ON`, you can set the batch size to a large value. This makes data backfilling faster but also increases the write pressure on TiKV. For a proper batch size, you also need to refer to the value of `tidb_ddl_reorg_worker_cnt`. See [Interaction Test on Online Workloads and `ADD INDEX` Operations](https://docs.pingcap.com/tidb/dev/online-workloads-and-add-index-operations) for reference.
- Starting from v8.3.0, this parameter is supported at the SESSION level. Modifying the parameter at the GLOBAL level will not impact currently running DDL statements. It will only apply to DDLs submitted in new sessions.
-- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> BATCH_SIZE = <new_batch_size>;`.
+- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> BATCH_SIZE = <new_batch_size>;`. For more information, see [`ADMIN ALTER DDL JOBS`](/sql-statements/sql-statement-admin-alter-ddl.md).
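For example, assuming a running DDL job with ID `123` (which you can look up with `ADMIN SHOW DDL JOBS`), the batch size can be adjusted on the fly:

```sql
-- Reduce the batch size of a running DDL job; the job ID 123 is hypothetical.
ADMIN ALTER DDL JOBS 123 BATCH_SIZE = 128;
```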

### tidb_ddl_reorg_priority

@@ -1851,7 +1851,7 @@ Assume that you have a cluster with 4 TiDB nodes and multiple TiKV nodes. In thi
- Unit: Threads
- This variable is used to set the concurrency of the DDL operation in the `re-organize` phase.
- Starting from v8.3.0, this parameter is supported at the SESSION level. Modifying the parameter at the GLOBAL level will not impact currently running DDL statements. It will only apply to DDLs submitted in new sessions.
-- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> THREAD = <new_thread_count>;`.
+- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> THREAD = <new_thread_count>;`. For more information, see [`ADMIN ALTER DDL JOBS`](/sql-statements/sql-statement-admin-alter-ddl.md).
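Similarly, assuming a running DDL job with ID `123`, the reorg concurrency can be changed while the job runs:

```sql
-- Change the reorg worker count of a running DDL job; the job ID 123 is hypothetical.
ADMIN ALTER DDL JOBS 123 THREAD = 8;
```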

### `tidb_enable_fast_create_table` <span class="version-mark">New in v8.0.0</span>

@@ -6407,7 +6407,7 @@ For details, see [Identify Slow Queries](/identify-slow-queries.md).
> - `PARALLEL` and `PARALLEL-FAST` modes are incompatible with [`tidb_tso_client_batch_max_wait_time`](#tidb_tso_client_batch_max_wait_time-new-in-v530) and [`tidb_enable_tso_follower_proxy`](#tidb_enable_tso_follower_proxy-new-in-v530). If either [`tidb_tso_client_batch_max_wait_time`](#tidb_tso_client_batch_max_wait_time-new-in-v530) is set to a non-zero value or [`tidb_enable_tso_follower_proxy`](#tidb_enable_tso_follower_proxy-new-in-v530) is enabled, configuring `tidb_tso_client_rpc_mode` does not take effect, and TiDB always works in `DEFAULT` mode.
> - `PARALLEL` and `PARALLEL-FAST` modes are designed to reduce the average time for retrieving TS in TiDB. In situations with significant latency fluctuations, such as long-tail latency or latency spikes, these two modes might not provide any remarkable performance improvements.

-### tidb_cb_pd_metadata_error_rate_threshold_ratio <span class="version-mark">New in v9.0.0</span>
+### tidb_cb_pd_metadata_error_rate_threshold_ratio <span class="version-mark">New in v8.5.5 and v9.0.0</span>

- Scope: GLOBAL
- Persists to cluster: Yes
2 changes: 1 addition & 1 deletion tidb-configuration-file.md
@@ -640,7 +640,7 @@ Configuration items related to performance.
### `enable-async-batch-get` <span class="version-mark">New in v8.5.5 and v9.0.0</span>

+ Controls whether TiDB uses asynchronous mode to execute the Batch Get operator. Using asynchronous mode can reduce goroutine overhead and provide better performance. Generally, there is no need to modify this configuration item.
-+ Default value: `true`
++ Default value: `true` for v9.0.0 and later versions. In v8.5.5, the default value is `false`.
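As a sketch, explicitly setting this item in the TiDB configuration file would look like the following, assuming it belongs to the `[performance]` section that this hunk describes:

```toml
# Hypothetical excerpt from the TiDB configuration file.
[performance]
# Execute the Batch Get operator asynchronously (set to false to disable).
enable-async-batch-get = true
```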

## opentracing
