Merged
10 changes: 5 additions & 5 deletions docs/add-node.md
@@ -1,6 +1,6 @@
# Add nodes to cluster

New nodes that are [properly configured](configure-nodes.md#configure) are provisioned
New nodes that are [properly configured](configure-nodes.md#configure-nodes-for-write-set-replication) are provisioned
automatically. When you start a node with the address of at least one other
running node in the [`wsrep_cluster_address`](wsrep-system-index.md#wsrep_cluster_address) variable, this node automatically joins and synchronizes with the cluster.

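As a sketch, a joining node's configuration might name the running members like this (the IP addresses below are placeholders, not values from these docs):

```ini
[mysqld]
# Hypothetical addresses of nodes already running in the cluster.
# The starting node contacts any of them to join and synchronize.
wsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63
```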
@@ -11,7 +11,7 @@ running node in the [`wsrep_cluster_address`](wsrep-system-index.md#wsrep_cluste
Do not join several nodes at the same time
to avoid overhead due to large amounts of traffic when a new node joins.

Percona XtraDB Cluster uses [Percona XtraBackup](https://www.percona.com/software/mysql-database/percona-xtrabackup) for [State Snapshot Transfer](glossary.md#sst) and the `wsrep_sst_method` variable is always set to `xtrabackup-v2`.
Percona XtraDB Cluster uses [Percona XtraBackup](https://www.percona.com/software/mysql-database/percona-xtrabackup) for [State Snapshot Transfer](glossary.md#state-snapshot-transfer-sst) and the `wsrep_sst_method` variable is always set to `xtrabackup-v2`.

## Start the second node

@@ -21,7 +21,7 @@ Start the second node using the following command:
[root@pxc2 ~]# systemctl start mysql
```

After the server starts, it receives [SST](glossary.md#sst) automatically.
After the server starts, it receives [SST](glossary.md#state-snapshot-transfer-sst) automatically.

To check the status of the second node, run the following:

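The check itself is collapsed in this diff; a typical sketch queries the `wsrep_%` status variables (exact output varies by cluster):

```sql
-- Hedged example: the two values worth inspecting on a freshly joined node.
-- wsrep_local_state_comment should read 'Synced' once SST completes.
SHOW STATUS LIKE 'wsrep_local_state_comment';
-- wsrep_cluster_size should now report 2.
SHOW STATUS LIKE 'wsrep_cluster_size';
```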
@@ -59,7 +59,7 @@ added to the cluster. The cluster size is now 2 nodes; it is the primary
component, and it is fully connected and ready to receive write-set replication.

If the state of the second node is `Synced` as in the previous example, then
the node received full [SST](glossary.md#sst) is synchronized with the cluster, and you can
the node received the full [SST](glossary.md#state-snapshot-transfer-sst), is synchronized with the cluster, and you can
proceed to add the next node.

!!! note
@@ -106,5 +106,5 @@ fully connected and ready to receive write-set replication.

## Next steps

When you add all nodes to the cluster, you can [verify replication](verify-replication.md#verify) by running queries and manipulating data on nodes to see if these changes are synchronized across the cluster.
When you add all nodes to the cluster, you can [verify replication](verify-replication.md#verify-replication) by running queries and manipulating data on nodes to see if these changes are synchronized across the cluster.

4 changes: 2 additions & 2 deletions docs/apt.md
@@ -19,7 +19,7 @@ We gather [Telemetry data] in the Percona packages and Docker images.

!!! admonition "See also"

For more information, see [Enabling AppArmor](apparmor.md#apparmor).
For more information, see [Enabling AppArmor](apparmor.md#enable-apparmor).

## Install from Repository

@@ -102,6 +102,6 @@ During the installation, you are requested to provide a password for the `root`
## Next steps

After you install Percona XtraDB Cluster and stop the `mysql` service,
configure the node according to the procedure described in [Configuring Nodes for Write-Set Replication](configure-nodes.md#configure).
configure the node according to the procedure described in [Configuring Nodes for Write-Set Replication](configure-nodes.md#configure-nodes-for-write-set-replication).

[Telemetry data]: telemetry.md
6 changes: 3 additions & 3 deletions docs/bootstrap.md
@@ -1,6 +1,6 @@
# Bootstrap the first node

After you [configure all PXC nodes](configure-nodes.md#configure), initialize the cluster by
After you [configure all PXC nodes](configure-nodes.md#configure-nodes-for-write-set-replication), initialize the cluster by
bootstrapping the first node. The initial node must contain all the data that
you want to be replicated to other nodes.

@@ -18,7 +18,7 @@ When you start the node using the previous command,
it runs in bootstrap mode with `wsrep_cluster_address=gcomm://`.
This tells the node to initialize the cluster
with `wsrep_cluster_conf_id` variable set to `1`.
After you [add other nodes](add-node.md#add-node) to the cluster,
After you [add other nodes](add-node.md#add-nodes-to-cluster) to the cluster,
you can then restart this node as normal,
and it will use standard configuration again.

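The bootstrap-then-restart cycle described above can be sketched with the `systemd` units used elsewhere in these docs (a procedural sketch for a live node, not a runnable script):

```shell
# Bootstrap the first node: it starts with wsrep_cluster_address=gcomm://
systemctl start mysql@bootstrap.service

# After other nodes have joined, return the node to normal operation
# so it uses the standard configuration again.
systemctl stop mysql@bootstrap.service
systemctl start mysql
```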
@@ -58,4 +58,4 @@ it is fully connected and ready for write-set replication.

## Next steps

After initializing the cluster, you can [add other nodes](add-node.md#add-node).
After initializing the cluster, you can [add other nodes](add-node.md#add-nodes-to-cluster).
4 changes: 2 additions & 2 deletions docs/clone-sst.md
@@ -69,7 +69,7 @@ wsrep_sst_allowed_methods = xtrabackup-v2,clone

### Joiner

On the Joiner server, set the [`wsrep_sst_method`]((wsrep-system-index.md#wsrep_sst_method)) variable to `clone` in the configuration file (`my.cnf`). This setting is the only accepted value for the Clone SST process.
On the Joiner server, set the [`wsrep_sst_method`](wsrep-system-index.md#wsrep_sst_method) variable to `clone` in the configuration file (`my.cnf`). This setting is the only accepted value for the Clone SST process.

```ini
[mysqld]
@@ -119,7 +119,7 @@ State Snapshot Transfer (SST) in Galera Cluster relies on specific variables tha

| Variable | Description | Link |
|---------------------------------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------|
| `sst_idle_timeout` | Sets the maximum time (in seconds) the SST process can remain idle before being considered failed. You must define this variable in the `[sst]` section of the `my.cnf` file. | [Learn more](wsrep-system-index.md#sst_idle_timeout) |
| `sst_idle_timeout` | Sets the maximum time (in seconds) the SST process can remain idle before being considered failed. You must define this variable in the `[sst]` section of the `my.cnf` file. | |
| `wsrep_sst_donor` | Defines the preferred donor node for SST. If not specified, the cluster automatically selects a donor. | [Learn more](wsrep-system-index.md#wsrep_sst_donor) |
| `wsrep_sst_method` | Specifies the method or script used for the State Snapshot Transfer (SST) process. Only one value can be selected. | [Learn more](wsrep-system-index.md#wsrep_sst_method) |
| `wsrep_sst_receive_address` | Specifies the IP address and port on the Joiner node to receive SST data. | [Learn more](wsrep-system-index.md#wsrep_sst_receive_address) |
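In `my.cnf`, the variables above split across two sections; a hedged sketch with illustrative values (the donor name `pxc1` is an assumption):

```ini
[mysqld]
# Only 'clone' is accepted for the Clone SST process described above.
wsrep_sst_method=clone
# Optional: prefer a specific donor node (hypothetical node name).
wsrep_sst_donor=pxc1

[sst]
# Consider the SST failed after 120 idle seconds (illustrative value).
sst_idle_timeout=120
```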
6 changes: 3 additions & 3 deletions docs/configure-cluster-rhel.md
@@ -34,16 +34,16 @@ to ports 3306, 4444, 4567 and 4568.

!!! admonition "Different from previous versions"

The variable `wsrep_sst_auth` has been removed. Percona XtraDB Cluster 8.0 automatically creates the system user [`mysql.pxc.internal.session`](glossary.md#mysqlpxcinternalsession). During [SST](glossary.md#sst), the user `mysql.pxc.sst.user` and the role [`mysql.pxc.sst.role`](glossary.md#mysqlpxcsstrole) are created on the donor node.
The variable `wsrep_sst_auth` has been removed. Percona XtraDB Cluster 8.0 automatically creates the system user [`mysql.pxc.internal.session`](glossary.md#mysqlpxcinternalsession). During [SST](glossary.md#state-snapshot-transfer-sst), the user `mysql.pxc.sst.user` and the role [`mysql.pxc.sst.role`](glossary.md#mysqlpxcsstrole) are created on the donor node.

## Step 1. Installing PXC

Install Percona XtraDB Cluster on all three nodes as described in [Installing Percona XtraDB Cluster on Red Hat Enterprise Linux or CentOS](yum.md#yum).
Install Percona XtraDB Cluster on all three nodes as described in [Installing Percona XtraDB Cluster on Red Hat Enterprise Linux or CentOS](yum.md).

## Step 2. Configuring the first node

Individual nodes should be configured to be able to bootstrap the cluster.
For more information about bootstrapping the cluster, see [Bootstrapping the First Node](bootstrap.md#bootstrap).
For more information about bootstrapping the cluster, see [Bootstrapping the First Node](bootstrap.md#bootstrap-the-first-node).

1. Make sure that the configuration file `/etc/my.cnf`
on the first node (`percona1`) contains the following:
4 changes: 2 additions & 2 deletions docs/configure-cluster-ubuntu.md
@@ -34,7 +34,7 @@ to ports 3306, 4444, 4567 and 4568.

## Step 1. Install PXC

Install Percona XtraDB Cluster on all three nodes as described in [Installing Percona XtraDB Cluster on Debian or Ubuntu](apt.md#apt).
Install Percona XtraDB Cluster on all three nodes as described in [Installing Percona XtraDB Cluster on Debian or Ubuntu](apt.md).

!!! note

@@ -47,7 +47,7 @@ Install Percona XtraDB Cluster on all three nodes as described in [Installing Pe
## Step 2. Configure the first node

Individual nodes should be configured to be able to bootstrap the cluster.
For more information about bootstrapping the cluster, see [Bootstrapping the First Node](bootstrap.md#bootstrap).
For more information about bootstrapping the cluster, see [Bootstrapping the First Node](bootstrap.md#bootstrap-the-first-node).

1. Make sure that the configuration file `/etc/mysql/my.cnf`
for the first node (`pxc1`) contains the following:
16 changes: 8 additions & 8 deletions docs/configure-nodes.md
@@ -74,15 +74,15 @@ In this section, we will demonstrate how to configure a three node cluster:

The replication traffic encryption cannot be enabled on a running cluster. If
it was disabled before the cluster was bootstrapped, the cluster must be
stopped. Then set up the encryption, and bootstrap (see [`Bootstrapping the First Node`](bootstrap.md#bootstrap))
stopped. Then set up the encryption, and bootstrap (see [`Bootstrapping the First Node`](bootstrap.md#bootstrap-the-first-node))
again.

!!! admonition "See also"

More information about the security settings in Percona XtraDB Cluster
* [`Security Basics`](security-index.md#security)
* [`Encrypting PXC Traffic`](encrypt-traffic.md#encrypt-traffic)
* [`SSL Automatic Configuration`](encrypt-traffic.md#ssl-auto-conf)
* [`Security basics`](security-index.md#security-basics)
* [`Encrypting PXC traffic`](encrypt-traffic.md#encrypt-pxc-traffic)
* [`SSL automatic configuration`](encrypt-traffic.md#ssl-automatic-configuration)


## Template of the configuration file
@@ -132,7 +132,7 @@ wsrep_sst_method=xtrabackup-v2
## Next Steps: Bootstrap the first node

After you configure all your nodes, initialize Percona XtraDB Cluster by bootstrapping the first
node according to the procedure described in [Bootstrapping the First Node](bootstrap.md#bootstrap).
node according to the procedure described in [Bootstrapping the First Node](bootstrap.md#bootstrap-the-first-node).

## Essential configuration variables

@@ -161,7 +161,7 @@ the joining node can use other addresses.

No addresses are required for the initial node in the cluster.
However, it is recommended to specify them
and [properly bootstrap the first node](bootstrap.md#bootstrap).
and [properly bootstrap the first node](bootstrap.md#bootstrap-the-first-node).
This will ensure that the node is able to rejoin the cluster if it goes down in the future.

[`wsrep_node_name`](wsrep-system-index.md#wsrep_node_name)
@@ -175,12 +175,12 @@ Specify the IP address of this particular node.

[`wsrep_sst_method`](wsrep-system-index.md#wsrep_sst_method)

By default, Percona XtraDB Cluster uses Percona [XtraBackup](https://www.percona.com/software/mysql-database/percona-xtrabackup) for [State Snapshot Transfer](glossary.md#sst). `xtrabackup-v2` is the only supported option for this variable.
By default, Percona XtraDB Cluster uses Percona [XtraBackup](https://www.percona.com/software/mysql-database/percona-xtrabackup) for [State Snapshot Transfer](glossary.md#state-snapshot-transfer-sst). `xtrabackup-v2` is the only supported option for this variable.
This method requires a user for SST to be set up on the initial node.

[`pxc_strict_mode`](wsrep-system-index.md#pxc_strict_mode)

[PXC Strict Mode](strict-mode.md#pxc-strict-mode) is enabled by default and set to `ENFORCING`, which blocks the use of tech preview features and unsupported features in Percona XtraDB Cluster.
[PXC Strict Mode](strict-mode.md#percona-xtradb-cluster-strict-mode) is enabled by default and set to `ENFORCING`, which blocks the use of tech preview features and unsupported features in Percona XtraDB Cluster.

[`binlog_format`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_format)

16 changes: 8 additions & 8 deletions docs/crash-recovery.md
@@ -20,10 +20,10 @@ and the cluster size is reduced; some properties like quorum calculation or auto
increment are automatically changed. As soon as node A is started again, it
joins the cluster based on its [`wsrep_cluster_address`](wsrep-system-index.md#wsrep_cluster_address) variable in `my.cnf`.

If the writeset cache ([`gcache.size`](wsrep-provider-index.md#gcache.size)) on nodes B and/or C still
If the writeset cache ([`gcache.size`](wsrep-provider-index.md#gcachesize)) on nodes B and/or C still
has all the transactions executed while node A was down, joining is possible via
[IST](glossary.md#ist). If [IST](glossary.md#ist) is impossible due to missing transactions in donor’s
gcache, the fallback decision is made by the donor and [SST](glossary.md#sst) is started
gcache, the fallback decision is made by the donor and [SST](glossary.md#state-snapshot-transfer-sst) is started
automatically.

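How long a stopped node can still rejoin via [IST](glossary.md#ist) is bounded by the donor's writeset cache; its size is set through the provider options (the 1G figure below is only an example):

```ini
[mysqld]
# A larger gcache lets the donor keep more transaction history, so a
# returning node is more likely to rejoin via IST instead of a full SST.
wsrep_provider_options="gcache.size=1G"
```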
## Scenario 2: Two nodes are gracefully stopped
@@ -62,7 +62,7 @@ is important that a PXC node writes its last executed position to the
By comparing the seqno number in this file, you can see which is the most
advanced node (most likely the last stopped). The cluster must be bootstrapped
using this node, otherwise nodes that had a more advanced position will have to
perform the full [SST](glossary.md#sst) to join the cluster initialized from the less
perform the full [SST](glossary.md#state-snapshot-transfer-sst) to join the cluster initialized from the less
advanced one. As a result, some transactions will be lost. To bootstrap the
first node, invoke the startup script like this:

@@ -73,11 +73,11 @@ $ systemctl start mysql@bootstrap.service
!!! note

Even though you bootstrap from the most advanced node, the other
nodes have a lower sequence number. They will still have to join via the full [SST](glossary.md#sst)
nodes have a lower sequence number. They will still have to join via the full [SST](glossary.md#state-snapshot-transfer-sst)
because the *Galera Cache* is not retained on restart.

For this reason, it is recommended to stop writes to the cluster *before* its
full shutdown, so that all nodes can stop at the same position. See also [`pc.recovery`](wsrep-provider-index.md#pc.recovery).
full shutdown, so that all nodes can stop at the same position. See also [`pc.recovery`](wsrep-provider-index.md#pcrecovery).

## Scenario 4: One node disappears from the cluster

@@ -128,7 +128,7 @@ Otherwise, you end up with two clusters having different data.

!!! admonition "See also"

[Adding Nodes to Cluster](add-node.md#add-node)
[Adding Nodes to Cluster](add-node.md#add-nodes-to-cluster)

## Scenario 6: All nodes went down without a proper shutdown procedure

@@ -186,7 +186,7 @@ safe_to_bootstrap: 1
...
```

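When every node shows `safe_to_bootstrap: 0`, a common manual step is to mark the chosen (most advanced) node as safe; the datadir path below is the usual default and is an assumption:

```shell
# On the node selected to bootstrap only; never on more than one node.
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
```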
In recent Galera versions, the option [`pc.recovery`](wsrep-provider-index.md#pc.recovery) (enabled by default) saves the cluster state into a file named `gvwstate.dat` on each member node. As the name of this option suggests (pc – primary component), it
In recent Galera versions, the option [`pc.recovery`](wsrep-provider-index.md#pcrecovery) (enabled by default) saves the cluster state into a file named `gvwstate.dat` on each member node. As the name of this option suggests (pc – primary component), it
saves only a cluster being in the PRIMARY state. An example content of the file
may look like this:

Expand Down Expand Up @@ -249,7 +249,7 @@ and the other half should be able to automatically re-join using [IST](glossary.
Then, because the Galera replication model strictly enforces data consistency:
once an inconsistency is detected, a node that cannot execute a row change
statement due to a data difference performs an emergency shutdown, and the only
way to bring the nodes back to the cluster is via the full [SST](glossary.md#term-SST)
way to bring the nodes back to the cluster is via the full [SST](glossary.md#state-snapshot-transfer-sst).

**Based on material from Percona Database Performance Blog**
