Commit 5673f82

Split setup into individual components pages
1 parent 8f3a3e8 commit 5673f82

9 files changed: +662, -8 lines changed

docs/solutions/ha-etcd.md

+142
@@ -0,0 +1,142 @@
# Configure etcd distributed store

The distributed configuration store provides a reliable way to store data that needs to be accessed by large-scale distributed systems. The most popular implementation of the distributed configuration store is etcd. An etcd cluster is deployed with an odd number of members and requires a majority of them, (n/2)+1, to agree on updates to the cluster state. The etcd cluster helps establish a consensus among nodes during a failover and manages the configuration for the three PostgreSQL instances.

This document provides configuration for etcd version 3.5.x. To configure an etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_: [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/).

If you [installed the software from tarballs](../tarball.md), check how you [enable etcd](../enable-extensions.md#etcd).

The `etcd` cluster is first started on one node, and then the subsequent nodes are added to it using the `member add` command.

!!! note

    Users with a deeper understanding of how etcd works can configure and start all etcd nodes at the same time and bootstrap the cluster using one of the following methods:

    * Static - in the case when the IP addresses of the cluster nodes are known beforehand
    * Discovery service - for cases when the IP addresses of the cluster nodes are not known ahead of time

    See the [How to configure etcd nodes simultaneously](../how-to.md#how-to-configure-etcd-nodes-simultaneously) section for details.

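For illustration only, here is a minimal sketch of the static method for `node1`, assuming the same node names and IP addresses used throughout this guide. Every node lists the complete member set in `initial-cluster` and starts with `initial-cluster-state: new`; the other nodes would differ only in `name` and their own URLs. See the how-to linked above for the full procedure.

```yaml title="/etc/etcd/etcd.conf.yaml (static bootstrap sketch)"
name: 'node1'
initial-cluster-token: PostgreSQL_HA_Cluster_1
# With static bootstrapping, all members are known up front
# and every node starts with the 'new' cluster state.
initial-cluster-state: new
initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
data-dir: /var/lib/etcd
initial-advertise-peer-urls: http://10.104.0.1:2380
listen-peer-urls: http://10.104.0.1:2380
advertise-client-urls: http://10.104.0.1:2379
listen-client-urls: http://10.104.0.1:2379
```
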
### Configure `node1`

1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own. Replace the node name and IP address with the actual name and IP address of your node.

    ```yaml title="/etc/etcd/etcd.conf.yaml"
    name: 'node1'
    initial-cluster-token: PostgreSQL_HA_Cluster_1
    initial-cluster-state: new
    initial-cluster: node1=http://10.104.0.1:2380
    data-dir: /var/lib/etcd
    initial-advertise-peer-urls: http://10.104.0.1:2380
    listen-peer-urls: http://10.104.0.1:2380
    advertise-client-urls: http://10.104.0.1:2379
    listen-client-urls: http://10.104.0.1:2379
    ```

2. Enable and start the `etcd` service to apply the changes on `node1`. The `--now` flag starts the service right away, and the `status` call verifies that it is running:

    ```{.bash data-prompt="$"}
    $ sudo systemctl enable --now etcd
    $ sudo systemctl status etcd
    ```

3. Check the etcd cluster members on `node1`:

    ```{.bash data-prompt="$"}
    $ sudo etcdctl member list --write-out=table --endpoints=http://10.104.0.1:2379
    ```

    Sample output:

    ```{.text .no-copy}
    +------------------+---------+-------+----------------------------+----------------------------+------------+
    |        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
    +------------------+---------+-------+----------------------------+----------------------------+------------+
    | 9d2e318af9306c67 | started | node1 |   http://10.104.0.1:2380   |   http://10.104.0.1:2379   |   false    |
    +------------------+---------+-------+----------------------------+----------------------------+------------+
    ```

4. Add `node2` to the cluster. Run the following command on `node1`:

    ```{.bash data-prompt="$"}
    $ sudo etcdctl member add node2 --peer-urls=http://10.104.0.2:2380
    ```

    The output lists the configuration values that the new member must use. Note that `initial-cluster-state` for `node2` must be `existing`.

    ??? example "Sample output"

        ```{.text .no-copy}
        Added member named node2 with ID 10042578c504d052 to cluster

        ETCD_NAME="node2"
        ETCD_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380"
        ETCD_INITIAL_CLUSTER_STATE="existing"
        ```

### Configure `node2`

1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.

    ```yaml title="/etc/etcd/etcd.conf.yaml"
    name: 'node2'
    initial-cluster-token: PostgreSQL_HA_Cluster_1
    initial-cluster-state: existing
    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380
    data-dir: /var/lib/etcd
    initial-advertise-peer-urls: http://10.104.0.2:2380
    listen-peer-urls: http://10.104.0.2:2380
    advertise-client-urls: http://10.104.0.2:2379
    listen-client-urls: http://10.104.0.2:2379
    ```

2. Enable and start the `etcd` service to apply the changes on `node2`:

    ```{.bash data-prompt="$"}
    $ sudo systemctl enable --now etcd
    $ sudo systemctl status etcd
    ```
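
Optionally, repeat the member list check from `node1` to confirm that `node2` has joined and started. A member that has been added but whose service has not yet come up is reported as `unstarted`:

```{.bash data-prompt="$"}
$ sudo etcdctl member list --write-out=table --endpoints=http://10.104.0.1:2379
```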
### Configure `node3`

1. Add `node3` to the cluster. **Run the following command on `node1`**:

    ```{.bash data-prompt="$"}
    $ sudo etcdctl member add node3 --peer-urls=http://10.104.0.3:2380
    ```

2. On `node3`, create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.

    ```yaml title="/etc/etcd/etcd.conf.yaml"
    name: 'node3'
    initial-cluster-token: PostgreSQL_HA_Cluster_1
    initial-cluster-state: existing
    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
    data-dir: /var/lib/etcd
    initial-advertise-peer-urls: http://10.104.0.3:2380
    listen-peer-urls: http://10.104.0.3:2380
    advertise-client-urls: http://10.104.0.3:2379
    listen-client-urls: http://10.104.0.3:2379
    ```

3. Enable and start the `etcd` service to apply the changes:

    ```{.bash data-prompt="$"}
    $ sudo systemctl enable --now etcd
    $ sudo systemctl status etcd
    ```

4. Check the etcd cluster members:

    ```{.bash data-prompt="$"}
    $ sudo etcdctl member list
    ```

    ??? example "Sample output"

        ```{.text .no-copy}
        2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false
        8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false
        c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true
        ```
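
As an additional check, you can verify that every member is reachable and healthy. This is a sketch that assumes the client URLs configured above:

```{.bash data-prompt="$"}
$ sudo etcdctl endpoint health --endpoints=http://10.104.0.1:2379,http://10.104.0.2:2379,http://10.104.0.3:2379
```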

docs/solutions/ha-init-setup.md

+76
@@ -0,0 +1,76 @@
# Initial setup for high availability

This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni. It relies on the provided [architecture](ha-architecture.md) for high availability.

## Preconditions

1. This is an example deployment where etcd runs on the same host machines as Patroni and PostgreSQL, and there is a single dedicated HAProxy host. Alternatively, etcd can run on a different set of nodes.

    If etcd is deployed on the same host machine as Patroni and PostgreSQL, a separate disk system for etcd is recommended for performance reasons.

2. For this setup, we use the nodes that have the following IP addresses:

    | Node name    | Public IP address | Internal IP address |
    |--------------|-------------------|---------------------|
    | node1        | 157.230.42.174    | 10.104.0.1          |
    | node2        | 68.183.177.183    | 10.104.0.2          |
    | node3        | 165.22.62.167     | 10.104.0.3          |
    | HAProxy-demo | 134.209.111.138   | 10.104.0.6          |

!!! note

    We recommend not exposing the hosts where Patroni, etcd, and PostgreSQL run to public networks, due to security risks. Use firewalls, virtual networks, subnets, or the like to protect the database hosts from any kind of attack.

## Initial setup

Name resolution is not strictly necessary, but it makes the whole setup more readable and less error-prone. Here, instead of configuring a DNS, we use local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication.

1. Set the hostname for the nodes. Run the following command on each node, changing the node name to `node1`, `node2`, and `node3` respectively:

    ```{.bash data-prompt="$"}
    $ sudo hostnamectl set-hostname node1
    ```
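
    You can verify the change with `hostnamectl`; the output should report the static hostname you have just set:

    ```{.bash data-prompt="$"}
    $ hostnamectl status
    ```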

2. Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes:

    === "node1"

        ```text hl_lines="3 4"
        # Cluster IP and names
        10.104.0.1 node1
        10.104.0.2 node2
        10.104.0.3 node3
        ```

    === "node2"

        ```text hl_lines="2 4"
        # Cluster IP and names
        10.104.0.1 node1
        10.104.0.2 node2
        10.104.0.3 node3
        ```

    === "node3"

        ```text hl_lines="2 3"
        # Cluster IP and names
        10.104.0.1 node1
        10.104.0.2 node2
        10.104.0.3 node3
        ```

    === "HAProxy-demo"

        The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file:

        ```text hl_lines="3 4 5"
        # Cluster IP and names
        10.104.0.6 HAProxy-demo
        10.104.0.1 node1
        10.104.0.2 node2
        10.104.0.3 node3
        ```
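
    To confirm that name resolution works, you can query the names with `getent`, which reads `/etc/hosts`; each name should resolve to the internal IP address you added above:

    ```{.bash data-prompt="$"}
    $ getent hosts node1 node2 node3
    ```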

docs/solutions/ha-install-software.md

+106
@@ -0,0 +1,106 @@
# Install the software

## Install Percona Distribution for PostgreSQL

Run the following commands as root or with `sudo` privileges.

=== "On Debian / Ubuntu"

    1. Disable the upstream `postgresql-{{pgversion}}` package.

    2. Install the `percona-release` repository management tool:

        --8<-- "percona-release-apt.md"

    3. Enable the repository:

        ```{.bash data-prompt="$"}
        $ sudo percona-release setup ppg{{pgversion}}
        ```

    4. Install the Percona Distribution for PostgreSQL package:

        ```{.bash data-prompt="$"}
        $ sudo apt install percona-postgresql-{{pgversion}}
        ```
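
    Optionally, verify the installation by querying the package status, for example with `dpkg`:

    ```{.bash data-prompt="$"}
    $ dpkg -s percona-postgresql-{{pgversion}}
    ```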

=== "On RHEL and derivatives"

    1. Check the [platform specific notes](../yum.md#for-percona-distribution-for-postgresql-packages).

    2. Install the `percona-release` repository management tool:

        --8<-- "percona-release-yum.md"

    3. Enable the repository:

        ```{.bash data-prompt="$"}
        $ sudo percona-release setup ppg{{pgversion}}
        ```

    4. Install the Percona Distribution for PostgreSQL package:

        ```{.bash data-prompt="$"}
        $ sudo yum install percona-postgresql{{pgversion}}-server
        ```

!!! important

    **Don't** initialize the cluster and start the `postgresql` service. The cluster initialization and setup are handled by Patroni during the bootstrapping stage.
## Install Patroni, etcd, pgBackRest

=== "On Debian / Ubuntu"

    1. Install some Python and auxiliary packages to help with Patroni and etcd:

        ```{.bash data-prompt="$"}
        $ sudo apt install python3-pip python3-dev binutils
        ```

    2. Install the etcd, Patroni, and pgBackRest packages:

        ```{.bash data-prompt="$"}
        $ sudo apt install percona-patroni \
          etcd etcd-server etcd-client \
          percona-pgbackrest
        ```

    3. Stop and disable all installed services:

        ```{.bash data-prompt="$"}
        $ sudo systemctl stop {etcd,patroni,postgresql}
        $ sudo systemctl disable {etcd,patroni,postgresql}
        ```

    4. Even though Patroni can use an existing PostgreSQL installation, remove the data directory to force it to initialize a new PostgreSQL cluster instance:

        ```{.bash data-prompt="$"}
        $ sudo rm -rf /var/lib/postgresql/{{pgversion}}/main
        ```

=== "On RHEL and derivatives"

    1. Install some Python and auxiliary packages to help with Patroni and etcd:

        ```{.bash data-prompt="$"}
        $ sudo yum install python3-pip python3-devel binutils
        ```

    2. Install the etcd, Patroni, and pgBackRest packages. Check the [platform specific notes for Patroni](../yum.md#for-percona-patroni-package):

        ```{.bash data-prompt="$"}
        $ sudo yum install percona-patroni \
          etcd python3-python-etcd \
          percona-pgbackrest
        ```

    3. Stop and disable all installed services:

        ```{.bash data-prompt="$"}
        $ sudo systemctl stop {etcd,patroni,postgresql}
        $ sudo systemctl disable {etcd,patroni,postgresql}
        ```
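
Before you proceed to configuring Patroni, you can double-check that the services are stopped and disabled. The unit names below follow the commands above; the expected output is `disabled` and `inactive` for each service:

```{.bash data-prompt="$"}
$ sudo systemctl is-enabled etcd patroni postgresql
$ sudo systemctl is-active etcd patroni postgresql
```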
