
PG-955 Added etcd.service sample file (#641)
PG-963 Updated Patroni config
nastena1606 authored Aug 15, 2024
1 parent a8f0ae5 commit 7f744b0
Showing 3 changed files with 57 additions and 28 deletions.
29 changes: 26 additions & 3 deletions docs/enable-extensions.md
Expand Up @@ -14,16 +14,39 @@ While setting up a high availability PostgreSQL cluster with Patroni, you will n

- [HAProxy :octicons-link-external-16:](https://www.haproxy.org/).

If you install the software from packages, all required dependencies and service unit files are included. If you [install the software from the tarballs](tarball.md), you must first enable `etcd`. See the steps in the [etcd](#etcd) section of this document.

See the configuration guidelines for [Debian and Ubuntu](solutions/ha-setup-apt.md) and [RHEL and CentOS](solutions/ha-setup-yum.md).

!!! important
## etcd

The following steps apply if you [installed etcd from the tarballs](tarball.md).

To configure high-availability with [the software installed from the tarballs](tarball.md), install the Python client for `etcd` to resolve dependency issues. Use the following command:
1. Install the Python client for `etcd` to resolve dependency issues. Use the following command:

```{.bash data-prompt="$"}
$ /opt/percona-python3/bin/pip3 install python-etcd
```


2. Create the `etcd.service` file. This file allows `systemd` to start, stop, restart, and manage the `etcd` service. This includes handling dependencies, monitoring the service, and ensuring it runs as expected.

```ini title="/etc/systemd/system/etcd.service"
[Unit]
After=network.target
Description=etcd - highly-available key value store
[Service]
LimitNOFILE=65536
Restart=on-failure
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yaml
User=etcd
[Install]
WantedBy=multi-user.target
```
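    As a quick sanity check, note that the unit file above is plain INI, so Python's standard-library `configparser` can confirm that each directive landed in the right section. This is an illustrative sketch, not part of the official setup:

    ```python
    import configparser

    # The same unit file content as /etc/systemd/system/etcd.service above
    unit = """\
    [Unit]
    After=network.target
    Description=etcd - highly-available key value store

    [Service]
    LimitNOFILE=65536
    Restart=on-failure
    Type=notify
    ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yaml
    User=etcd

    [Install]
    WantedBy=multi-user.target
    """

    parser = configparser.ConfigParser()
    parser.optionxform = str  # systemd directive names are case-sensitive
    parser.read_string(unit)

    # Spot-check the directives systemd relies on
    assert parser["Service"]["Type"] == "notify"
    assert parser["Install"]["WantedBy"] == "multi-user.target"
    print("unit file parses cleanly")
    ```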



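After reloading `systemd` and starting the service (`systemctl daemon-reload && systemctl enable --now etcd`), you can verify that `etcd` responds on its `/health` endpoint. The sketch below uses only the Python standard library and assumes the default client port `2379` on localhost; adjust the URL for your setup:

```python
import json
import urllib.request

def etcd_is_healthy(url="http://127.0.0.1:2379/health", timeout=2):
    """Return True if the etcd /health endpoint reports health == "true"."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read()).get("health") == "true"
    except (OSError, ValueError):
        # Connection refused, timeout, or a non-JSON response all
        # count as "not healthy" for this simple check
        return False
```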
## pgBadger

Expand Down
43 changes: 23 additions & 20 deletions docs/solutions/ha-setup-apt.md
Expand Up @@ -127,6 +127,8 @@ The distributed configuration store helps establish a consensus among nodes duri

This document provides configuration for etcd version 3.5.x. For guidance on configuring an etcd cluster with earlier versions, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios :octicons-link-external-16:](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/)

If you [installed the software from tarballs](../tarball.md), see how to [enable etcd](../enable-extensions.md#etcd).

The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add` command.

!!! note
Expand Down Expand Up @@ -308,25 +310,26 @@ Run the following commands on all nodes. You can do this in parallel:
bootstrap:
# this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
dcs:
ttl: 30
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
slots:
percona_cluster_1:
type: physical
postgresql:
use_pg_rewind: true
use_slots: true
parameters:
wal_level: replica
hot_standby: "on"
wal_keep_segments: 10
max_wal_senders: 5
max_replication_slots: 10
wal_log_hints: "on"
logging_collector: 'on'
ttl: 30
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
postgresql:
use_pg_rewind: true
use_slots: true
parameters:
wal_level: replica
hot_standby: "on"
wal_keep_segments: 10
max_wal_senders: 5
max_replication_slots: 10
wal_log_hints: "on"
logging_collector: 'on'
max_wal_size: '10GB'
archive_mode: "on"
archive_timeout: 600s
archive_command: "cp -f %p /home/postgres/archived/%f"
# some desired options for 'initdb'
initdb: # Note: It needs to be a list (some options need values, others are switches)
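For reference, before running `archive_command`, PostgreSQL expands `%p` to the path of the completed WAL segment (relative to the data directory) and `%f` to its bare file name. A minimal illustration of that substitution (a hypothetical helper for clarity, not Patroni or PostgreSQL code; it ignores the `%%` escape):

```python
def expand_archive_command(template: str, wal_path: str, wal_name: str) -> str:
    # PostgreSQL replaces %p with the segment's path and %f with its
    # file name before handing the command to the shell
    return template.replace("%p", wal_path).replace("%f", wal_name)

cmd = expand_archive_command(
    "cp -f %p /home/postgres/archived/%f",
    "pg_wal/000000010000000000000001",
    "000000010000000000000001",
)
print(cmd)
```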
Expand Down Expand Up @@ -360,7 +363,7 @@ Run the following commands on all nodes. You can do this in parallel:
connect_address: ${NODE_IP}:5432
data_dir: ${DATA_DIR}
bin_dir: ${PG_BIN_DIR}
pgpass: /tmp/pgpass
pgpass: /tmp/pgpass0
authentication:
replication:
username: replicator
Expand Down
13 changes: 8 additions & 5 deletions docs/solutions/ha-setup-yum.md
Expand Up @@ -118,7 +118,9 @@ It's not necessary to have name resolution, but it makes the whole setup more re
The distributed configuration store helps establish a consensus among nodes during a failover and will manage the configuration for the three PostgreSQL instances. Although Patroni can work with other distributed consensus stores (e.g., ZooKeeper, Consul), the most commonly used one is `etcd`.
This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios :octicons-link-external-16:](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/)
This document provides configuration for etcd version 3.5.x. For guidance on configuring an etcd cluster with earlier versions, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/).
If you [installed the software from tarballs](../tarball.md), see how to [enable etcd](../enable-extensions.md#etcd).
The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add` command.
Expand Down Expand Up @@ -321,9 +323,6 @@ Run the following commands on all nodes. You can do this in parallel:
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
slots:
percona_cluster_1:
type: physical
postgresql:
use_pg_rewind: true
Expand All @@ -336,6 +335,10 @@ Run the following commands on all nodes. You can do this in parallel:
max_replication_slots: 10
wal_log_hints: "on"
logging_collector: 'on'
max_wal_size: '10GB'
archive_mode: "on"
archive_timeout: 600s
archive_command: "cp -f %p /home/postgres/archived/%f"
# some desired options for 'initdb'
initdb: # Note: It needs to be a list (some options need values, others are switches)
Expand Down Expand Up @@ -367,7 +370,7 @@ Run the following commands on all nodes. You can do this in parallel:
connect_address: ${NODE_IP}:5432
data_dir: ${DATA_DIR}
bin_dir: ${PG_BIN_DIR}
pgpass: /tmp/pgpass
pgpass: /tmp/pgpass0
authentication:
replication:
username: replicator
Expand Down
