PG-955 Added etcd.service sample file (#638)
PG-963 Updated Patroni config
nastena1606 authored Aug 15, 2024
1 parent 309fbb8 commit 496053c
Showing 3 changed files with 42 additions and 10 deletions.
31 changes: 27 additions & 4 deletions docs/enable-extensions.md
While setting up a high availability PostgreSQL cluster with Patroni, you will need:

- Patroni installed on every ``postgresql`` node.

- Distributed Configuration Store (DCS). Patroni supports DCSs such as etcd, ZooKeeper, and Kubernetes, though [etcd](https://etcd.io/) is the most popular one. It is available within Percona Distribution for PostgreSQL for all supported operating systems.

- [HAProxy :octicons-link-external-16:](http://www.haproxy.org/).

If you install the software from packages, all required dependencies and service unit files are included. If you [install the software from the tarballs](tarball.md), you must first enable `etcd`. See the steps in the [etcd](#etcd) section of this document.

See the configuration guidelines for [Debian and Ubuntu](solutions/ha-setup-apt.md) and [RHEL and CentOS](solutions/ha-setup-yum.md).

## etcd

The following steps apply if you [installed etcd from the tarballs](tarball.md).

1. Install the Python client for `etcd` to resolve dependency issues. Use the following command:

```{.bash data-prompt="$"}
$ /opt/percona-python3/bin/pip3 install python-etcd
```


2. Create the `etcd.service` file. This file allows `systemd` to start, stop, restart, and manage the `etcd` service. This includes handling dependencies, monitoring the service, and ensuring it runs as expected.

```ini title="/etc/systemd/system/etcd.service"
[Unit]
Description=etcd - highly-available key value store
After=network.target

[Service]
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yaml
Restart=on-failure
LimitNOFILE=65536
User=etcd

[Install]
WantedBy=multi-user.target
```
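
3. With the unit file in place, reload `systemd` so it picks up the new unit, then enable and start the service. The following is a typical sequence for a systemd-based system; adjust it to your environment:

```{.bash data-prompt="$"}
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now etcd
$ sudo systemctl status etcd
```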



## pgBadger

8 changes: 7 additions & 1 deletion docs/solutions/ha-setup-apt.md
The distributed configuration store provides a reliable way to store data that needs to be accessed by large scale distributed systems.

This document provides configuration for etcd version 3.5.x. For how to configure an etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_: [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/).

If you [installed the software from tarballs](../tarball.md), see the steps to [enable etcd](../enable-extensions.md#etcd).

The `etcd` cluster is first started on one node, and the subsequent nodes are then added to the first node using the `add` command.
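
For example, once `etcd` is running on the first node, each subsequent member can be registered before `etcd` is started on it. The member name and peer URL below are placeholders, assuming the `etcdctl` v3 API:

```{.bash data-prompt="$"}
$ etcdctl member add node2 --peer-urls=http://10.104.0.2:2380
```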

!!! note
Run the following commands on all nodes. You can do this in parallel:
max_replication_slots: 10
wal_log_hints: "on"
logging_collector: 'on'
max_wal_size: '10GB'
archive_mode: "on"
archive_timeout: 600s
archive_command: "cp -f %p /home/postgres/archived/%f"
# some desired options for 'initdb'
initdb: # Note: It needs to be a list (some options need values, others are switches)
connect_address: ${NODE_IP}:5432
data_dir: ${DATA_DIR}
bin_dir: ${PG_BIN_DIR}
pgpass: /tmp/pgpass0
authentication:
replication:
username: replicator
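A note on the archiving settings in the configuration above: the `archive_command` copies each completed WAL segment to `/home/postgres/archived/`, so that directory must exist and be writable by the PostgreSQL OS user. The sketch below uses throwaway `/tmp` paths to illustrate what the command does for a single segment; in the real configuration, PostgreSQL substitutes `%p` (the path to the segment) and `%f` (its file name):

```bash
# Illustrative only: mimic archive_command for one fake WAL segment.
mkdir -p /tmp/archived_demo                         # stands in for /home/postgres/archived
printf 'wal-data' > /tmp/000000010000000000000001   # fake WAL segment; its path plays the role of %p
# The archive command itself: cp -f %p <archive_dir>/%f
cp -f /tmp/000000010000000000000001 /tmp/archived_demo/000000010000000000000001
ls /tmp/archived_demo
```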
13 changes: 8 additions & 5 deletions docs/solutions/ha-setup-yum.md
It's not necessary to have name resolution, but it makes the whole setup more readable.
The distributed configuration store provides a reliable way to store data that needs to be accessed by large scale distributed systems. The most popular implementation of the distributed configuration store is etcd. etcd is deployed as a cluster for fault-tolerance and requires an odd number of members (n/2+1) to agree on updates to the cluster state. An etcd cluster helps establish a consensus among nodes during a failover and manages the configuration for the three PostgreSQL instances.
This document provides configuration for etcd version 3.5.x. For how to configure an etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_: [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/).
If you [installed the software from tarballs](../tarball.md), see the steps to [enable etcd](../enable-extensions.md#etcd).
The `etcd` cluster is first started on one node, and the subsequent nodes are then added to the first node using the `add` command.
Run the following commands on all nodes. You can do this in parallel:
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
postgresql:
use_pg_rewind: true
max_replication_slots: 10
wal_log_hints: "on"
logging_collector: 'on'
max_wal_size: '10GB'
archive_mode: "on"
archive_timeout: 600s
archive_command: "cp -f %p /home/postgres/archived/%f"
# some desired options for 'initdb'
initdb: # Note: It needs to be a list (some options need values, others are switches)
connect_address: ${NODE_IP}:5432
data_dir: ${DATA_DIR}
bin_dir: ${PG_BIN_DIR}
pgpass: /tmp/pgpass0
authentication:
replication:
username: replicator
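Once Patroni is running on all nodes, you can verify the cluster state with `patronictl`. The configuration file path below is an assumption; point it at the Patroni configuration file you created:

```{.bash data-prompt="$"}
$ patronictl -c /etc/patroni/patroni.yml list
```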
