@@ -5,7 +5,7 @@ originalFilePath: 'src/applications.md'



Applications are supposed to work with the services created by {{name.ln}}
Applications are supposed to work with the services created by EDB Postgres for Kubernetes
in the same Kubernetes cluster.

For more information on services and how to manage them, please refer to the
37 changes: 0 additions & 37 deletions product_docs/docs/postgres_for_kubernetes/1/backup.mdx
@@ -5,43 +5,6 @@ originalFilePath: 'src/backup.md'



!!! Info
This section covers **physical backups** in PostgreSQL.
While PostgreSQL also supports logical backups using the `pg_dump` utility,
these are **not suitable for business continuity** and are **not managed** by
{{name.ln}}. If you still wish to use `pg_dump`, refer to the
[*Troubleshooting / Emergency backup* section](troubleshooting.md#emergency-backup)
for guidance.

!!! Important
Starting with version 1.26, native backup and recovery capabilities are
being **progressively phased out** of the core operator and moved to official
CNP-I plugins. This transition aligns with {{name.ln}}' shift towards a
**backup-agnostic architecture**, enabled by its extensible
interface—**CNP-I**—which standardizes the management of **WAL archiving**,
**physical base backups**, and corresponding **recovery processes**.

{{name.ln}} currently supports **physical backups of PostgreSQL clusters** in
two main ways:

- **Via [CNPG-I](https://github.com/cloudnative-pg/cnpg-i/) plugins**: the
{{name.ln}} Community officially supports the [**Barman Cloud Plugin**](https://cloudnative-pg.io/plugin-barman-cloud/)
for integration with object storage services.

- **Natively**, with support for:

- [Object storage via Barman Cloud](backup_barmanobjectstore.md)
*(although deprecated from 1.26 in favor of the Barman Cloud Plugin)*
- [Kubernetes Volume Snapshots](backup_volumesnapshot.md), if
supported by the underlying storage class

Before selecting a backup strategy with {{name.ln}}, it's important to
familiarize yourself with the foundational concepts covered in the ["Main Concepts"](#main-concepts)
section. These include WAL archiving, hot and cold backups, performing backups
from a standby, and more.

## Main Concepts

PostgreSQL natively provides first class backup and recovery capabilities based
on file system level (physical) copy. These have been successfully used for
more than 15 years in mission critical production databases, helping
@@ -5,15 +5,7 @@ originalFilePath: 'src/appendixes/backup_barmanobjectstore.md'



!!! Warning
As of {{name.ln}} 1.26, **native Barman Cloud support is deprecated** in
favor of the **Barman Cloud Plugin**. This page has been moved to the appendix
for reference purposes. While the native integration remains functional for
now, we strongly recommend beginning a gradual migration to the plugin-based
interface after appropriate testing. For guidance, see
[Migrating from Built-in {{name.ln}} Backup](https://cloudnative-pg.io/plugin-barman-cloud/docs/migration/).

{{name.ln}} natively supports **online/hot backup** of PostgreSQL
EDB Postgres for Kubernetes natively supports **online/hot backup** of PostgreSQL
clusters through continuous physical backup and WAL archiving on an object
store. This means that the database is always up (no downtime required)
and that Point In Time Recovery is available.
@@ -38,6 +30,15 @@ as it is composed of a community PostgreSQL image and the latest
in your system to take advantage of the improvements introduced in
Barman cloud (as well as improve the security aspects of your cluster).

!!! Warning "Changes in Barman Cloud 3.16+ and Bucket Creation"
Starting with Barman Cloud 3.16, most Barman Cloud commands no longer
automatically create the target bucket, assuming it already exists. Only the
`barman-cloud-check-wal-archive` command creates the bucket now. If that command
is not the first operation run against an empty bucket, EDB Postgres for Kubernetes throws an
error. As a result, to ensure reliable, future-proof operations and avoid
potential issues, we strongly recommend that you create and configure your
object store bucket *before* creating a `Cluster` resource that references it.

A backup is performed from a primary or a designated primary instance in a
`Cluster` (please refer to
[replica clusters](replica_cluster.md)
@@ -5,6 +5,14 @@ originalFilePath: 'src/appendixes/backup_volumesnapshot.md'



!!! Warning
As noted in the [backup document](backup.md), a cold snapshot explicitly
set to target the primary will result in the primary being fenced for
the duration of the backup, rendering the cluster read-only during that time.
For safety, in a cluster already containing fenced instances, a cold
snapshot is rejected.
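
As a concrete illustration of the scenario above, a cold snapshot that
explicitly targets the primary could be requested with a `Backup` like the
following sketch; the backup and cluster names are placeholders, and the
`online`/`target` fields are assumed to be available at the `Backup` level in
your operator version.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: cold-snapshot-primary
spec:
  cluster:
    name: cluster-example
  method: volumeSnapshot
  # A cold (offline) snapshot of the primary: the primary is fenced for the
  # duration of the backup, leaving the cluster read-only in the meantime.
  online: false
  target: primary
```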


!!! Important
Please refer to the official Kubernetes documentation for a list of all
the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
@@ -257,12 +265,12 @@ for details on this standard behavior.

## Backup Volume Snapshot Deadlines

{{name.ln}} supports backups using the volume snapshot method. In some
EDB Postgres for Kubernetes supports backups using the volume snapshot method. In some
environments, volume snapshots may encounter temporary issues that can be
retried.

The `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation defines how long
{{name.ln}} should continue retrying recoverable errors before marking the
EDB Postgres for Kubernetes should continue retrying recoverable errors before marking the
backup as failed.

You can add the `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation to both
@@ -276,14 +284,14 @@ If not specified, the default retry deadline is **10 minutes**.
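
To make this concrete, a `Backup` resource carrying the annotation might look
like the following sketch; the backup and cluster names are placeholders, and
the annotation value is assumed to be expressed in minutes.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: backup-example
  annotations:
    # Keep retrying recoverable snapshot errors for up to 20 minutes
    # (value assumed to be interpreted as minutes) before failing the backup.
    backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "20"
spec:
  cluster:
    name: cluster-example
  method: volumeSnapshot
```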

When a retryable error occurs during a volume snapshot operation:

1. {{name.ln}} records the time of the first error.
1. EDB Postgres for Kubernetes records the time of the first error.
2. The system retries the operation every **10 seconds**.
3. If the error persists beyond the specified deadline (or the default 10
minutes), the backup is marked as **failed**.

### Retryable Errors

{{name.ln}} treats the following types of errors as retryable:
EDB Postgres for Kubernetes treats the following types of errors as retryable:

- **Server timeout errors** (HTTP 408, 429, 500, 502, 503, 504)
- **Conflicts** (optimistic locking errors)
@@ -5,7 +5,7 @@ originalFilePath: 'src/benchmarking.md'



The CNP kubectl plugin provides an easy way for benchmarking a PostgreSQL deployment in Kubernetes using {{name.ln}}.
The CNP kubectl plugin provides an easy way to benchmark a PostgreSQL deployment in Kubernetes using EDB Postgres for Kubernetes.

Benchmarking is focused on two aspects:

10 changes: 5 additions & 5 deletions product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -626,7 +626,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```

The following manifest creates a new PostgreSQL 17.5 cluster,
The following manifest creates a new PostgreSQL 18.0 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -641,7 +641,7 @@ metadata:
name: target-db
spec:
instances: 3
imageName: quay.io/enterprisedb/postgresql:17.5
imageName: quay.io/enterprisedb/postgresql:18.0-system-trixie

bootstrap:
pg_basebackup:
@@ -661,7 +661,7 @@ spec:
```

All the requirements must be met for the clone operation to work, including
the same PostgreSQL version (in our case 17.5).
the same PostgreSQL version (in our case 18.0).

#### TLS certificate authentication

@@ -676,7 +676,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.

The manifest defines a new PostgreSQL 17.5 cluster called `cluster-clone-tls`,
The manifest defines a new PostgreSQL 18.0 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -691,7 +691,7 @@ metadata:
name: cluster-clone-tls
spec:
instances: 3
imageName: quay.io/enterprisedb/postgresql:17.5
imageName: quay.io/enterprisedb/postgresql:18.0-system-trixie

bootstrap:
pg_basebackup:
@@ -5,7 +5,7 @@ originalFilePath: 'src/certificates.md'



{{name.ln}} was designed to natively support TLS certificates.
EDB Postgres for Kubernetes was designed to natively support TLS certificates.
To set up a cluster, the operator requires:

- A server certification authority (CA) certificate
@@ -25,7 +25,7 @@ Key features of Cilium:
To install Cilium in your environment, follow the instructions in the documentation:
<https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/>

## Pod-to-Pod Network Security with {{name.ln}} and Cilium
## Pod-to-Pod Network Security with EDB Postgres for Kubernetes and Cilium

Kubernetes’ default behavior is to allow traffic between any two Pods in the cluster network.
Cilium provides advanced L3/L4 network security using the `CiliumNetworkPolicy` resource. This
@@ -34,7 +34,7 @@ especially useful for securing communication between application workloads and backend
services.

In the following examples, we demonstrate how Cilium can be used to secure a
{{name.ln}} PostgreSQL instance by restricting ingress traffic to only
EDB Postgres for Kubernetes PostgreSQL instance by restricting ingress traffic to only
authorized Pods.

!!! Important
@@ -68,7 +68,7 @@ spec:
ingress: []
```

## Making Cilium Network Policies work with {{name.ln}} Operator
## Making Cilium Network Policies work with EDB Postgres for Kubernetes Operator

When working with a network policy, Cilium or not, the first step is to make
sure that the operator can reach the Pods in the target namespace. This is
@@ -86,7 +86,7 @@ metadata:
name: postgresql-operator-operator-access
namespace: default
spec:
description: "Allow {{name.ln}} operator access to any pod in the target namespace"
description: "Allow EDB Postgres for Kubernetes operator access to any pod in the target namespace"
endpointSelector: {}
ingress:
- fromEndpoints:
@@ -117,7 +117,7 @@ metadata:
name: cnp-cluster-internal-access
namespace: default
spec:
description: "Allow {{name.ln}} operator access and connection between pods in the same namespace"
description: "Allow EDB Postgres for Kubernetes operator access and connection between pods in the same namespace"
endpointSelector: {}
ingress:
- fromEndpoints:
@@ -168,7 +168,7 @@ spec:
```

This `CiliumNetworkPolicy` ensures that only Pods labeled with `role=backend`
can access the PostgreSQL instance managed by {{name.ln}} via port 5432 in
can access the PostgreSQL instance managed by EDB Postgres for Kubernetes via port 5432 in
the `default` namespace.

In the following policy, we demonstrate how to allow ingress traffic to port
@@ -24,10 +24,10 @@ ESO supports a wide range of backends, including:
…and many more. For a full and up-to-date list of supported providers, refer to
the [official External Secrets documentation](https://external-secrets.io/latest/).

## Integration with PostgreSQL and {{name.ln}}
## Integration with PostgreSQL and EDB Postgres for Kubernetes

When it comes to PostgreSQL databases, External Secrets integrates seamlessly
with {{name.ln}} in two major use cases:
with EDB Postgres for Kubernetes in two major use cases:

- **Automated password management:** ESO can handle the automatic generation
and rotation of database user passwords stored in Kubernetes `Secret`
@@ -50,10 +50,10 @@ every 24 hours in the `cluster-example` Postgres cluster from the
Before proceeding, ensure that the `cluster-example` Postgres cluster is up
and running in your environment.

By default, {{name.ln}} generates and manages a Kubernetes `Secret` named
By default, EDB Postgres for Kubernetes generates and manages a Kubernetes `Secret` named
`cluster-example-app`, which contains the credentials for the `app` user in the
`cluster-example` cluster. You can read more about this in the
[“Connecting from an application” section](../applications.mdx#secrets).
[“Connecting from an application” section](../applications.md#secrets).

With External Secrets, the goal is to:

@@ -94,7 +94,7 @@ uses a `Merge` policy to update only the specified fields (`password`, `pgpass`,
`jdbc-uri` and `uri`) in the `cluster-example-app` secret.

```yaml
apiVersion: external-secrets.io/v1beta1
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: cluster-example-app-secret
@@ -120,7 +120,7 @@ spec:
name: pg-password-generator
```
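
The `pg-password-generator` referenced above is an External Secrets `Password`
generator. If it isn't already defined in your environment, a minimal sketch
could look like this; the policy values are illustrative assumptions, not
requirements.

```yaml
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
metadata:
  name: pg-password-generator
  namespace: default
spec:
  # Illustrative password policy: adjust the length and character classes
  # to match your own security requirements.
  length: 32
  digits: 5
  symbols: 0
  noUpper: false
  allowRepeat: true
```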

The label `k8s.enterprisedb.io/reload: "true"` ensures that {{name.ln}} triggers a reload
The label `k8s.enterprisedb.io/reload: "true"` ensures that EDB Postgres for Kubernetes triggers a reload
of the user password in the database when the secret changes.

### Verifying the Configuration
@@ -145,7 +145,7 @@ rotation is working correctly.
### There's More

While the example above focuses on the default `cluster-example-app` secret
created by {{name.ln}}, the same approach can be extended to manage any
created by EDB Postgres for Kubernetes, the same approach can be extended to any
custom secrets or PostgreSQL users you create, so that their passwords are
rotated regularly.

@@ -158,15 +158,15 @@ actively maintained open source alternative is available: [OpenBao](https://open
OpenBao supports all the same interfaces as HashiCorp Vault, making it a true
drop-in replacement.

In this example, we'll demonstrate how to integrate {{name.ln}},
In this example, we'll demonstrate how to integrate EDB Postgres for Kubernetes,
External Secrets Operator, and HashiCorp Vault to automatically rotate
a PostgreSQL password and securely store it in Vault.

!!! Important
This example assumes that HashiCorp Vault is already installed and properly
configured in your environment, and that your team has the necessary expertise
to operate it. There are various ways to deploy Vault, and detailing them is
outside the scope of {{name.ln}}. While it's possible to run Vault inside
outside the scope of EDB Postgres for Kubernetes. While it's possible to run Vault inside
Kubernetes, it is more commonly deployed externally. For detailed instructions,
consult the [HashiCorp Vault documentation](https://www.vaultproject.io/docs).

@@ -182,7 +182,7 @@ named `vault-token` exists in the same namespace, containing the token used to
authenticate with Vault.

```yaml
apiVersion: external-secrets.io/v1beta1
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
name: vault-backend