Service to back up and/or restore a PostgreSQL database to/from S3
- Create an S3 bucket to hold your backups
- Turn versioning on for that bucket (see the AWS CLI sketch after this list)
- Supply all appropriate environment variables
- Run a backup and check your bucket for that backup
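For example, the bucket-creation and versioning steps above can be done with the AWS CLI; the bucket name `database-backups` here is only an example:

```sh
# Create the backup bucket (example name; add --region if needed)
aws s3 mb s3://database-backups

# Turn on versioning so earlier backup files are retained as older versions
aws s3api put-bucket-versioning \
  --bucket database-backups \
  --versioning-configuration Status=Enabled
```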
Environment variables:

- `MODE`: Valid values: `backup`, `restore`
- `DB_HOST`: hostname of the database server
- `DB_NAME`: name of the database
- `DB_OPTIONS`: optional arguments to supply to the backup or restore commands
- `DB_ROOTPASSWORD`: password for the `DB_ROOTUSER`
- `DB_ROOTUSER`: database administrative user, typically "postgres" for PostgreSQL databases
- `DB_USERPASSWORD`: password for the `DB_USER`
- `DB_USER`: user that accesses the database (PostgreSQL "role")
- `AWS_ACCESS_KEY_ID`: used for S3 interactions
- `AWS_SECRET_ACCESS_KEY`: used for S3 interactions
- `AWS_ACCESS_KEY`: used for S3 interactions (deprecated)
- `AWS_SECRET_KEY`: used for S3 interactions (deprecated)
- `S3_BUCKET`: e.g., `s3://database-backups` (note: no trailing slash)
It's recommended that your S3 bucket have versioning turned on. Each backup creates a file of the form `DB_NAME.sql.gz`. If versioning is not turned on, the previous backup file will be replaced with the new one, resulting in a single level of backups.
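With versioning on, earlier backups remain available as older object versions. As an illustrative example (the bucket and key names are placeholders), you can list them with the AWS CLI:

```sh
# List all stored versions of a backup object in a versioned bucket
aws s3api list-object-versions \
  --bucket database-backups \
  --prefix world.sql.gz
```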
- `B2_BUCKET`: (optional) Name of the Backblaze B2 bucket, e.g., `database-backups`. When `B2_BUCKET` is defined, the backup file is copied to the B2 bucket in addition to the S3 bucket.
It's recommended that your B2 bucket have versioning and encryption turned on. Each backup creates a file of the form `DB_NAME.sql.gz`. If versioning is not turned on, the previous backup file will be replaced with the new one, resulting in a single level of backups. Encryption may offer an additional level of protection from attackers. It also has the side effect of preventing downloads of the file via the Backblaze GUI (you'll have to use the `b2` command or the Backblaze API).
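As a rough sketch of retrieving a backup with the `b2` CLI (subcommand names vary between b2 CLI versions, and the key, bucket, and file names here are only examples):

```sh
# Authorize the CLI with your application key (older b2 CLI syntax)
b2 authorize-account <B2_APPLICATION_KEY_ID> <B2_APPLICATION_KEY>

# Download a backup file from the B2 bucket by name
b2 download-file-by-name database-backups world.sql.gz ./world.sql.gz
```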
- `B2_APPLICATION_KEY_ID`: (optional; required if `B2_BUCKET` is defined) Backblaze application key ID
- `B2_APPLICATION_KEY`: (optional; required if `B2_BUCKET` is defined) Backblaze application key secret
- `B2_HOST`: (optional; required if `B2_BUCKET` is defined) Backblaze B2 bucket's endpoint
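A minimal sketch of a backup run with `docker run`, assuming the variables above are all the container needs; every value below is a placeholder to replace with your own:

```sh
# Back up DB_NAME to S3_BUCKET; set MODE=restore to restore instead.
docker run --rm \
  -e MODE=backup \
  -e DB_HOST=db.example.com \
  -e DB_NAME=world \
  -e DB_USER=app \
  -e DB_USERPASSWORD='app-password' \
  -e DB_ROOTUSER=postgres \
  -e DB_ROOTPASSWORD='root-password' \
  -e AWS_ACCESS_KEY_ID='<access key id>' \
  -e AWS_SECRET_ACCESS_KEY='<secret access key>' \
  -e S3_BUCKET=s3://database-backups \
  silintl/postgresql-backup-restore
```

To also copy the backup to Backblaze B2, add `-e` flags for `B2_BUCKET`, `B2_APPLICATION_KEY_ID`, `B2_APPLICATION_KEY`, and `B2_HOST`.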
This image is built automatically on Docker Hub as `silintl/postgresql-backup-restore`.
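For example, to pull the published image:

```sh
docker pull silintl/postgresql-backup-restore
```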
You'll need Docker, Docker Compose, and Make.
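To check that the prerequisites are available (exact output varies by version):

```sh
docker --version
docker compose version   # or: docker-compose --version for Compose v1
make --version
```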
- Copy `local.env.dist` to `local.env`.
- Edit `local.env` to supply values for the variables.
- Ensure you have a `gz` dump in your S3 bucket to be used for testing. A test database is provided as part of this project in the `test` folder. You can copy it to S3 as follows:
```sh
aws s3 cp test/world.sql.gz ${S3_BUCKET}/world.sql.gz
```
Then run:

```sh
make db                        # creates the Postgres DB server
make restore                   # restores the DB dump file
docker ps -a                   # get the Container ID of the exited restore container
docker logs <containerID>      # review the restoration log messages
make backup                    # create a new DB dump file
docker ps -a                   # get the Container ID of the exited backup container
docker logs <containerID>      # review the backup log messages
make restore                   # restore the DB dump file from the new backup
docker ps -a                   # get the Container ID of the exited restore container
docker logs <containerID>      # review the restoration log messages
make clean                     # remove containers and network
docker volume ls               # find the volume ID of the Postgres data container
docker volume rm <volumeID>    # remove the data volume
docker images                  # list existing images
docker image rm <imageID ...>  # remove images no longer needed
```