- Overview
- Features
- Prerequisites
- Quick Start
- Installation
- Configuration
- Usage
- Backup Process
- Restore Process
- Cloud Storage Integration
- Monitoring & Logs
- Troubleshooting
- Security Best Practices
- FAQ
This automated backup system provides enterprise-grade database protection for La Tanda's PostgreSQL database. It includes automated daily backups, compression, intelligent rotation, cloud storage integration, and comprehensive logging.
┌─────────────────────────────────────────────────────────────┐
│ Backup System Flow │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. Cron Trigger (Daily 2:00 AM) │
│ ↓ │
│ 2. backup-database.sh │
│ ↓ │
│ 3. PostgreSQL pg_dump │
│ ↓ │
│ 4. Gzip Compression │
│ ↓ │
│ 5. Save to /var/backups/latanda/ │
│ ↓ │
│ 6. Upload to Cloud (Optional) │
│ ├─→ AWS S3 │
│ └─→ Google Cloud Storage │
│ ↓ │
│ 7. Rotate Old Backups (30 days) │
│ ↓ │
│ 8. Send Notification │
│ ├─→ Email │
│ └─→ Logs │
│ │
└─────────────────────────────────────────────────────────────┘
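The dump-compress-save core of this flow (steps 3–5) is a short shell pipeline. The sketch below is illustrative only: `dump_stand_in` and the temp directory are stand-ins for the real `pg_dump` call and `/var/backups/latanda`, so the pipeline shape is runnable anywhere:

```shell
# Sketch of steps 3-5: dump, compress, save. `dump_stand_in` and the temp
# directory are assumptions standing in for pg_dump and /var/backups/latanda.
BACKUP_DIR=$(mktemp -d)
BACKUP_FILE="$BACKUP_DIR/latanda_backup_$(date +%Y%m%d_%H%M%S).sql.gz"

dump_stand_in() { printf 'CREATE TABLE users (id integer);\n'; }

# Stream the dump straight into gzip so no uncompressed copy hits disk
dump_stand_in | gzip > "$BACKUP_FILE"

# Verify the archive is a valid gzip file before trusting it
gunzip -t "$BACKUP_FILE" && echo "backup ok"
```

Streaming the dump through gzip (rather than dumping first and compressing after) halves the peak disk usage during a backup.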
✅ Automated Daily Backups - Runs automatically via cron at 2:00 AM
✅ Gzip Compression - Reduces backup size by 80-90%
✅ 30-Day Retention - Automatically deletes backups older than 30 days
✅ Cloud Storage - Optional upload to AWS S3 or Google Cloud Storage
✅ Email Notifications - Success/failure alerts
✅ Comprehensive Logging - Detailed logs for audit and debugging
✅ Error Handling - Automatic cleanup on failure
✅ Easy Restore - Simple one-command restore process
✅ Security - Password protection and secure credential management
- PostgreSQL Client Tools

  # Ubuntu/Debian
  sudo apt-get update
  sudo apt-get install postgresql-client

  # CentOS/RHEL
  sudo yum install postgresql

  # macOS
  brew install postgresql

- Gzip (usually pre-installed)

  # Verify installation
  gzip --version

- Bash (version 4.0 or higher)

  bash --version

- AWS CLI (for S3 upload)

  # Ubuntu/Debian
  sudo apt-get install awscli

  # macOS
  brew install awscli

  # Or use pip
  pip install awscli

- Google Cloud SDK (for GCS upload)

  # Follow instructions at:
  # https://cloud.google.com/sdk/docs/install
- Disk Space: At least 2x your database size for backups
- Memory: Minimum 512MB RAM
- Permissions: Write access to backup and log directories
- Network: Internet connection (for cloud uploads)
Get up and running in 5 minutes:
# 1. Navigate to the project directory
cd /path/to/la-tanda-web-main
# 2. Copy the environment template
cp env.backup.example .env.backup
# 3. Edit configuration with your database credentials
nano .env.backup
# 4. Make scripts executable
chmod +x scripts/*.sh
# 5. Test the system
./scripts/test-backup-system.sh
# 6. Set up automated backups
sudo ./scripts/setup-cron.sh
# 7. Run your first backup (optional - test immediately)
./scripts/backup-database.sh

If you haven't already, ensure you have the latest version:
cd /path/to/la-tanda-web-main
git pull origin main

The backup system will create these directories automatically, but you can create them manually:
# Create backup directory
sudo mkdir -p /var/backups/latanda
sudo chown $USER:$USER /var/backups/latanda
# Create log directory
sudo mkdir -p /var/log/latanda
sudo chown $USER:$USER /var/log/latanda

chmod +x scripts/backup-database.sh
chmod +x scripts/restore-database.sh
chmod +x scripts/setup-cron.sh
chmod +x scripts/test-backup-system.sh
chmod +x scripts/setup-cloud-storage.sh

./scripts/test-backup-system.sh

This will check all prerequisites and configuration.
- Copy the environment template:

  cp env.backup.example .env.backup

- Edit the configuration file:

  nano .env.backup

- Configure database settings:

  # Database Configuration
  DB_NAME=latanda_db
  DB_USER=latanda_user
  DB_HOST=localhost
  DB_PORT=5432
  DB_PASSWORD=your_secure_password_here

  # Backup Configuration
  BACKUP_DIR=/var/backups/latanda
  RETENTION_DAYS=30
  LOG_DIR=/var/log/latanda
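A script can pick up such a `.env`-style file by sourcing it. The snippet below is a minimal sketch of that mechanism (not necessarily what backup-database.sh does verbatim); the temp file stands in for the real `.env.backup`, and `set -a` exports every variable assigned while sourcing:

```shell
# Minimal sketch of loading a .env-style config file; the temp file here
# stands in for the real .env.backup.
ENV_FILE=$(mktemp)
cat > "$ENV_FILE" <<'EOF'
DB_NAME=latanda_db
RETENTION_DAYS=30
EOF

set -a          # export every variable assigned while sourcing
. "$ENV_FILE"
set +a

echo "backing up $DB_NAME, keeping $RETENTION_DAYS days"
```

Because the file is sourced as shell, keep it free of spaces around `=` and quote any value containing special characters.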
# Use a different backup directory
BACKUP_DIR=/mnt/external-drive/backups/latanda

# Keep backups for 60 days instead of 30
RETENTION_DAYS=60

# Enable email notifications
ENABLE_EMAIL=true
EMAIL_TO=admin@latanda.com
EMAIL_FROM=backup@latanda.com
SMTP_SERVER=smtp.gmail.com

Note: Email notifications require a configured mail server (postfix, sendmail, or similar).
Run a backup manually at any time:
./scripts/backup-database.sh

Output:
[2025-01-22 14:30:00] [INFO] ==========================================
[2025-01-22 14:30:00] [INFO] Starting La Tanda Database Backup
[2025-01-22 14:30:00] [INFO] ==========================================
[2025-01-22 14:30:00] [INFO] Database: latanda_db
[2025-01-22 14:30:00] [INFO] Backup file: /var/backups/latanda/latanda_backup_20250122_143000.sql.gz
[2025-01-22 14:30:00] [INFO] Retention: 30 days
[2025-01-22 14:30:00] [INFO] Creating database dump...
[2025-01-22 14:30:15] [SUCCESS] Database backup completed successfully
[2025-01-22 14:30:15] [INFO] Backup size: 45M
Set up automated daily backups:
sudo ./scripts/setup-cron.sh

This creates a cron job that runs daily at 2:00 AM.
Verify cron job:
crontab -l

Expected output:
0 2 * * * /path/to/scripts/backup-database.sh >> /var/log/latanda/cron.log 2>&1
ls -lh /var/backups/latanda/

Example output:
-rw-r--r-- 1 user user 45M Jan 22 02:00 latanda_backup_20250122_020000.sql.gz
-rw-r--r-- 1 user user 44M Jan 21 02:00 latanda_backup_20250121_020000.sql.gz
-rw-r--r-- 1 user user 46M Jan 20 02:00 latanda_backup_20250120_020000.sql.gz
- Initialization
  - Creates backup and log directories if they don't exist
  - Loads configuration from .env.backup
  - Sets up error handling and logging

- Database Dump
  - Connects to PostgreSQL using configured credentials
  - Performs a complete database dump using pg_dump
  - Includes all tables, data, and schema (23 tables for latanda_db)

- Compression
  - Compresses the SQL dump using gzip
  - Typically reduces size by 80-90%
  - Example: 500MB database → 50MB backup file

- Cloud Upload (if enabled)
  - Uploads compressed backup to AWS S3 or Google Cloud Storage
  - Uses the STANDARD_IA (Infrequent Access) storage class for S3
  - Maintains the same filename in cloud storage

- Rotation
  - Finds backups older than the retention period (default: 30 days)
  - Deletes old backups to save disk space
  - Logs all deletions

- Notification
  - Sends email notification (if configured)
  - Logs success/failure with details
  - Records backup size and timing
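The rotation step above boils down to a single `find` invocation. This standalone sketch demonstrates it in a throwaway directory; GNU `touch -d` is assumed for backdating the sample file:

```shell
# Standalone demo of the rotation step in a throwaway directory.
demo=$(mktemp -d)
RETENTION_DAYS=30

touch -d "40 days ago" "$demo/latanda_backup_20241213_020000.sql.gz"  # past retention
touch "$demo/latanda_backup_20250122_020000.sql.gz"                   # fresh backup

# Print (for the log) and delete anything older than the retention window
find "$demo" -name "latanda_backup_*.sql.gz" -type f \
    -mtime +"$RETENTION_DAYS" -print -delete
```

`-print` before `-delete` means every deletion is echoed, which is how the script can log what rotation removed.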
latanda_backup_YYYYMMDD_HHMMSS.sql.gz
Example:
latanda_backup_20250122_143000.sql.gz
↑ ↑ ↑
| | └─ Time: 14:30:00 (2:30 PM)
| └────────── Date: 2025-01-22
└───────────────────────── Prefix
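Because the format is fixed, the date and time can be recovered from a filename with plain shell parameter expansion; a small sketch:

```shell
# Build a filename for "now" following the convention above
BACKUP_FILE="latanda_backup_$(date +%Y%m%d_%H%M%S).sql.gz"

# Parse an existing name back into its parts
name="latanda_backup_20250122_143000.sql.gz"
stamp="${name%.sql.gz}"          # drop extension
stamp="${stamp#latanda_backup_}" # drop prefix -> 20250122_143000
backup_date="${stamp%_*}"        # -> 20250122
backup_time="${stamp#*_}"        # -> 143000
echo "date=$backup_date time=$backup_time"
```

The `YYYYMMDD_HHMMSS` ordering also means a plain alphabetical `ls` lists backups in chronological order.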
Restore from the most recent backup:
# 1. List available backups
ls -lh /var/backups/latanda/
# 2. Restore from a specific backup
./scripts/restore-database.sh /var/backups/latanda/latanda_backup_20250122_020000.sql.gz

# List all backups with details
ls -lht /var/backups/latanda/
# Or use the restore script to see available backups
./scripts/restore-database.sh

Before restoring, stop your application to prevent data inconsistency:
# Stop your Node.js application
pm2 stop la-tanda
# Or if using systemd
sudo systemctl stop latanda

./scripts/restore-database.sh /var/backups/latanda/latanda_backup_20250122_020000.sql.gz

You will be prompted for confirmation:
[2025-01-22 15:00:00] [INFO] Starting database restore
[2025-01-22 15:00:00] [INFO] Backup file: /var/backups/latanda/latanda_backup_20250122_020000.sql.gz
[2025-01-22 15:00:00] [INFO] Target database: latanda_db
This will overwrite the current database. Are you sure? (yes/no):
Type yes to proceed.
# Connect to database
psql -U latanda_user -d latanda_db
# Check table count
\dt
# Verify data
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM tandas;
# Exit
\q

# Restart your application
pm2 restart la-tanda
# Or if using systemd
sudo systemctl start latanda

If you need to restore from cloud storage:
# 1. Download backup from S3
aws s3 cp s3://your-bucket/backups/latanda_backup_20250122_020000.sql.gz /tmp/
# 2. Restore from downloaded file
./scripts/restore-database.sh /tmp/latanda_backup_20250122_020000.sql.gz

# 1. Download backup from GCS
gsutil cp gs://your-bucket/backups/latanda_backup_20250122_020000.sql.gz /tmp/
# 2. Restore from downloaded file
./scripts/restore-database.sh /tmp/latanda_backup_20250122_020000.sql.gz

If the restore script fails, you can restore manually:
# 1. Set password (if needed)
export PGPASSWORD='your_password'
# 2. Drop existing database
dropdb -h localhost -U latanda_user latanda_db
# 3. Create new database
createdb -h localhost -U latanda_user latanda_db
# 4. Restore from backup
gunzip -c /var/backups/latanda/latanda_backup_20250122_020000.sql.gz | \
psql -h localhost -U latanda_user -d latanda_db
# 5. Unset password
unset PGPASSWORD

# Ubuntu/Debian
sudo apt-get install awscli
# macOS
brew install awscli
# Or use pip
pip install awscli

aws configure

Enter your credentials:
AWS Access Key ID: YOUR_ACCESS_KEY
AWS Secret Access Key: YOUR_SECRET_KEY
Default region name: us-east-1
Default output format: json
# Create bucket
aws s3 mb s3://latanda-backups --region us-east-1
# Verify bucket
aws s3 ls

Edit .env.backup:
ENABLE_CLOUD_UPLOAD=true
CLOUD_PROVIDER=s3
S3_BUCKET=latanda-backups
S3_REGION=us-east-1

./scripts/backup-database.sh

Check the logs to verify upload:
tail -f /var/log/latanda/backup.log

# Follow instructions at:
# https://cloud.google.com/sdk/docs/install

gcloud auth login
gcloud config set project YOUR_PROJECT_ID

# Create bucket
gsutil mb -p YOUR_PROJECT_ID gs://latanda-backups
# Verify bucket
gsutil ls

Edit .env.backup:
ENABLE_CLOUD_UPLOAD=true
CLOUD_PROVIDER=gcs
GCS_BUCKET=latanda-backups

./scripts/backup-database.sh

- Use Lifecycle Policies
  - Move old backups to cheaper storage tiers
  - Automatically delete very old backups

- Enable Versioning
  - Protect against accidental deletion
  - Keep multiple versions of backups

- Set Up Access Controls
  - Use IAM roles with minimal permissions
  - Enable bucket encryption

- Monitor Costs
  - Set up billing alerts
  - Review storage usage monthly

- Test Restores
  - Periodically download and test backups
  - Verify data integrity
AWS S3 (us-east-1):
- Storage: $0.023/GB/month (Standard-IA)
- Upload: Free
- Download: $0.01/GB
Google Cloud Storage:
- Storage: $0.020/GB/month (Nearline)
- Upload: Free
- Download: $0.01/GB
Example: 50GB of backups = ~$1-2/month
The backup system maintains detailed logs:
# Main backup log
/var/log/latanda/backup.log
# Restore log
/var/log/latanda/restore.log
# Cron execution log
/var/log/latanda/cron.log

# View last 50 lines of backup log
tail -n 50 /var/log/latanda/backup.log
# Follow backup log in real-time
tail -f /var/log/latanda/backup.log
# Search for errors
grep ERROR /var/log/latanda/backup.log
# View today's backups
grep "$(date +%Y-%m-%d)" /var/log/latanda/backup.log

[YYYY-MM-DD HH:MM:SS] [LEVEL] Message
Example:
[2025-01-22 02:00:00] [INFO] Starting La Tanda Database Backup
[2025-01-22 02:00:15] [SUCCESS] Database backup completed successfully
[2025-01-22 02:00:20] [INFO] Backup uploaded to S3 successfully
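A logger producing this format is only a few lines of shell. The function below is an assumed sketch, not necessarily what backup-database.sh actually defines:

```shell
# Assumed sketch of a logger emitting the [timestamp] [LEVEL] format above.
LOG_FILE=$(mktemp)   # stands in for /var/log/latanda/backup.log

log() {
    local level=$1; shift
    # tee prints to the console and appends to the log file in one step
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*" | tee -a "$LOG_FILE"
}

log INFO "Starting La Tanda Database Backup"
log SUCCESS "Database backup completed successfully"
```

Keeping the level in square brackets makes `grep ERROR` and similar filters reliable.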
Daily:
- Check if backup ran successfully
- Verify backup file was created
- Check log for errors
Weekly:
- Review backup sizes (watch for unusual growth)
- Verify cloud uploads (if enabled)
- Check disk space usage
Monthly:
- Test a restore procedure
- Review retention policy
- Audit access logs
- Review cloud storage costs
Set up monitoring alerts using cron:
# Add to crontab
0 3 * * * /path/to/scripts/check-backup-status.sh

Create a simple monitoring script:
#!/bin/bash
# check-backup-status.sh
BACKUP_DIR="/var/backups/latanda"
TODAY=$(date +%Y%m%d)
if ls ${BACKUP_DIR}/latanda_backup_${TODAY}_*.sql.gz 1> /dev/null 2>&1; then
echo "✓ Backup successful for $(date +%Y-%m-%d)"
else
echo "✗ No backup found for $(date +%Y-%m-%d)" | mail -s "ALERT: Backup Missing" admin@latanda.com
fi

Error:
mkdir: cannot create directory '/var/backups/latanda': Permission denied
Solution:
# Create directories with proper permissions
sudo mkdir -p /var/backups/latanda /var/log/latanda
sudo chown $USER:$USER /var/backups/latanda /var/log/latanda

Error:
pg_dump: command not found
Solution:
# Install PostgreSQL client tools
# Ubuntu/Debian
sudo apt-get install postgresql-client
# CentOS/RHEL
sudo yum install postgresql
# macOS
brew install postgresql

Error:
pg_dump: error: connection to server failed: FATAL: password authentication failed
Solution:
# Check your .env.backup file
nano .env.backup
# Verify credentials
DB_PASSWORD=your_correct_password
# Test connection manually
psql -h localhost -U latanda_user -d latanda_db

Error:
gzip: write error: No space left on device
Solution:
# Check disk space
df -h
# Clean up old backups manually
rm /var/backups/latanda/latanda_backup_2024*.sql.gz
# Reduce retention period in .env.backup
RETENTION_DAYS=15

Problem: Backups not running automatically
Solution:
# Check if cron service is running
sudo systemctl status cron
# Start cron if stopped
sudo systemctl start cron
# Verify cron job exists
crontab -l
# Check cron logs
grep CRON /var/log/syslog
# Re-run setup
./scripts/setup-cron.sh

Error:
Failed to upload backup to S3
Solution:
# Test AWS credentials
aws s3 ls
# Reconfigure AWS
aws configure
# Check bucket permissions
aws s3api get-bucket-acl --bucket your-bucket-name
# Test manual upload
aws s3 cp /var/backups/latanda/test.txt s3://your-bucket/

Error during restore:
gzip: stdin: invalid compressed data--format violated
Solution:
# Test backup file integrity
gunzip -t /var/backups/latanda/latanda_backup_20250122_020000.sql.gz
# If corrupted, use a different backup
ls -lht /var/backups/latanda/
# Download from cloud if available
aws s3 cp s3://your-bucket/backups/latanda_backup_20250121_020000.sql.gz /tmp/

Run backup script with verbose output:
# Enable debug mode
bash -x ./scripts/backup-database.sh

If you encounter issues not covered here:
- Check the logs: /var/log/latanda/backup.log
- Run the test script: ./scripts/test-backup-system.sh
- Review PostgreSQL logs: /var/log/postgresql/
- Search GitHub issues: La Tanda Repository
- Contact support: support@latanda.com
# Set restrictive permissions on .env.backup
chmod 600 .env.backup
# Ensure only owner can read
ls -l .env.backup
# Should show: -rw------- 1 user user

# Set permissions on backup directory
chmod 700 /var/backups/latanda
# Encrypt backups (optional)
# Add to backup script:
gpg --encrypt --recipient admin@latanda.com backup.sql.gz

Instead of storing passwords in .env.backup, use .pgpass:
# Create .pgpass file
echo "localhost:5432:latanda_db:latanda_user:your_password" > ~/.pgpass
chmod 600 ~/.pgpass
# Remove DB_PASSWORD from .env.backup

# Rotate AWS credentials every 90 days
aws iam create-access-key --user-name backup-user
# Update credentials
aws configure

# Log all backup operations
# Already included in backup-database.sh
# Review logs regularly
grep -i "backup\|restore" /var/log/latanda/*.log

# If database is on remote server, use SSH tunnel
ssh -L 5432:localhost:5432 user@db-server
# Or configure PostgreSQL pg_hba.conf to allow only specific IPs

# Quarterly disaster recovery drill:
# 1. Simulate data loss
# 2. Restore from backup
# 3. Verify data integrity
# 4. Document time to recover

Add checksum verification:
# Generate checksum after backup
sha256sum backup.sql.gz > backup.sql.gz.sha256
# Verify before restore
sha256sum -c backup.sql.gz.sha256

Q: How long does a backup take?
A: Typically 1-5 minutes for databases under 1GB. Larger databases may take 10-30 minutes. The exact time depends on database size, server performance, and compression speed.
Q: How much disk space do I need?
A: Plan for at least 2x your database size. For example, if your database is 500MB, compressed backups will be ~50MB each. With 30 days retention, you'll need ~1.5GB.
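That estimate is easy to reproduce; the arithmetic below assumes the roughly 10:1 gzip compression ratio quoted earlier in this guide:

```shell
# Back-of-envelope disk estimate, assuming ~10:1 gzip compression.
DB_SIZE_MB=500
COMPRESSED_MB=$((DB_SIZE_MB / 10))            # ~50 MB per backup
RETENTION_DAYS=30
TOTAL_MB=$((COMPRESSED_MB * RETENTION_DAYS))  # 50 * 30 = 1500 MB ~ 1.5 GB
echo "need about ${TOTAL_MB} MB"
```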
Q: Can I change the backup schedule?
A: Yes. Edit the cron job:
crontab -e
# Change: 0 2 * * * to your preferred time
# Example for 3:30 AM: 30 3 * * *

Q: Can I run multiple backups per day?
A: Yes. Add multiple cron entries:
0 2 * * * /path/to/backup-database.sh # 2:00 AM
0 14 * * * /path/to/backup-database.sh  # 2:00 PM

Q: What happens if a backup fails?
A: The script will:
- Log the error
- Clean up incomplete backup files
- Send email notification (if configured)
- Exit with error code
Q: Can I backup specific tables only?
A: Yes. Modify the pg_dump command in backup-database.sh:
pg_dump -t users -t tandas -t transactions ...

Q: How long does a restore take?
A: Usually 2-10 minutes for databases under 1GB. Larger databases may take 20-60 minutes.
Q: Will restore overwrite my current database?
A: Yes. The restore script drops the existing database and creates a new one. Always backup current data before restoring.
Q: Can I restore to a different database name?
A: Yes. Edit the restore script or specify manually:
gunzip -c backup.sql.gz | psql -d different_db_name

Q: Can I restore specific tables only?
A: Yes, but it requires manual extraction:
# Extract specific table
gunzip -c backup.sql.gz | grep -A 10000 "CREATE TABLE users" > users.sql
psql -d latanda_db -f users.sql

Q: Is cloud storage required?
A: No, it's optional. Local backups are sufficient for many use cases. Cloud storage provides additional protection against server failure.
Q: Which cloud provider should I use?
A: Both AWS S3 and Google Cloud Storage are excellent. Choose based on:
- Your existing cloud infrastructure
- Cost preferences
- Geographic requirements
Q: How much does cloud storage cost?
A: For 50GB of backups:
- AWS S3: ~$1.15/month
- Google Cloud Storage: ~$1.00/month
Q: Can I use both S3 and GCS?
A: Not simultaneously with the current script, but you can modify it to upload to both providers.
Q: Are backups encrypted?
A: Backups are compressed but not encrypted by default. For encryption, add GPG encryption to the backup script or use cloud provider encryption.
Q: Who can access the backups?
A: Only users with file system access to /var/backups/latanda/ and cloud storage credentials can access backups.
Q: How do I secure my database password?
A: Use PostgreSQL's .pgpass file instead of storing passwords in .env.backup. See Security Best Practices.
Q: Should I backup the backup scripts?
A: Yes, keep the scripts in version control (Git). The .env.backup file should NOT be committed to Git.
Q: Backup script fails with "command not found"
A: Install required tools:
sudo apt-get install postgresql-client gzip

Q: How do I view backup logs?
A:
tail -f /var/log/latanda/backup.log

Q: Cron job not running?
A: Check cron service and verify job:
sudo systemctl status cron
crontab -l

Q: How do I test the backup system?
A: Run the test script:
./scripts/test-backup-system.sh

You now have a complete, production-ready database backup system for La Tanda with:
✅ Automated daily backups via cron
✅ Gzip compression reducing storage by 80-90%
✅ 30-day rotation automatically cleaning old backups
✅ Cloud storage integration for AWS S3 and Google Cloud Storage
✅ Email notifications for success/failure alerts
✅ Comprehensive logging for audit and debugging
✅ Easy restore process with one-command recovery
✅ Security best practices for protecting sensitive data
# Run manual backup
./scripts/backup-database.sh
# Set up automated backups
sudo ./scripts/setup-cron.sh
# Test backup system
./scripts/test-backup-system.sh
# List available backups
ls -lh /var/backups/latanda/
# Restore from backup
./scripts/restore-database.sh /var/backups/latanda/latanda_backup_YYYYMMDD_HHMMSS.sql.gz
# View logs
tail -f /var/log/latanda/backup.log
# Set up cloud storage
./scripts/setup-cloud-storage.sh

For issues, questions, or contributions:
- GitHub Issues: La Tanda Repository Issues
- Email: support@latanda.com
- Documentation: This guide
Version: 1.0.0
Last Updated: January 22, 2025
Maintainer: La Tanda Development Team