Why you need backup monitoring
It's not enough to run backups — you need to know they're working. A failed backup script that goes unnoticed can lead to data loss.
Step 1: Create a PostgreSQL backup script
Here's a basic shell script to dump your database to a file:
#!/bin/bash
set -e

# Timestamped filename; -F c produces a custom-format archive, so use a .dump extension
DATE=$(date +"%Y-%m-%d_%H-%M-%S")
BACKUP_FILE="/backups/db_$DATE.dump"

# Dump the target database (replace <database>; authenticate via PGPASSWORD or ~/.pgpass)
pg_dump -U postgres -h db -F c -f "$BACKUP_FILE" <database>
Make sure to mount a volume at /backups so the file is stored persistently.
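For example, if the script runs in a container (as in Step 3 below), a bind mount keeps the dumps on the host. A minimal sketch; the host path /srv/backups, the network name, and the image tag backup-job are assumptions:

# Sketch: host path, network name, and image tag are assumptions
docker run --rm \
  --network my-network \
  -e PGPASSWORD=... \
  -v /srv/backups:/backups \
  backup-job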
Step 2: Add monitoring pings
Wrap the backup script with pings to your monitoring service:
#!/bin/bash

START_URL="https://ev.okchecker.com/p/<api-key>/backup-db?status=start"
SUCCESS_URL="https://ev.okchecker.com/p/<api-key>/backup-db?status=success"
FAIL_URL="https://ev.okchecker.com/p/<api-key>/backup-db?status=fail"

# Signal the start; "|| true" keeps a ping failure from aborting the backup itself
curl -fsS "$START_URL" || true

if pg_dump -U postgres -h db -F c -f "/backups/db_$(date +%F_%H-%M-%S).dump" <database>; then
    curl -fsS "$SUCCESS_URL" || true
else
    curl -fsS "$FAIL_URL" || true
    exit 1  # propagate the failure to cron / the container exit code
fi
This way, you’ll get alerts if the backup fails or hangs.
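To turn a hung dump into an explicit failure instead of waiting for the monitoring service to notice the missing success ping, you can wrap pg_dump in coreutils timeout. A sketch; the 15-minute limit is an assumption, pick one comfortably above your normal dump time:

# Kill the dump if it runs longer than 15 minutes, so the fail ping fires promptly
if timeout 15m pg_dump -U postgres -h db -F c -f "/backups/db_$(date +%F_%H-%M-%S).dump" <database>; then
    curl -fsS "$SUCCESS_URL" || true
else
    curl -fsS "$FAIL_URL" || true
    exit 1
fi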
Step 3: Run in Docker with cron
Create a Dockerfile for your backup job:
FROM postgres:16
# The postgres image does not include curl, which the ping wrapper needs
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
COPY backup.sh /backup.sh
RUN chmod +x /backup.sh
CMD ["/backup.sh"]
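Build the image (the tag backup-job is an assumption, reused in the examples here):

docker build -t backup-job .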
Then run it periodically with a cron job on the host, or schedule it inside the container with a container-friendly scheduler such as supercronic (plain cron also works, but needs extra setup to run in the foreground and log to stdout).
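For the host-side option, a crontab entry like the following runs the container every night at 03:00. A sketch; the schedule, network name, host path, and image tag are assumptions:

# m h dom mon dow  command
0 3 * * * docker run --rm --network my-network -v /srv/backups:/backups backup-job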
Optional: Upload to cloud storage
You can extend the script to upload backups to S3, Google Cloud, or other providers. Just make sure to still ping before and after.
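For example, with the AWS CLI, the upload is a single command placed after a successful pg_dump and before the success ping. A sketch; the bucket name is an assumption, and it presumes the CLI is installed in the image and credentials are provided to the container:

# Upload the fresh dump; if this fails, the script falls through to the fail ping
aws s3 cp "$BACKUP_FILE" "s3://my-backup-bucket/postgres/"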
Conclusion
By combining Dockerized backups with pings, you get both reliability and observability. No more guessing if your backups work.