Automate PostgreSQL schema changes, tests, and deployments through a CI/CD pipeline for safe, repeatable releases.
Automated pipelines eliminate manual SQL execution, catch migration errors early, and keep every environment in sync. Teams ship features faster while protecting production data.
GitHub Actions, GitLab CI, CircleCI, and Jenkins all support PostgreSQL. Choose the platform that already handles your application builds to keep pipelines unified.
Store SQL in a `migrations/` folder. Prefix each script with an ordered number and a short description: `001_create_customers.sql`, `002_add_index_orders.sql`. The CI runner executes files in lexical order.
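A quick sketch of why the zero-padded prefixes matter (illustrative file names, created in a scratch folder):

```shell
# Create a scratch migrations/ folder with zero-padded prefixes.
mkdir -p migrations
touch migrations/001_create_customers.sql \
      migrations/002_add_index_orders.sql \
      migrations/010_add_orders_fk.sql
# Shell globs expand in lexical (byte) order, so 010 sorts after 002 only
# because the prefixes are zero-padded; a bare "10_" would sort before "2_".
for f in migrations/*.sql; do echo "$f"; done
```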
Spin up a `postgres:15` service container, run migrations with `psql`, execute unit tests, and persist artifacts. The job fails if any SQL or test returns a non-zero status.
```yaml
name: db-ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      db:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
        ports: ["5432:5432"]
    env:
      POSTGRES_URL: postgres://postgres:postgres@localhost:5432/postgres
    steps:
      - uses: actions/checkout@v4
      - name: Wait for DB
        run: until pg_isready -h localhost -p 5432; do sleep 1; done
      - name: Run migrations
        run: |
          for f in migrations/*.sql; do
            psql "$POSTGRES_URL" -v ON_ERROR_STOP=1 -f "$f"
          done
      - name: Run unit tests
        run: npm test # or pytest, go test, etc.
```

Note the `-v ON_ERROR_STOP=1` flag: without it, `psql` exits 0 even when a statement fails, so a broken migration would not fail the job.
Pair every `up` script with a `down` script, or create reversible SQL using tools like Sqitch. CI verifies that rollbacks succeed on a scratch database before merges.
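One cheap pre-check before even touching a scratch database is verifying that every up migration has a rollback counterpart. A minimal sketch, assuming a hypothetical `NNN_name.sql` / `NNN_name.down.sql` naming convention:

```shell
# Fail fast when an up migration lacks a rollback script.
mkdir -p migrations
touch migrations/001_create_customers.sql migrations/001_create_customers.down.sql
touch migrations/002_add_index_orders.sql migrations/002_add_index_orders.down.sql
status=0
for up in migrations/*.sql; do
  case "$up" in *.down.sql) continue ;; esac   # skip rollback scripts themselves
  down="${up%.sql}.down.sql"
  if [ ! -f "$down" ]; then
    echo "missing rollback for $up"
    status=1
  fi
done
echo "rollback check status: $status"
```

Running this as an early CI step means a forgotten `down` script blocks the merge before any database work starts.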
Yes. Add a `seed.sql` to load product categories or initial customers. Execute it after structural migrations in the same job so tests run against realistic data.
Store non-secret defaults in the workflow file and inject secrets (`POSTGRES_URL`, `PGPASSWORD`) through the CI platform's secret manager. Avoid committing credentials to Git.
✓ Run migrations on a disposable database every pull request.
✓ Keep schema and application code in the same repo.
✓ Tag releases only after pipeline success.
✓ Back up production before deploying.
No, but using a Docker container guarantees a clean database for every run and avoids host-level conflicts.
Store CSV files in a `seeds/` folder and load them with `\copy` in your pipeline. Compress the files, or keep them in object storage, if they exceed Git limits.
Break destructive changes into additive steps—add new columns, backfill, switch code, then drop old columns—to avoid locks during deployments.
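The expand/backfill/contract sequence above can be sketched as three numbered migrations. This uses a hypothetical rename of `customers.fullname` to `customers.full_name`; the files are only written here, not executed:

```shell
# Write the additive steps as separate, ordered migration files.
mkdir -p migrations
cat > migrations/010_add_full_name.sql <<'SQL'
ALTER TABLE customers ADD COLUMN full_name text;  -- additive: no table rewrite
SQL
cat > migrations/011_backfill_full_name.sql <<'SQL'
UPDATE customers SET full_name = fullname WHERE full_name IS NULL;
SQL
# Deploy application code that reads and writes full_name between 011 and 012.
cat > migrations/012_drop_fullname.sql <<'SQL'
ALTER TABLE customers DROP COLUMN fullname;  -- contract: only after code stops using it
SQL
ls migrations
```

Because each step ships in its own release, the application always runs against a schema it understands, and no deploy holds a long lock.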