Use pg_cron in ParadeDB to run SQL automatically at set intervals.
Automated scheduling keeps dashboards fresh, recalculates aggregates, and cleans data without manual effort. ParadeDB inherits PostgreSQL’s extensibility, so the pg_cron extension is the simplest way to run timed jobs.
ParadeDB itself does not bundle a scheduler. Instead, enable the pg_cron extension on the underlying PostgreSQL instance and create jobs with SELECT cron.schedule(...).
A superuser runs CREATE EXTENSION IF NOT EXISTS pg_cron;. On managed ParadeDB clouds, ask the provider to enable it, or run a self-hosted instance with shared_preload_libraries = 'pg_cron' (changing this setting requires a server restart).
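For a self-hosted instance, the setup can be sketched as follows. The database name paradedb is an assumption; pg_cron runs its background worker against the database named by cron.database_name:

```sql
-- postgresql.conf (edit, then restart the server):
--   shared_preload_libraries = 'pg_cron'
--   cron.database_name = 'paradedb'   -- database where job metadata lives

-- Then, connected to that database as a superuser:
CREATE EXTENSION IF NOT EXISTS pg_cron;
```

After the restart, the cron.job and cron.job_run_details tables exist in that database and jobs can be scheduled from any session connected to it.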
Use SELECT cron.schedule(job_name text, schedule text, command text). The schedule uses the standard five-field cron format (minute, hour, day of month, month, day of week). Any valid SQL fits in command.
Example: SELECT cron.schedule('refresh_order_sales', '0 * * * *', $$REFRESH MATERIALIZED VIEW CONCURRENTLY order_sales_mv$$); refreshes the materialized view at the top of every hour.
Jobs can take parameters: store them in a settings table or embed them directly in the command. For a daily out-of-stock alert: SELECT cron.schedule('low_stock_email', '30 6 * * *', $$CALL send_low_stock_email()$$);
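A minimal sketch of the settings-table approach; the alert_settings, products, and low_stock_report tables are all hypothetical names for illustration:

```sql
-- Hypothetical settings table holding tunable job parameters
CREATE TABLE IF NOT EXISTS alert_settings (
    name  text PRIMARY KEY,
    value integer NOT NULL
);
INSERT INTO alert_settings VALUES ('low_stock_threshold', 10)
ON CONFLICT (name) DO NOTHING;

-- The scheduled command reads the parameter at run time, so
-- changing the setting takes effect without rescheduling the job.
SELECT cron.schedule(
    'low_stock_report', '30 6 * * *',
    $$INSERT INTO low_stock_report (product_id, stock)
      SELECT p.id, p.stock
      FROM products p, alert_settings s
      WHERE s.name = 'low_stock_threshold' AND p.stock < s.value$$
);
```

Reading parameters at run time is usually preferable to embedding them, since the job definition never has to change.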
List jobs with SELECT * FROM cron.job;. pg_cron has no built-in run-now function; to run a job immediately, execute its command directly in a session. Remove a job with SELECT cron.unschedule(jobid); (recent releases also accept the job name).
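Putting the management calls together; the job name refresh_order_sales is reused from the earlier example, and the jobid 42 is a placeholder:

```sql
-- Inspect all scheduled jobs and their schedules
SELECT jobid, jobname, schedule, command, active FROM cron.job;

-- Remove a job by id...
SELECT cron.unschedule(42);

-- ...or by name (supported in recent pg_cron releases)
SELECT cron.unschedule('refresh_order_sales');
```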
Keep jobs idempotent, log results to a table, and reduce lock contention by using CONCURRENTLY for materialized view refreshes or NOWAIT when taking explicit locks.
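One way to combine idempotency with result logging, as a sketch; the job_log and sessions tables are assumptions:

```sql
-- Simple audit table written by each job run
CREATE TABLE IF NOT EXISTS job_log (
    job_name    text,
    ran_at      timestamptz DEFAULT now(),
    rows_purged bigint
);

-- Idempotent cleanup: deleting already-deleted rows is a no-op,
-- so overlapping or repeated runs are harmless.
CREATE OR REPLACE PROCEDURE purge_stale_sessions() LANGUAGE sql AS $func$
    WITH purged AS (
        DELETE FROM sessions WHERE expires_at < now() RETURNING 1
    )
    INSERT INTO job_log (job_name, rows_purged)
    SELECT 'purge_stale_sessions', count(*) FROM purged;
$func$;

SELECT cron.schedule('purge_stale_sessions', '*/15 * * * *',
                     $$CALL purge_stale_sessions()$$);
```

Wrapping the work in a procedure keeps the scheduled command trivial and lets you test the job logic by hand with a plain CALL.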
By default, only superusers can create jobs. Grant scheduling rights to trusted roles with GRANT USAGE ON SCHEMA cron TO analyst;
Query cron.job_run_details for start_time, end_time, and status, and alert on consecutive failures with external tooling.
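A monitoring query along these lines can feed an external alerting tool; the job name is the example from above:

```sql
-- Most recent runs for one job, newest first
SELECT start_time, end_time, status, return_message
FROM cron.job_run_details
WHERE jobid = (SELECT jobid FROM cron.job
               WHERE jobname = 'refresh_order_sales')
ORDER BY start_time DESC
LIMIT 5;

-- Failures in the last day, as a simple alert signal
SELECT count(*) AS recent_failures
FROM cron.job_run_details
WHERE status = 'failed'
  AND start_time > now() - interval '1 day';
```

Note that cron.job_run_details grows unbounded; purging old rows periodically is itself a good candidate for a scheduled job.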
If you need second-level granularity or cross-cluster orchestration, use an external workflow engine like Airflow instead.
There is no direct UPDATE of a job's definition; unschedule the job and reschedule it with the new parameters (recent pg_cron releases also offer cron.alter_job).
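For example, to move the hourly refresh job from earlier to a 30-minute cadence:

```sql
-- Drop the old definition, then recreate it with the new schedule
SELECT cron.unschedule('refresh_order_sales');
SELECT cron.schedule('refresh_order_sales', '*/30 * * * *',
                     $$REFRESH MATERIALIZED VIEW CONCURRENTLY order_sales_mv$$);
```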
pg_cron does not retry failed jobs. Capture failures in cron.job_run_details and build alerts or retries in external tooling.
Replicated setups work, but only the primary runs jobs. The job tables replicate with the rest of the database; make sure standbys also load pg_cron in shared_preload_libraries so a promoted primary keeps running jobs after failover.