Registers a SQL query as a scheduled job that reruns at a fixed interval and stores or exports the results automatically.
Automating reports keeps analytics up to date, eliminates manual query runs, and reduces human error in recurring dashboards and CSV exports.
Only superusers, or roles granted EXECUTE on the parade.report_schedule() function and USAGE on the cron schema used by pg_cron, can create, modify, or drop scheduled reports.
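For example, granting a dedicated reporting role the required privileges might look like the following sketch; the role name reporting_user is hypothetical, and cron is the schema where pg_cron keeps its objects.
-- Hypothetical role; adjust the name to your environment.
-- Omitting the argument list works when the function name is unambiguous.
GRANT EXECUTE ON FUNCTION parade.report_schedule TO reporting_user;
GRANT USAGE ON SCHEMA cron TO reporting_user;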
Call parade.report_schedule() with a unique report_name, the SQL query to run, a cron-style schedule, and an output destination.
SELECT parade.report_schedule(
    report_name   => 'daily_sales',
    sql_query     => $$
        SELECT order_date::date AS day,
               sum(total_amount) AS total_sales,
               count(*)          AS orders
        FROM orders
        WHERE order_date >= current_date - interval '1 day'
        GROUP BY 1;
    $$,
    cron_schedule => '0 2 * * *',   -- 02:00 UTC daily
    destination   => jsonb_build_object(
        'type',   's3',
        'bucket', 'company-reports',
        'path',   'daily_sales/{{DATE}}.csv')
);
Query parade.report_jobs to see name, schedule, next_run, owner, and whether the job is enabled.
SELECT job_id, report_name, cron_schedule, next_run
FROM parade.report_jobs
ORDER BY next_run;
Temporarily pause a job with parade.report_enable(job_id, false), or remove it permanently with parade.report_drop(job_id).
-- pause
SELECT parade.report_enable(42,false);
-- drop
SELECT parade.report_drop(42);
Use parade.report_update() to modify sql_query, cron_schedule, or destination without recreating the job.
SELECT parade.report_update(
    job_id        => 42,
    cron_schedule => '0 */6 * * *'   -- every 6 hours
);
Create covering indexes for large tables used in report queries. Always store timestamps in UTC to avoid daylight-saving surprises. Log to a dedicated schema to keep metadata tidy.
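As a sketch, a covering index for the daily_sales query above might look like this; it assumes the orders table from the earlier example and PostgreSQL 11+ for the INCLUDE clause.
-- Lets the date filter and the aggregated column be served from the index alone.
CREATE INDEX IF NOT EXISTS orders_order_date_covering_idx
    ON orders (order_date)
    INCLUDE (total_amount);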
Check parade.report_logs(job_id) for stderr, runtime, and exit_status. Rerun the underlying SQL manually to isolate data issues.
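For example, to inspect recent runs of job 42 (the selected columns follow the description above; the exact set returned by parade.report_logs() may differ in your version):
SELECT exit_status, runtime, stderr
FROM parade.report_logs(42);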
Scheduling at local midnight instead of UTC can cause double or missed runs when daylight-saving time changes. Always use UTC cron strings.
Exporting millions of rows without compression inflates S3 costs. Add destination option "compression":"gzip" or aggregate first.
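A minimal sketch of enabling compression on an existing job, reusing the bucket and path from the first example (the .csv.gz extension and the exact destination keys are assumptions):
SELECT parade.report_update(
    job_id      => 42,
    destination => jsonb_build_object(
        'type',        's3',
        'bucket',      'company-reports',
        'path',        'daily_sales/{{DATE}}.csv.gz',
        'compression', 'gzip')
);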
Wrap REFRESH MATERIALIZED VIEW CONCURRENTLY in parade.report_schedule() to combine scheduling with index-friendly refreshes.
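For instance, a nightly refresh job might look like the sketch below. analytics.daily_sales_mv is a hypothetical materialized view, and CONCURRENTLY requires a unique index on it; the destination argument is omitted on the assumption that it is optional for jobs that export nothing.
SELECT parade.report_schedule(
    report_name   => 'refresh_daily_sales_mv',
    sql_query     => $$ REFRESH MATERIALIZED VIEW CONCURRENTLY analytics.daily_sales_mv; $$,
    cron_schedule => '30 2 * * *'   -- 02:30 UTC daily
);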
Set up alerts with pg_notify after each run, and keep your job definitions under version control in a migrations folder so the infrastructure stays reproducible.
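One way to wire up the notification (the channel name report_done is arbitrary, and this assumes the job runner accepts multiple statements in sql_query):
-- Appended to the end of a job's sql_query:
SELECT pg_notify('report_done', 'daily_sales finished');
-- An alerting service keeps a session open on the same database and listens:
LISTEN report_done;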
Yes. Queries can reference objects in other schemas; qualify them inside sql_query (e.g., analytics.daily_sales). The job runs with the search_path of owner_role.
ParadeDB bundles pg_cron, but you still need to add pg_cron to shared_preload_libraries in postgresql.conf and restart the server before scheduled reports will run.
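The standard pg_cron setup looks like this; your ParadeDB build may already ship with these settings, so treat it as a checklist.
# postgresql.conf (requires a restart)
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'    # database holding pg_cron metadata; defaults to postgres
-- then, once, in that database:
CREATE EXTENSION IF NOT EXISTS pg_cron;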
Set destination {"type":"s3","format":"json"}. ParadeDB serializes each row as JSON and uploads a .json file.
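For example, switching job 42 from above to JSON output might look like this sketch (bucket and path reuse the values from the first example):
SELECT parade.report_update(
    job_id      => 42,
    destination => jsonb_build_object(
        'type',   's3',
        'bucket', 'company-reports',
        'path',   'daily_sales/{{DATE}}.json',
        'format', 'json')
);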