EXPORT SCHEMA writes the DDL of one or more ClickHouse databases or tables to disk so you can version-control or migrate the structures without data.
It dumps only the database objects (databases, tables, views, materialized views), letting you move structures between clusters or keep them in Git without transferring terabytes of data.
Run EXPORT SCHEMA myshop_db TO '/var/ck_dumps/myshop_ddl/' FORMAT SQL; ClickHouse then creates one .sql file per object, each containing a CREATE statement that can be replayed on another server.
Yes. Separate multiple tables with commas: EXPORT SCHEMA myshop_db.Customers, myshop_db.Orders TO '/tmp/ecom_schema/' FORMAT SQL;
Engine, partition keys, sorting keys, TTL expressions, comments, and table-level settings are all included in the generated DDL, so replaying it reproduces the same table definitions.
SQL is the default; Native and JSONEachRow are available when you need programmatic parsing. Specify the output format with the FORMAT keyword.
ClickHouse must have write permission on the target path, which must reside on the same server where the query runs. Use an NFS mount for remote storage.
Pipe the .sql files into clickhouse-client --multiquery to replay them. For Native/JSONEachRow dumps, use IMPORT SCHEMA.
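A minimal replay helper for that step, assuming the one-CREATE-per-.sql-file layout described above; the CLIENT variable and the apply_schema name are assumptions added here so you can point the script at a specific host or swap in another command:

```shell
# Replay every exported .sql file against a ClickHouse server.
# CLIENT defaults to "clickhouse-client --multiquery"; override it to
# target another host, e.g. CLIENT="clickhouse-client -h replica1 --multiquery".
apply_schema() {
  local dir="$1"
  for f in "$dir"/*.sql; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    ${CLIENT:-clickhouse-client --multiquery} < "$f"
  done
}

apply_schema /var/ck_dumps/myshop_ddl
```

Files are applied in glob (alphabetical) order, so if some objects depend on others, name the dump files accordingly.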
Commit the generated .sql files to Git after every migration. Pair with CI to apply the DDL automatically in staging.
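A small helper for that workflow, assuming the dump directory is itself a Git working tree; the function name and commit message are illustrative:

```shell
# Stage the generated .sql files and commit; no-op when nothing changed.
commit_schema() {
  local dir="$1"
  git -C "$dir" add -- '*.sql'
  if ! git -C "$dir" diff --cached --quiet; then
    git -C "$dir" commit -q -m "schema export $(date +%F)"
  fi
}
```

Calling it right after each EXPORT SCHEMA run keeps the Git history one commit per migration.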
Schedule nightly EXPORT SCHEMA jobs. Keep only the last N dumps to save disk space.
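One way to implement that retention, assuming each nightly job writes into a timestamped directory named schema_* (a naming convention invented here for illustration):

```shell
# Delete all but the newest $2 dump directories under $1.
# Relies on GNU ls -t (newest first) and xargs -r (skip when empty).
prune_dumps() {
  local root="$1" keep="$2"
  ls -1dt "$root"/schema_* 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -rf
}

prune_dumps /var/ck_dumps 7   # keep the seven most recent dumps
```

Run it at the end of the nightly export job so disk usage stays bounded.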
Wrong path permissions: EXPORT fails silently when ClickHouse lacks write rights. Fix with chown/chmod.
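A preflight sketch for that pitfall; the clickhouse user name assumes the default Linux package install, and ensure_export_dir is a helper invented here:

```shell
# Create the target directory and hand it to the ClickHouse server user.
# chown needs root, so it is skipped (with a warning) when not permitted.
ensure_export_dir() {
  local dir="$1" owner="${2:-clickhouse}"
  mkdir -p "$dir"
  chmod 770 "$dir"
  chown "$owner":"$owner" "$dir" 2>/dev/null \
    || echo "warning: could not chown $dir to $owner (run as root?)" >&2
}
```

Run it as root once before the first export; after that, EXPORT SCHEMA only needs the directory to stay writable.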
Missing FORMAT keyword: omitting FORMAT defaults to SQL; set FORMAT Native explicitly when you need a fast IMPORT SCHEMA round-trip.
Use SHOW CREATE TABLE for quick single-table DDL; use BACKUP when you need both schema and data.
No. ClickHouse reads system tables to build DDL, so no data locks occur and the command is fast.
Not yet. List each table explicitly or export the full database and delete unwanted .sql files.
Yes. Re-running the generated CREATE statements on an empty cluster reproduces the same structures.