Export a ClickHouse table’s rows to a client-side CSV file with one command.
CSV is the simplest way to move ClickHouse data into spreadsheets, BI tools, or other databases. It keeps column order, supports large result sets, and avoids vendor lock-in.
Run clickhouse-client --query "SELECT * FROM db.table FORMAT CSV" > file.csv. The client streams rows directly to stdout, bypassing intermediate files for maximum throughput.
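As a quick sanity check (db.table and file.csv are the placeholders from above), compare the file's line count with the table's row count; the two match unless string values contain embedded newlines, which CSV quotes across lines:
wc -l < file.csv
clickhouse-client --query "SELECT count() FROM db.table"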
Add a header row by switching the output format to CSVWithNames. Example: clickhouse-client --query "SELECT * FROM Orders FORMAT CSVWithNames" > orders.csv.
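To confirm the header made it into the file, peek at the first line:
head -1 orders.csv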
Any valid WHERE, ORDER BY, or LIMIT clause is applied before the rows are written, e.g., SELECT * FROM Orders WHERE order_date >= '2023-01-01' FORMAT CSV.
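As a complete invocation (the column names and limit are illustrative):
clickhouse-client --query "
  SELECT * FROM Orders
  WHERE order_date >= '2023-01-01'
  ORDER BY order_date
  LIMIT 100000
  FORMAT CSV" > orders_2023.csv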
List the columns explicitly: SELECT id, name FROM Customers FORMAT CSV. This trims the file size and simplifies downstream parsing.
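Column selection combines naturally with the header format from above (customers_slim.csv is an arbitrary name):
clickhouse-client --query "SELECT id, name FROM Customers FORMAT CSVWithNames" > customers_slim.csv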
Use --max_block_size to tune memory usage and pipe through gzip to compress on the fly: clickhouse-client --max_block_size=50000 --query "SELECT * FROM OrderItems FORMAT CSV" | gzip > order_items.csv.gz.
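Standard gzip tooling verifies the result without unpacking it:
gzip -t order_items.csv.gz          # exit code 0 means the archive is intact
zcat order_items.csv.gz | head -5   # spot-check the first rows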
When the file will be re-imported, export date/time columns as ISO-style strings (ClickHouse's default YYYY-MM-DD HH:MM:SS rendering) to avoid parsing issues in other tools. Leave --use_client_time_zone=0 (the default) so DateTime values render in the server's time zone rather than each client's; exports are only in UTC if the server itself runs in UTC, so confirm that before relying on it across teams.
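To see which zone that is before exporting, ask the server directly (timezone() is a built-in ClickHouse function):
clickhouse-client --query "SELECT timezone()"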
Use INTO OUTFILE from the command-line client: SELECT * FROM Products INTO OUTFILE 'products.csv' FORMAT CSV. The file is created on the client host, not the server, and the clause is not supported over the HTTP interface.
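The same clause works non-interactively; the file lands in the shell's working directory on the client machine. Note that INTO OUTFILE refuses to overwrite an existing file, so remove stale exports first:
clickhouse-client --query "SELECT * FROM Products INTO OUTFILE 'products.csv' FORMAT CSV"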
Create a reusable shell function:
export_csv() { clickhouse-client --query "$1 FORMAT CSVWithNames" > "$2"; }
Call it with export_csv "SELECT * FROM Customers" customers.csv.
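A slightly hardened sketch of the same idea (the temp-file step is optional) avoids leaving a truncated file behind when a query fails:
export_csv() {
  local query=$1 out=$2 tmp
  tmp=$(mktemp) || return 1
  # write to a temp file first so a failed query never clobbers a good export
  if clickhouse-client --query "$query FORMAT CSVWithNames" > "$tmp"; then
    mv "$tmp" "$out"
  else
    rm -f "$tmp"
    return 1
  fi
}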
Commas and quotes inside values need no extra settings: ClickHouse's CSV output wraps such fields in double quotes and escapes embedded quotes by doubling them. To change the field separator itself, set --format_csv_delimiter.
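For example, spreadsheet locales that expect semicolons can be served directly (the output name is arbitrary):
clickhouse-client --format_csv_delimiter=";" --query "SELECT * FROM Orders FORMAT CSV" > orders_semicolon.csv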
Pipe the output straight to aws s3 cp - s3://bucket/key.csv, or write from the server with the built-in s3 table function.
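A minimal sketch of the server-side route; the bucket URL and credentials are placeholders you must supply:
clickhouse-client --query "
  INSERT INTO FUNCTION s3(
      'https://my-bucket.s3.amazonaws.com/exports/products.csv',
      'AWS_KEY_ID', 'AWS_SECRET',
      'CSVWithNames'
  )
  SELECT * FROM Products"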