COPY INTO lets you quickly dump any Snowflake table or query result set to one or many CSV files in an internal or external stage.
Run a single COPY INTO command that writes the table's rows into CSV files on a stage you control. No GUI clicks, no intermediate scripts.
-- one-liner
COPY INTO @my_stage/orders_ FROM Orders FILE_FORMAT = (TYPE = CSV);
COPY INTO <stage_path> FROM <table|select> [FILE_FORMAT=(...)] [HEADER=TRUE] [OVERWRITE=TRUE]
The stage can be @~/ (your user stage), @my_int_stage (a named internal stage), or s3://bucket/ if an external stage is mapped.
FILE_FORMAT – inline or named; controls delimiter, compression, and null handling.
HEADER – write column names as the first row (TRUE | FALSE).
SINGLE – produce exactly one file (good for small tables).
OVERWRITE – replace existing files at the target path.
To export a query instead of a whole table, wrap the SELECT in parentheses. Example: orders from the last 30 days.
COPY INTO @my_stage/orders_recent_
FROM (SELECT * FROM Orders WHERE order_date >= CURRENT_DATE - 30)
FILE_FORMAT=(TYPE=CSV FIELD_DELIMITER=',' NULL_IF=('NULL')) HEADER=TRUE SINGLE=TRUE;
If you use a user or internal stage, download the files with SnowSQL's GET command: GET @my_stage/orders_ file:///tmp/;. For S3 stages, the files already sit in the bucket.
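The GET step can also filter which staged files it pulls down. A minimal sketch, assuming the my_stage stage and orders_ prefix from the earlier example, and that the local target directory already exists:

```sql
-- Run inside a SnowSQL session.
-- PATTERN is a regular expression matched against staged file names.
GET @my_stage file:///tmp/ PATTERN = 'orders_.*';
```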
Use HEADER=TRUE for self-describing files. Use SINGLE=TRUE for small tables; skip it for big data to parallelise.
Wrong: COPY INTO Orders .... That form loads data into the table; for an export, always point the target at a stage, e.g., @~/orders_.
SINGLE=TRUE on a large table generates one massive file and slows the export. Remove SINGLE or set it to FALSE so Snowflake shards the output across parallel files.
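When output is sharded, the size of each file can also be capped with the MAX_FILE_SIZE copy option (a value in bytes). A sketch reusing the stage and table names from the examples above:

```sql
-- No SINGLE option, so Snowflake writes multiple files in parallel.
-- MAX_FILE_SIZE caps each output file at roughly 256 MB.
COPY INTO @my_stage/orders_
FROM Orders
FILE_FORMAT = (TYPE = CSV)
MAX_FILE_SIZE = 268435456;
```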
Can the export run on a schedule? Yes: wrap the COPY INTO statement in a Task scheduled daily, and store the files in an external S3 stage for downstream systems.
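A minimal sketch of such a Task; the task name, warehouse name, stage, and table are placeholders:

```sql
-- Assumes a warehouse named my_wh and the my_stage/Orders objects used above.
CREATE OR REPLACE TASK export_orders_daily
  WAREHOUSE = my_wh
  SCHEDULE = 'USING CRON 0 2 * * * UTC'  -- every day at 02:00 UTC
AS
  COPY INTO @my_stage/orders_
  FROM Orders
  FILE_FORMAT = (TYPE = CSV)
  HEADER = TRUE
  OVERWRITE = TRUE;

-- Tasks are created suspended; resume to start the schedule.
ALTER TASK export_orders_daily RESUME;
```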
How do I get column headers in the output? Add HEADER=TRUE to the COPY INTO command. Note that HEADER is a copy option, not a file format option, so it stays on the command itself even when you use a named file format.
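The remaining CSV settings do belong in a file format, which can be created once under a name and reused across exports. A sketch with a hypothetical format name:

```sql
-- my_csv_format is a placeholder name; adjust the options to taste.
CREATE OR REPLACE FILE FORMAT my_csv_format
  TYPE = CSV
  FIELD_DELIMITER = ','
  NULL_IF = ('NULL')
  COMPRESSION = GZIP;

COPY INTO @my_stage/orders_
FROM Orders
FILE_FORMAT = (FORMAT_NAME = 'my_csv_format')
HEADER = TRUE;  -- HEADER remains a copy option on the command
```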
Does the export lock the table? No: COPY INTO reads from the micro-partitions without blocking DML.
Can I export straight to my local machine? Not directly. First copy to an internal stage, then pull the files down with SnowSQL or any compatible client.