Importing a CSV loads external comma-separated data into Snowflake tables.
COPY INTO streams CSV files from a stage into a table in parallel, handling compression, field delimiters, and errors automatically.
GRANT USAGE on the warehouse, database, schema, and stage, plus INSERT on the target table. Without these privileges, COPY INTO will fail.
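A minimal sketch of those grants, assuming a role loader_role and objects my_wh, my_db, my_schema, and mystage (all hypothetical names):

    -- Let the role use the compute and reach the containing objects
    GRANT USAGE ON WAREHOUSE my_wh TO ROLE loader_role;
    GRANT USAGE ON DATABASE my_db TO ROLE loader_role;
    GRANT USAGE ON SCHEMA my_db.my_schema TO ROLE loader_role;
    GRANT USAGE ON STAGE my_db.my_schema.mystage TO ROLE loader_role;
    -- Let the role write the loaded rows
    GRANT INSERT ON TABLE my_db.my_schema.Customers TO ROLE loader_role;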
CREATE FILE FORMAT my_csv TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1; keeps quoted text intact and skips the header row.
Stage the file first. PUT uploads a local file to an internal stage (here, the table stage for Customers); run it from SnowSQL or a driver, not the web UI: PUT file:///tmp/customers.csv @%Customers AUTO_COMPRESS=TRUE; Files already sitting in S3 are referenced through an external stage instead, sketched below.
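A minimal sketch of an external stage over S3, assuming a bucket my-bucket and a storage integration my_s3_integration (both hypothetical):

    -- External stage pointing at the bucket; a storage integration,
    -- referenced here by name only, handles authentication
    CREATE STAGE mystage
      URL = 's3://my-bucket/csv/'
      STORAGE_INTEGRATION = my_s3_integration;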
Run COPY INTO Customers FROM @mystage/customers.csv FILE_FORMAT = (FORMAT_NAME = my_csv); Snowflake maps columns by position.
Use COPY INTO ... VALIDATION_MODE = 'RETURN_ERRORS' to preview load errors without inserting any data.
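For example, a dry run against the stage and file format from earlier:

    -- Returns the rows that would fail; loads nothing
    COPY INTO Customers
      FROM @mystage/customers.csv
      FILE_FORMAT = (FORMAT_NAME = my_csv)
      VALIDATION_MODE = 'RETURN_ERRORS';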
Yes: COPY INTO Customers FROM @mystage FILE_FORMAT=(FORMAT_NAME=my_csv) PATTERN='.*\.csv\.gz'; loads every file in the stage whose name matches the pattern.
Add ON_ERROR='CONTINUE' to skip faulty rows, or ON_ERROR='SKIP_FILE' to skip any file containing errors, so the rest of the load completes.
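For instance:

    -- Skip bad rows but keep loading good ones;
    -- use ON_ERROR = 'SKIP_FILE' to drop whole files instead
    COPY INTO Customers
      FROM @mystage
      FILE_FORMAT = (FORMAT_NAME = my_csv)
      ON_ERROR = 'CONTINUE';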
Compress files (gzip), aim for roughly 100-250 MB per compressed file, define explicit file formats, and monitor COPY_HISTORY to verify load success.
Create a task that calls a stored procedure containing the COPY INTO command. Schedule the task with a CRON expression; a sketch follows.
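A sketch of the pattern, reusing the stage and file format from earlier and assuming a warehouse my_wh (hypothetical); this schedule runs daily at 02:00 UTC:

    CREATE OR REPLACE PROCEDURE load_customers()
    RETURNS STRING
    LANGUAGE SQL
    AS
    $$
    BEGIN
      -- The load itself lives in the procedure
      COPY INTO Customers
        FROM @mystage
        FILE_FORMAT = (FORMAT_NAME = my_csv)
        PATTERN = '.*\.csv\.gz';
      RETURN 'load complete';
    END;
    $$;

    CREATE OR REPLACE TASK load_customers_task
      WAREHOUSE = my_wh
      SCHEDULE = 'USING CRON 0 2 * * * UTC'
    AS
      CALL load_customers();

    -- Tasks are created suspended; resume to start the schedule
    ALTER TASK load_customers_task RESUME;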
No. Snowflake maps by column position unless you specify the column list in COPY INTO.
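If the file's column order differs from the table's, a SELECT transform can reorder fields by position; a sketch with hypothetical column names:

    -- $1, $2, $3 refer to the file's first, second, and third fields
    COPY INTO Customers (id, name, email)
      FROM (SELECT $1, $3, $2 FROM @mystage/customers.csv)
      FILE_FORMAT = (FORMAT_NAME = my_csv);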
Yes. Use PUT to upload to an internal stage, then COPY INTO from that stage.
Query the INFORMATION_SCHEMA.COPY_HISTORY table function or the INFORMATION_SCHEMA.LOAD_HISTORY view for detailed per-file status and error info.
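For example, the last 24 hours of loads into Customers (the table name is a placeholder):

    SELECT file_name, status, row_count, first_error_message
    FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
      TABLE_NAME => 'CUSTOMERS',
      START_TIME => DATEADD(hour, -24, CURRENT_TIMESTAMP())));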