BULK INSERT quickly loads data from a CSV file into an existing SQL Server table.
BULK INSERT streams the entire file in one operation, bypassing row-by-row INSERT overhead. It dramatically reduces import time, especially for large e-commerce datasets such as millions of order rows.
Ensure the target table exists, the SQL Server service account can read the file path, and the file uses a consistent delimiter—typically comma-separated with CRLF line endings.
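As a sketch of that first prerequisite, a target table for the customers.csv file used later might look like this (the dbo.Customers name and column layout are assumptions for illustration, not taken from the file):

    CREATE TABLE dbo.Customers (
        CustomerID INT           NOT NULL PRIMARY KEY,
        FirstName  NVARCHAR(50)  NULL,
        LastName   NVARCHAR(50)  NULL,
        Email      NVARCHAR(255) NULL
    );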
Specify the target table, file path, field terminator, row terminator, first data row, and an optional error file. Keep the options readable by formatting each on its own line.
DATAFILETYPE = 'char' treats the data as character strings, FIELDTERMINATOR sets the column separator, and ROWTERMINATOR defines line breaks. Use FIRSTROW = 2 to skip the header row.
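Putting those options together, a minimal load might look like this (a sketch assuming the dbo.Customers table above and a file at C:\data\customers.csv, both illustrative names):

    BULK INSERT dbo.Customers
    FROM 'C:\data\customers.csv'    -- path as seen by the SQL Server service account
    WITH (
        DATAFILETYPE    = 'char',   -- treat fields as character data
        FIELDTERMINATOR = ',',      -- comma-delimited columns
        ROWTERMINATOR   = '\r\n',   -- CRLF line endings
        FIRSTROW        = 2         -- skip the header row
    );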
Place customers.csv on the SQL Server, grant READ permissions, and run the example query in the next section. Verify the row count afterward.
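A quick verification query, assuming the dbo.Customers table from the sketch above:

    SELECT COUNT(*) AS ImportedRows
    FROM dbo.Customers;

With FIRSTROW = 2, the count should equal the number of lines in the file minus one for the header.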
Always import into a staging table matching the CSV structure. Validate and cleanse, then merge into production tables using INSERT ... SELECT with key deduplication.
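One way to deduplicate on a key during the merge (a sketch; dbo.Customers_Staging and the CustomerID key are assumptions for illustration):

    INSERT INTO dbo.Customers (CustomerID, FirstName, LastName, Email)
    SELECT s.CustomerID, s.FirstName, s.LastName, s.Email
    FROM dbo.Customers_Staging AS s
    WHERE NOT EXISTS (               -- skip rows whose key already exists in production
        SELECT 1
        FROM dbo.Customers AS c
        WHERE c.CustomerID = s.CustomerID
    );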
Use ERRORFILE = 'C:\logs\customers_err.log' to capture rejected rows. After import, inspect the log, correct the data, and rerun only the failed rows.
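A sketch of the error-capture variant (paths are illustrative; note that the error file must not already exist when the statement runs, and MAXERRORS is an added assumption to keep the load running past a few bad rows):

    BULK INSERT dbo.Customers_Staging
    FROM 'C:\data\customers.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\r\n',
        FIRSTROW        = 2,
        ERRORFILE = 'C:\logs\customers_err.log',  -- rejected rows; a companion .Error.Txt file describes each failure
        MAXERRORS = 10                            -- abort only after more than 10 bad rows
    );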
Create a staging table that matches the CSV column order, import the data, then use INSERT ... SELECT into the production table with explicit column mapping.
Use KEEPNULLS in the WITH clause so that empty fields load as NULL instead of receiving the column's default value.
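For example, a sketch reusing the assumed staging table, where only the KEEPNULLS option changes the NULL behavior:

    BULK INSERT dbo.Customers_Staging
    FROM 'C:\data\customers.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\r\n',
        FIRSTROW        = 2,
        KEEPNULLS       -- empty fields load as NULL rather than the column's default
    );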
For straightforward file-to-table loads, BULK INSERT is comparable to or faster than SSIS because it carries less overhead. SSIS excels when complex transformations are required.