LOAD DATA INFILE ingests large text files directly into a MariaDB table, bypassing row-by-row INSERTs for speed.
Bulk loading uses LOAD DATA INFILE to stream a text file—typically CSV or TSV—straight into a table. The engine parses and writes rows in a single transaction, reaching hundreds of thousands of rows per second.
Use it for initial data migrations, nightly imports from external systems, or back-filling historical records. INSERT is fine for real-time writes; LOAD DATA INFILE shines when the file already exists and speed matters.
See the full syntax below. Key options: LOCAL (read the file from the client host), FIELDS TERMINATED BY, ENCLOSED BY, IGNORE n LINES, and SET for computed columns.
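A minimal sketch of how these options combine (the customers table layout, file path, and date format are assumptions, not the complete grammar):

    LOAD DATA INFILE '/var/lib/mysql-files/customers.csv'    -- add LOCAL to read from the client host
    INTO TABLE customers
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES                                            -- skip the header row
    (id, name, email, @signup_date)
    SET signup_date = STR_TO_DATE(@signup_date, '%Y-%m-%d');  -- computed column via SET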
Place customers.csv on the server (or add LOCAL to read it from the client) and run the example in the next section. Verify row counts afterwards with SELECT COUNT(*).
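A quick sanity check after the load, using the hypothetical customers table from the sketch above:

    SELECT COUNT(*) FROM customers;   -- should match the file's data rows (header excluded)
    SELECT * FROM customers LIMIT 5;  -- spot-check a few loaded rows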
Represent NULL with \N in the data file, or read the column into a user variable and convert empty strings with SET col = NULLIF(@col,''). Columns not listed in the load inherit their table defaults.
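For example, treating empty strings in an optional column as NULL (column names are assumed):

    LOAD DATA INFILE '/var/lib/mysql-files/customers.csv'
    INTO TABLE customers
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    IGNORE 1 LINES
    (id, name, @phone)               -- read the raw value into a user variable
    SET phone = NULLIF(@phone, '');  -- an empty string becomes NULL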
For multi-million-row files, drop or disable secondary indexes first, load the data, then recreate indexes. This can cut total time by 70-90%.
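A sketch of the drop-load-recreate pattern (the index and column names are placeholders):

    ALTER TABLE customers DROP INDEX idx_email;        -- remove the secondary index first
    LOAD DATA INFILE '/var/lib/mysql-files/customers.csv' INTO TABLE customers
      FIELDS TERMINATED BY ',' IGNORE 1 LINES;
    ALTER TABLE customers ADD INDEX idx_email (email); -- rebuild the index once, after the load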
The account needs the FILE and INSERT privileges (FILE is only required for server-side loads, not for LOCAL). Without LOCAL, the file must reside in the directory named by secure_file_priv, if that variable is set.
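The checks and grants might look like this (user, database, and table names are placeholders):

    SHOW VARIABLES LIKE 'secure_file_priv';       -- server-side files must live under this directory
    GRANT FILE ON *.* TO 'etl_user'@'localhost';  -- FILE is a global privilege
    GRANT INSERT ON shop.customers TO 'etl_user'@'localhost';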
"Errcode 13
" means wrong file path or permissions. "Malformed line" indicates delimiter or quote mismatch. Use SHOW WARNINGS
after the load for row-level diagnostics.
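For example, after a load that reports warnings:

    SHOW WARNINGS LIMIT 10;                 -- level, error code, and message for each flagged row
    SHOW VARIABLES LIKE 'max_error_count';  -- how many warnings the server retains per statement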
.
You can also feed data through a pipe: use LOAD DATA LOCAL INFILE '-' and pipe the data to the client. Make sure local_infile is enabled on both the client and the server.
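A sketch of the settings involved; the shell pipeline in the comment is only illustrative:

    SHOW GLOBAL VARIABLES LIKE 'local_infile';  -- must be ON for LOCAL loads
    SET GLOBAL local_infile = 1;                -- server side; needs a privileged account
    -- Client side: start the client with --local-infile=1 and pipe into it, e.g.
    --   gunzip -c customers.csv.gz | mariadb --local-infile=1 shop \
    --     -e "LOAD DATA LOCAL INFILE '-' INTO TABLE customers FIELDS TERMINATED BY ','"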
To limit the impact on concurrent traffic, run the load during off-peak hours or add the LOW_PRIORITY modifier; partitioning the table can also isolate write amplification.
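For instance, the LOW_PRIORITY modifier (it matters mainly for table-locking engines such as MyISAM):

    LOAD DATA LOW_PRIORITY INFILE '/var/lib/mysql-files/customers.csv'
    INTO TABLE customers
    FIELDS TERMINATED BY ',' IGNORE 1 LINES;  -- waits until no other clients are reading the table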
Triggers do fire during the load: BEFORE and AFTER INSERT triggers run for each loaded row, so heavy logic can slow the process. Drop (and later recreate) or simplify triggers for best throughput.
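MariaDB has no switch to disable a single trigger, so a common pattern is drop, load, recreate (the trigger body here is a placeholder):

    DROP TRIGGER IF EXISTS customers_audit;  -- remove the heavy trigger before the bulk load
    LOAD DATA INFILE '/var/lib/mysql-files/customers.csv' INTO TABLE customers
      FIELDS TERMINATED BY ',' IGNORE 1 LINES;
    CREATE TRIGGER customers_audit AFTER INSERT ON customers
      FOR EACH ROW INSERT INTO audit_log (customer_id) VALUES (NEW.id);  -- restore afterwards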