DuckDB vs SQLite Benchmarks: Do Analytics Workloads Really Run Faster?

Galaxy Glossary

Do DuckDB benchmarks consistently beat SQLite on analytics workloads?

DuckDB usually outperforms SQLite on analytical, read-heavy workloads because its in-process engine is column-oriented, vectorized, and optimized for scanning large datasets.

Overview

DuckDB has rapidly gained mindshare as an in-process analytics database that promises the simplicity of SQLite with the speed of modern column stores. Benchmark results across open-source projects and independent blogs repeatedly show DuckDB beating SQLite—sometimes by orders of magnitude—on analytic queries. This article explores why those benchmarks look the way they do, how to interpret the numbers responsibly, and what it means for day-to-day data work.

Why DuckDB vs SQLite Matters

SQLite has dominated the embedded database space for two decades, powering everything from mobile apps to IoT sensors. Its strength is transactional OLTP: fast, single-row inserts and point lookups in a single-file, zero-config package. But modern analytics workloads—ad-hoc aggregation, joins across gigabyte-scale Parquet files, window functions—demand sequential scans, heavy CPU vectorization, and late materialization. DuckDB was designed from the ground up for that style of workload while preserving SQLite’s developer-friendly, serverless deployment model.

Architectural Differences Driving Benchmark Results

Storage Layout

SQLite is row-oriented. Each page on disk stores complete rows, so reading one column across many rows incurs unnecessary I/O.

DuckDB is column-oriented. It stores data in contiguous column segments, so a query touching two columns out of fifty reads only the data it needs.
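
You can see this column pruning at work in DuckDB's EXPLAIN output, which lists the columns a scan actually projects. A minimal sketch, using a made-up three-column table:

# Sketch: projection pushdown in DuckDB; the 'wide' table is hypothetical.
import duckdb

con = duckdb.connect()
con.execute("""
    CREATE TABLE wide AS
    SELECT range AS c0, range % 7 AS c1, range % 11 AS c2
    FROM range(1000000);
""")
# The plan's Projections entry should list only c0 and c2; c1 is never read.
for row in con.execute("EXPLAIN SELECT c0, SUM(c2) FROM wide GROUP BY c0").fetchall():
    print(row[1])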

Execution Engine

  • Vectorized processing: DuckDB processes data in CPU-cache-friendly vectors (1024 values at a time by default), enabling SIMD instructions and reducing per-value function-call overhead.
  • Row-at-a-time interpretation: SQLite uses a bytecode virtual machine that handles one row at a time. That model excels at OLTP latency but yields more branch mispredictions and interpreter overhead under scan-heavy loads (the sketch below illustrates the difference).
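
The gap is easy to reproduce outside either engine. The toy sketch below is not DuckDB's or SQLite's internal code; it only shows why batch-at-a-time processing beats a per-value interpreted loop:

# Toy comparison of interpreted row-at-a-time vs vectorized aggregation.
import time
import numpy as np

prices = np.random.rand(1_000_000)

# Row-at-a-time: interpreter overhead is paid once per value.
start = time.perf_counter()
total = 0.0
for p in prices:
    total += p
row_time = time.perf_counter() - start

# Vectorized: one tight, SIMD-friendly loop over the whole column.
start = time.perf_counter()
total_vec = prices.sum()
vec_time = time.perf_counter() - start

print(f"row-at-a-time: {row_time:.3f}s, vectorized: {vec_time:.4f}s")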

Compression & Memory Management

DuckDB applies light, adaptive compression (dictionary, bit-packing, run-length) during writes and transparently decompresses during scans. This shrinks I/O and keeps more hot data in RAM.
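
One way to gauge the effect on disk footprint is to load identical data into both engines and compare file sizes. A rough sketch, assuming a hypothetical events.csv with simple alphanumeric column names (exact ratios vary widely with the data):

# Rough disk-footprint comparison; 'events.csv' is a hypothetical input.
import csv
import os
import sqlite3
import duckdb

# DuckDB file: columnar segments with lightweight compression.
dcon = duckdb.connect("events.duckdb")
dcon.execute("CREATE TABLE events AS SELECT * FROM read_csv_auto('events.csv');")
dcon.close()

# SQLite file: row-oriented B-tree pages, no compression.
scon = sqlite3.connect("events.sqlite")
with open("events.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)  # assumes plain column names safe to splice into DDL
    scon.execute(f"CREATE TABLE events ({', '.join(header)});")
    scon.executemany(
        f"INSERT INTO events VALUES ({', '.join('?' for _ in header)})", reader
    )
scon.commit()
scon.close()

print("DuckDB :", os.path.getsize("events.duckdb"), "bytes")
print("SQLite :", os.path.getsize("events.sqlite"), "bytes")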

Parallelism

SQLite executes each query on a single thread by design (a compile-time option enables shared-cache concurrency, but it rarely helps analytic SQL). DuckDB parallelizes scans and aggregations automatically, scaling near-linearly on multi-core laptops and cloud VMs.
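
DuckDB's thread count is configurable per connection. A minimal sketch using the Python API (the default number of threads depends on your client and version):

# Controlling DuckDB's worker-thread count from Python.
import duckdb

con = duckdb.connect()            # in-memory database
con.execute("SET threads TO 8;")  # cap the worker pool at 8 threads
print(con.execute("SELECT current_setting('threads');").fetchone())  # (8,)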

Dissecting Public Benchmarks

Several community benchmarks compare DuckDB and SQLite on TPC-H or custom workloads. A representative study might load the 1 GB TPC-H dataset into both engines and run the 22 TPC-H queries sequentially. Typical results:

# Pseudo-results for TPC-H Q1-Q22 (scale 1)
# Runtime in seconds (lower is better)
Query    DuckDB    SQLite
Q1         0.35      8.97
Q6         0.05      2.34
Q13        0.41     18.55
...         ...       ...
Total      5.8     167.4

The speedups cluster in two patterns:

  1. 10×-100× gains on aggregation-heavy scans where DuckDB’s vectorized columnar execution shines.
  2. 2×-5× gains on join-heavy queries where intermediate materialization matters less but parallelism still helps.

On point-lookup micro-benchmarks (e.g., SELECT * FROM orders WHERE o_orderkey = 12345), SQLite often wins because its B-tree row store is tuned for exactly that path.
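
To reproduce both patterns on your own data, a timing harness can be as small as the sketch below. It assumes tpch.duckdb and tpch.sqlite (hypothetical file names) already contain identical TPC-H-style orders tables:

# Minimal timing harness; file names and schema are hypothetical.
import time
import sqlite3
import duckdb

SCAN = "SELECT o_custkey, SUM(o_totalprice) FROM orders GROUP BY o_custkey;"
LOOKUP = "SELECT * FROM orders WHERE o_orderkey = 12345;"

def bench(label, run):
    start = time.perf_counter()
    run()
    print(f"{label}: {time.perf_counter() - start:.3f}s")

dcon = duckdb.connect("tpch.duckdb")
scon = sqlite3.connect("tpch.sqlite")

bench("DuckDB scan", lambda: dcon.execute(SCAN).fetchall())
bench("SQLite scan", lambda: scon.execute(SCAN).fetchall())
bench("DuckDB lookup", lambda: dcon.execute(LOOKUP).fetchall())
bench("SQLite lookup", lambda: scon.execute(LOOKUP).fetchall())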

Interpreting the Numbers Responsibly

  • Data in Parquet: DuckDB can query Parquet directly without importing. Benchmarks that scan external Parquet will exaggerate DuckDB’s lead because SQLite requires an extension or ETL first.
  • Warmed caches vs cold start: DuckDB’s compression reduces memory footprint, so more data stays hot between queries.
  • Write performance: Benchmarks usually ignore insert/update throughput. If your workload is write-heavy, SQLite may remain the simpler choice (see the sketch after this list).
  • Binary size: DuckDB’s compiled code (~20 MB) is larger than SQLite’s (~1 MB). For constrained devices, the trade-off matters.
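
To put a rough number on the write path, time row-by-row inserts in both engines. A sketch with a hypothetical two-column payload; keep in mind that per-row inserts are an anti-pattern in DuckDB, which is designed for bulk loads from CSV or Parquet:

# Insert-throughput sketch; table and payload are hypothetical.
import time
import sqlite3
import duckdb

rows = [(i, f"item-{i}") for i in range(100_000)]

scon = sqlite3.connect(":memory:")
scon.execute("CREATE TABLE t (id INTEGER, name TEXT);")
start = time.perf_counter()
with scon:  # one transaction; SQLite shines on this path
    scon.executemany("INSERT INTO t VALUES (?, ?)", rows)
print(f"SQLite inserts: {time.perf_counter() - start:.3f}s")

dcon = duckdb.connect()
dcon.execute("CREATE TABLE t (id INTEGER, name VARCHAR);")
start = time.perf_counter()
dcon.executemany("INSERT INTO t VALUES (?, ?)", rows)
print(f"DuckDB inserts: {time.perf_counter() - start:.3f}s")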

Practical Example: Same Query on Both Engines

# DuckDB (Python API)
import duckdb

con = duckdb.connect()
con.execute("CREATE TABLE orders AS SELECT * FROM parquet_scan('orders.parquet');")
result = con.execute("""
    SELECT o_custkey, SUM(o_totalprice) AS total
    FROM orders
    WHERE o_orderdate BETWEEN DATE '1996-01-01' AND DATE '1996-12-31'
    GROUP BY o_custkey
    ORDER BY total DESC
    LIMIT 10;
""").fetchall()
print(result)

-- SQLite (CLI)
-- ETL step from CSV happens in load_orders_into_sqlite.sql
.mode column
.read load_orders_into_sqlite.sql
SELECT o_custkey,
       SUM(o_totalprice) AS total
FROM orders
WHERE o_orderdate BETWEEN '1996-01-01' AND '1996-12-31'
GROUP BY o_custkey
ORDER BY total DESC
LIMIT 10;

On a 1 GB dataset the DuckDB version typically returns in <200 ms on a laptop; SQLite might take several seconds.

Best Practices When Choosing Between DuckDB and SQLite

Use DuckDB When…

  • You need to run analytical queries over millions of rows.
  • Your data is already in columnar formats like Parquet.
  • You want to exploit multi-core CPUs without running a database server.
  • You perform heavy aggregations or window functions in notebooks or ETL jobs.

Use SQLite When…

  • You have transaction-heavy, single-row operations.
  • Binary size and simplicity are paramount.
  • You run on resource-constrained mobile or embedded hardware.

Common Mistakes and How to Fix Them

Mistake 1: Assuming DuckDB Is Always Faster

Point lookups, small result sets, or write-heavy workloads can favor SQLite. Benchmark your actual queries.

Mistake 2: Forgetting to Enable Parallelism

Some DuckDB clients (e.g., R) default to a single thread. Run SET threads TO N; on the connection, or pass a threads option in the connection configuration (for example, duckdb.connect(config={'threads': 4}) in Python).

Mistake 3: Importing Data Unnecessarily

DuckDB can query Parquet/CSV directly. Copying into tables first wastes time and disk unless you need mutability.
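
For example, a Parquet file can be aggregated in place, with no CREATE TABLE step (orders.parquet is a hypothetical file):

# Aggregate a Parquet file in place; no import required.
import duckdb

top = duckdb.sql("""
    SELECT o_custkey, SUM(o_totalprice) AS total
    FROM 'orders.parquet'
    GROUP BY o_custkey
    ORDER BY total DESC
    LIMIT 5;
""").fetchall()
print(top)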

Working with Galaxy

Because Galaxy is a modern SQL editor that supports JDBC/ODBC and in-process databases, you can point Galaxy at both DuckDB and SQLite files. Galaxy’s AI copilot auto-detects the dialect differences—e.g., date 'YYYY-MM-DD' literals in DuckDB versus ISO strings in SQLite—and suggests optimized rewrites. You can also store a DuckDB file in a shared workspace collection so teammates reproduce the benchmark queries without sending giant CSVs over Slack.

Actionable Recommendations

  1. Download both engines and run your top 5 reporting queries on a representative data sample.
  2. Measure cold-start and warm-cache performance.
  3. Consider parallelism settings and data format (native table vs Parquet scan).
  4. For production pipelines, weigh write speed, file size, and memory use alongside query latency.

Conclusion

In most analytic scenarios, DuckDB’s columnar, vectorized architecture makes it dramatically faster than SQLite. Yet the choice is workload-dependent: transactional apps still benefit from SQLite’s tiny footprint and OLTP optimizer. Understanding the design trade-offs—and benchmarking with realistic data—lets you pick the right embedded database for each job.

Why DuckDB vs SQLite Benchmarks: Do Analytics Workloads Really Run Faster? is important

Choosing the right embedded database can slash query times from minutes to milliseconds, affecting developer productivity, user experience, and infrastructure cost. Understanding benchmark results helps data engineers pick the optimal engine for dashboards or in-app analytics.

DuckDB vs SQLite Benchmarks: Do Analytics Workloads Really Run Faster? Example Usage


SELECT customer_id, SUM(total_price) AS revenue
FROM read_parquet('sales_2023.parquet')
WHERE sale_date BETWEEN DATE '2023-01-01' AND DATE '2023-12-31'
GROUP BY customer_id
ORDER BY revenue DESC
LIMIT 20;

Frequently Asked Questions (FAQs)

Is DuckDB always faster than SQLite?

No. DuckDB excels at large scans and aggregations, but SQLite may be faster for single-row lookups or write-heavy OLTP.

Can Galaxy connect to both DuckDB and SQLite?

Yes. Galaxy’s SQL editor can open local .duckdb and .sqlite files, letting you benchmark engines side by side and share queries via Collections.

How big can a DuckDB database get?

DuckDB supports terabyte-scale files on 64-bit systems, limited mainly by disk space and available RAM for caching.

Does DuckDB require a server process?

No. Like SQLite, DuckDB is embedded; it runs in-process in Python, Java, C++, or any host language.
