Use system tables and settings to detect, analyze, and optimize queries that exceed a desired execution time threshold.
Slow queries indicate inefficient SQL, insufficient indexing, or resource bottlenecks. Tracking them lets you tune schema, rewrite queries, and resize hardware before performance hurts users.
The system.query_log table records every completed statement, while system.processes shows queries that are currently running. Filter query_log by query_duration_ms, type = 'QueryFinish', and a time window to isolate slowness.
Set log_queries = 1 and adjust log_query_threads, log_queries_min_query_duration_ms, and log_queries_min_type in a user profile or per session. Lower thresholds capture more data.
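As a minimal per-session sketch (the 500 ms threshold is an arbitrary example, not a recommendation):

```sql
-- Log only finished queries that ran longer than 500 ms
SET log_queries = 1;
SET log_queries_min_type = 'QUERY_FINISH';
SET log_queries_min_query_duration_ms = 500;
```

Setting these per session is useful for experiments; put them in a profile to apply them server-wide.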
Run a SELECT against system.query_log filtering by event_date = today() (which prunes partitions) and query_duration_ms > 1000, then sort by duration to surface the worst offenders quickly.
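A sketch of such a query, using standard query_log columns and the 1000 ms threshold from the text:

```sql
-- Top 20 slowest finished queries today
SELECT
    event_time,
    query_duration_ms,
    read_rows,
    substring(query, 1, 120) AS query_head
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_date = today()
  AND query_duration_ms > 1000
ORDER BY query_duration_ms DESC
LIMIT 20;
```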
Filter the log by query LIKE '%Orders% JOIN %Customers%' and combine with the memory_usage column to find memory-hungry statements joining Orders and Customers.
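For example, a sketch that surfaces the heaviest such joins (Orders and Customers are the example tables from the text):

```sql
-- Memory-hungry Orders/Customers joins, heaviest first
SELECT
    query_duration_ms,
    formatReadableSize(memory_usage) AS peak_memory,
    substring(query, 1, 120) AS query_head
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_date = today()
  AND query LIKE '%Orders% JOIN %Customers%'
ORDER BY memory_usage DESC
LIMIT 10;
```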
Rewrite SQL to minimize full scans, pre-aggregate where possible, use appropriate table engines, and add materialized views for frequently requested slices. Test with EXPLAIN and benchmark.
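For instance, EXPLAIN with indexes = 1 shows whether the primary key actually prunes granules (the Orders table and its columns here are hypothetical):

```sql
EXPLAIN indexes = 1
SELECT customer_id, sum(amount)
FROM Orders
WHERE order_date >= today() - 30
GROUP BY customer_id;
```

If the output shows nearly all parts and granules selected, the WHERE clause is not using the sorting key and the table ordering may need rethinking.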
Create a scheduled query that exports slow-query rows to a dashboard, or send alerts via external scripts (for example, cron). Monitor trends and investigate spikes promptly.
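A trend query along these lines could feed such a dashboard (the 1000 ms threshold and 7-day window are example choices):

```sql
-- Daily count and p95 duration of slow queries over the past week
SELECT
    event_date,
    count() AS slow_queries,
    quantile(0.95)(query_duration_ms) AS p95_ms
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_date >= today() - 7
  AND query_duration_ms > 1000
GROUP BY event_date
ORDER BY event_date;
```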
Overhead is minimal when you log only finished queries over a reasonable threshold. Logging every statement can increase disk usage and CPU.
Yes. Query system.processes and filter by elapsed (which is measured in seconds, not milliseconds) to see statements still executing.
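A sketch using standard system.processes columns (the 1-second cutoff is an example):

```sql
-- Currently running queries, longest-running first
SELECT
    query_id,
    elapsed,                                   -- seconds since the query started
    formatReadableSize(memory_usage) AS memory,
    substring(query, 1, 120) AS query_head
FROM system.processes
WHERE elapsed > 1
ORDER BY elapsed DESC;
```

A long-running offender found this way can be terminated with KILL QUERY WHERE query_id = '...'.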
Edit the user profile in users.xml (or a custom users.d/ file) and set the desired log_queries_min_query_duration_ms. Because it is a user-level setting, it can also be changed per session with SET, without restarting the server.
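A sketch of such a profile override (the file name is hypothetical; the 1000 ms value is an example):

```xml
<!-- users.d/slow_query_log.xml — hypothetical drop-in file -->
<clickhouse>
    <profiles>
        <default>
            <log_queries>1</log_queries>
            <log_queries_min_query_duration_ms>1000</log_queries_min_query_duration_ms>
        </default>
    </profiles>
</clickhouse>
```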