MariaDB’s slow query log captures statements that exceed a configurable time threshold so you can diagnose and optimize performance bottlenecks.
Slow queries block connections, waste CPU, and delay page loads. Capturing them lets you pinpoint the worst offenders and add indexes, rewrite joins, or cache results.
Turn on logging at runtime without a restart:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2; -- seconds
SET GLOBAL log_queries_not_using_indexes = 'ON';
MariaDB starts writing to the file defined by slow_query_log_file (default: hostname-slow.log in the data directory).
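To confirm the settings took effect and see exactly where the file lives, you can query the server variables:
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';   -- on/off flag and file path
SHOW GLOBAL VARIABLES LIKE 'long_query_time';   -- current threshold in seconds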
To persist the settings across restarts, add these lines to my.cnf:
[mysqld]
slow_query_log=1
slow_query_log_file=/var/log/mysql/slow.log
long_query_time=2
log_queries_not_using_indexes=1
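A few optional, related settings can sit alongside these. The variable names are standard MariaDB options; the values shown are only illustrative:
min_examined_row_limit=1000              # skip statements that examined fewer than 1000 rows
log_slow_verbosity=query_plan,explain    # MariaDB-specific: record extra query-plan detail per entry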
Analyze the log with mysqldumpslow (bundled with MariaDB) or pt-query-digest (from Percona Toolkit):
# Summarize
mysqldumpslow -s c -t 10 /var/log/mysql/slow.log
# Rich analysis
pt-query-digest /var/log/mysql/slow.log
Both commands aggregate similar statements and rank them by count, execution time, or lock time, so the worst offenders stand out immediately.
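For example, you can rank by execution time instead of count, or trim the report to the worst query fingerprints; the flags below are documented options of each tool:
# Top 10 statements by query time
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log
# Report only the 5 worst fingerprints
pt-query-digest --limit 5 /var/log/mysql/slow.log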
Run EXPLAIN on the captured statement:
EXPLAIN SELECT p.name,
SUM(oi.quantity)
FROM OrderItems oi
JOIN Products p ON p.id = oi.product_id
GROUP BY p.name
ORDER BY SUM(oi.quantity) DESC;
Look for full table scans (type: ALL), a missing Using index in the Extra column, or high rows estimates, then add the appropriate indexes.
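If EXPLAIN shows a full scan of OrderItems for the join, an index on the join column usually helps; the index name below is illustrative:
-- Illustrative index on the join column; re-run EXPLAIN afterwards to confirm it is used
CREATE INDEX idx_orderitems_product_id ON OrderItems (product_id);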
For long_query_time, start high (e.g., 2 seconds) to avoid noise, then lower it gradually (e.g., to 0.2 seconds) once the worst problems are fixed. Keep slow_query_log enabled permanently in production.
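For example, tightening the threshold at runtime looks like this; note that a changed GLOBAL value only applies to connections opened after the change:
SET GLOBAL long_query_time = 0.2;  -- fractional seconds are allowed
SELECT @@global.long_query_time;   -- verify the new threshold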
To reduce the number of slow queries in the first place, index foreign keys (customer_id, product_id), avoid SELECT *, paginate large result sets, archive historical rows, and cache expensive aggregates.
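A brief sketch of two of these tactics, assuming an Orders table whose customer_id foreign key and created_at column are hypothetical:
-- Index a foreign key used in joins and WHERE clauses (table and columns assumed)
ALTER TABLE Orders ADD INDEX idx_orders_customer (customer_id);
-- Keyset pagination instead of a large OFFSET
SELECT id, customer_id, created_at
FROM Orders
WHERE id > 1000   -- last id seen on the previous page
ORDER BY id
LIMIT 50;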
Overhead is minimal (usually under 5%) because MariaDB writes the log entry only after the query completes. If you are still concerned, keep the log file on separate storage or raise long_query_time so fewer statements are logged.
You can also isolate particular statement types, such as SELECTs: set log_output='TABLE' and filter on sql_text in mysql.slow_log, or post-process the file, for example with grep -i '^select' /var/log/mysql/slow.log.
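A minimal sketch of the table-based approach, using columns that exist in mysql.slow_log:
SET GLOBAL log_output = 'TABLE';
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
WHERE sql_text LIKE 'SELECT%'
ORDER BY query_time DESC
LIMIT 10;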
To rotate the log, run FLUSH SLOW LOGS; after renaming the file, or use logrotate on Linux. Ensure the MariaDB user can reopen the file after rotation.
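A typical logrotate entry might look like the following; the schedule, retention, and client credentials are assumptions for illustration:
# /etc/logrotate.d/mariadb-slow (illustrative)
/var/log/mysql/slow.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    postrotate
        # reopen the log file; relies on client credentials, e.g. in /root/.my.cnf
        mysql -e "FLUSH SLOW LOGS;"
    endscript
}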