
How Enabling Slow Query Log Enhances Postgres Observability

In PostgreSQL, the slow query log is a feature that allows you to log queries that take longer than a specified threshold to execute. This log helps you identify and optimize queries that may be causing performance issues in your database. Let’s see how we can use it.
Published on March 4, 2024
Adam Furmanek, Dev Rel


Why Do We Need Observability?

Database observability is a crucial component of maintaining and developing a database. It helps us identify and solve issues. Observability is much more than just monitoring, though: to build successful observability, we need to introduce proper telemetry and monitoring throughout our production environment.

One of the first things to set up is query logging. We want to capture the text of each SQL statement, metrics around execution time and memory consumption, and statistics of the tables involved. Unfortunately, many default database settings omit pieces of information that are crucial for debugging. One such piece is the details of slow queries, which are the most probable causes of performance issues.

What Is a Slow Query Log?

The database executes many queries during the day. Some of them are very fast, and some of them may slow the database down and cause issues with other processes using the database. Ideally, we would like to identify these slow queries and examine them more to understand why they are slow. There are many reasons why queries may be slow and many techniques to optimize them. Most of these techniques focus on using the execution plan to understand what happened.

The execution plan describes what the database engine does when executing the query. This can involve many operations: joining tables, using indexes, sorting data, or spilling it to disk temporarily. Such a plan provides all the details; however, plans can consume a lot of space. Therefore, we don’t store them for every single query, as most queries are fast and don’t need any investigation.
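To inspect an execution plan manually, you can ask PostgreSQL to produce one with EXPLAIN. A minimal sketch, assuming a hypothetical orders table:

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;

ANALYZE actually executes the query and reports real timings, while BUFFERS adds I/O statistics. The slow query log aims to capture this kind of information automatically, but only for the queries that need it.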

The slow query log is a mechanism for capturing details of queries that take too long to execute. This helps in the investigation as we capture the details at the moment when the query runs. The slow query log can be useful for identifying performance bottlenecks and optimizing slow queries to improve the overall performance of your PostgreSQL database.

Recommended reading: 8 Proven Strategies to Improve Database Performance 

How to Configure the Slow Query Log?

To enable the slow query log in PostgreSQL, we need to set a couple of parameters. Let’s see them one by one.

First, you need to enable logging with:

log_statement = 'all'

This instructs PostgreSQL to log all syntactically correct statements. The other options are none (log nothing), ddl (log only Data Definition Language statements, i.e., statements that modify the schema), and mod (DDL statements plus statements that modify data, but not maintenance commands like VACUUM).

It’s also worth mentioning that log_statement will not log syntactically incorrect statements; we need log_min_error_statement for that. Also, be aware that log_statement may log confidential information, since the full statement text ends up in the log.
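For example, to also capture the text of statements that fail with an error (error is the default level in recent PostgreSQL versions):

log_min_error_statement = error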

Another parameter logs the duration of all the completed statements:

log_duration = on

This will log how long each statement took. However, the duration entry does not include the query text (the actual statement that was executed). To capture the text as well, we need another parameter:

log_min_duration_statement = 100ms

This logs the duration of every statement that runs for at least one hundred milliseconds and, unlike log_duration, it also includes the query text of the slow statement.
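With this setting, a slow statement produces a log entry similar to the following (illustrative; the prefix depends on your log_line_prefix):

LOG:  duration: 245.123 ms  statement: SELECT * FROM orders WHERE customer_id = 42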

After making these changes, you need to reload the PostgreSQL configuration for them to take effect; these particular parameters do not require a full restart.
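You can also change and apply these settings from a superuser session, without editing postgresql.conf by hand:

ALTER SYSTEM SET log_min_duration_statement = '100ms';
SELECT pg_reload_conf();

ALTER SYSTEM writes the value to postgresql.auto.conf, and pg_reload_conf() makes the running server pick it up.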

There are additional parameters that you may configure. For instance:

log_destination = 'csvlog'

Directs logging to a CSV file; note that csvlog requires the logging collector to be enabled (logging_collector = on). You may prefer a different file format depending on how you process the logs.

log_filename = 'postgresql.log.%Y-%m-%d-%H'

Configures the name of the log file, which makes it easier to process the logs in an automated manner. With the pattern above, a file created on March 4, 2024 at 1 PM would be named postgresql.log.2024-03-04-13.

log_rotation_age = 60

Causes a new log file to be created every sixty minutes (the value is interpreted as minutes by default).

compute_query_id = 'on'

Enables in-core computation of a query identifier. We can use this identifier to find identical queries in a best-effort manner. This parameter is available starting with PostgreSQL 14.
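Assuming PostgreSQL 14 or newer, you can then surface this identifier in every log line through the %Q escape of log_line_prefix, for example:

log_line_prefix = '%m [%p] query_id=%Q '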

Once we log the queries, we need their execution plans. We can use the pg_store_plans extension for that.
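pg_store_plans is a third-party extension, so it must be installed and preloaded before its settings take effect. A minimal setup sketch, assuming the extension’s packages are already installed on the server:

shared_preload_libraries = 'pg_store_plans'

followed by a server restart and, in the target database:

CREATE EXTENSION pg_store_plans;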

pg_store_plans.plan_format = 'json'

Controls what format to use when logging the execution plan.

pg_store_plans.max_plan_length = 1048576

Controls the length of the plan to store. If the plan is too long, it will get truncated. It is important to set this value high enough to store the whole execution plan.

We can also configure what is logged exactly:

pg_store_plans.log_buffers = true
pg_store_plans.log_analyze = true
pg_store_plans.log_timing = true

This should give you enough details of what happened.
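Once the extension is active, you can inspect the collected plans through the view it provides. A sketch, assuming the view and column names of recent pg_store_plans releases (these may differ between versions):

SELECT queryid, calls, plan
FROM pg_store_plans
ORDER BY calls DESC
LIMIT 5;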

What About Ephemeral Databases?

Configuring PostgreSQL this way is simple if your database lives for a long time. This is typically the case when you host your database in the cloud (or generally as a hosted database), or when you run it in a Docker container as a long-running service.

However, if you run PostgreSQL only for a very short period, for instance during automated tests, you may have no easy way to reconfigure it. This can be the case with Testcontainers. Typically, you run some initialization code just before your test suite to start dependencies like storage emulators or database servers, and Testcontainers takes care of running them as Docker containers. However, there is no straightforward way to restart such a container, although in some languages an API handles this quite nicely.

An ephemeral database strategy separates high-throughput, frequently changing data from the main database to improve efficiency and reduce operational risk. The ephemeral database holds disposable data, which addresses issues like query costs and system strain while preserving the stability and performance of the main system.

Similar issues may happen if you host your PostgreSQL for tests as a service in GitHub Actions. You cannot easily control the containers and restart them after applying the configuration changes.

The solution is to use a custom Docker image: prepare an image that ships with a configuration file enabling the slow query log. You can then run the container once, and the configuration will already be in place.
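A minimal sketch of such an image, assuming the official postgres base image (the configuration file name is illustrative):

FROM postgres:16
COPY postgresql-slow-log.conf /etc/postgresql/postgresql.conf
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

Here, postgresql-slow-log.conf is a copy of postgresql.conf with the parameters described above, such as log_min_duration_statement = 100ms.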

Summary

The slow query log is a feature that allows you to log queries that take longer than a specified threshold to execute. This can significantly ease the investigation of slow queries as all the important details of the queries are already available.

