The problem
Some weeks ago, we received a complaint from a customer about bad PostgreSQL performance for a specific application. I took a look into the database and found strange things going on: the query planner was choosing “interesting” query plans, tables were bloated with lots of dead rows (one was six times as big as it should have been), and so on.
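How bad the bloat is can be seen directly from the statistics collector. The following query is only a sketch (it uses the stock n_live_tup and n_dead_tup columns of pg_stat_user_tables, and the ratio is just a rough indicator), but it quickly points at the worst tables:

-- Rough bloat indicator: ratio of dead to live rows per table
SELECT relname, n_live_tup, n_dead_tup,
       round(n_dead_tup::numeric / greatest(n_live_tup, 1), 2) AS dead_ratio
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC;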
The cause revealed itself when I looked at the vacuum and analyze timestamps in pg_stat_user_tables:
abc-hans=# SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables;
relname | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze
-----------------------+-------------+-----------------+--------------+------------------
object_ref | | | |
archived_current | | | |
archived_item | | | |
pgtextindex | | | |
archived_container | | | |
archived_chunk | | | |
commit_lock | | | |
archived_blob_info | | | |
archived_state | | | |
archived_blob_link | | | |
archived_item_deleted | | | |
pack_object | | | |
archived_class | | | |
archived_object | | | |
object_state | | | |
object_refs_added | | | |
blob_chunk | | | |
Despite heavy write activity on the database, no table had ever seen autovacuum or autoanalyze. But why?
As I delved into it, I noticed that the application had effectively stopped PostgreSQL’s autovacuum/autoanalyze in two different ways. I’d like to share our findings to help other programmers avoid getting trapped in situations like this.
Unfinished transactions
It turned out that the application had one component which connected to the database and opened a transaction right after startup, but never finished that transaction:
abc-hans=# SELECT procpid, current_timestamp - xact_start AS xact_runtime, current_query
FROM pg_stat_activity ORDER BY xact_start;
procpid | xact_runtime | current_query
---------+-----------------+-----------------------
18915 | 11:46:20.8783 | <IDLE> in transaction
21289 | 11:18:20.07042 | <IDLE> in transaction
Note that in this example the database server had been started about 11 ¾ hours earlier. Vacuuming (whether automatic or manual) stops at the oldest transaction id that is still in use; otherwise it would throw away row versions that an active transaction might still need to see. In our case that meant vacuuming could reclaim practically nothing, because the oldest open transaction had been started only about a minute after the server instance itself. At least this was easy to resolve: we got the developers to fix the application. It now finishes every transaction within a sensible amount of time with either COMMIT or ABORT.
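Until such a fix is deployed, lingering transactions can also be tracked down (and, as a last resort, terminated) directly from SQL. This is only a sketch: it uses the same pre-9.2 column names as above and assumes pg_terminate_backend() is available (PostgreSQL 8.4 or later) and superuser rights:

-- Sessions that have been idle inside a transaction for more than one hour
SELECT procpid, current_timestamp - xact_start AS xact_runtime
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
   AND current_timestamp - xact_start > interval '1 hour';

-- Last resort: terminate such a backend so vacuum can make progress again,
-- e.g. for the first session from the output above
-- SELECT pg_terminate_backend(18915);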
Exclusive table locks
Unfortunately, this was not all of it: autovacuum was working now, but only sporadically. A little research revealed that PostgreSQL cancels a running autovacuum worker when it blocks another transaction’s lock request for more than about a second (the deadlock_timeout), and guess what: the application made quite heavy use of exclusive table locks. The PostgreSQL log gave us the hint that something suspicious was going on:
postgres[13251]: [40-1] user=,db= ERROR: canceling autovacuum task
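To see which sessions hold (or are waiting for) the offending locks at any given moment, pg_locks can be joined against pg_class. Again just a sketch, filtering on the two strongest table-level lock modes:

-- Heavyweight exclusive locks currently held or requested on tables
SELECT c.relname, l.pid, l.mode, l.granted
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
 WHERE l.mode IN ('ExclusiveLock', 'AccessExclusiveLock')
 ORDER BY c.relname;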
Searching the application source brought up several places where table locks were used. Example:
stmt = """
LOCK %(table)s IN EXCLUSIVE MODE;
DELETE FROM %(table)s WHERE docid = %%s;
INSERT INTO %(table)s (docid, coefficient, marker, text_vector)
VALUES (%%s, %%s, %%s, %(clause)s)
""" % {'table': self.table, 'clause': clause}
The textindex code was particularly problematic, as it often dealt with large documents. Statements like the one above could easily keep a table exclusively locked for a noticeable time, long enough to cause frequent autovacuum cancellations.
The developers said that they had introduced the locks because of concurrency issues. As we could not simply get rid of them, I installed a nightly cron job to force-vacuum the database. PostgreSQL has shown much improved query response times since then; some queries’ completion times even improved by a factor of 10. I’ve been told that in the meantime the developers have found a way to remove the locks, so the cron job is not necessary anymore.
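Conceptually, such a nightly job is nothing more than a manual database-wide vacuum; the invocation below is an illustration of the idea, not the actual script we used:

-- Run nightly from cron, e.g. via: psql -d abc-hans -c 'VACUUM ANALYZE;'
-- Reclaims dead rows and refreshes planner statistics for all tables
VACUUM ANALYZE;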
Summary
PostgreSQL is largely self-tuning and a pretty low-maintenance database server, provided you allow it to perform its autovacuum/autoanalyze tasks regularly. We have seen that application programs can effectively put autovacuum out of business. In this particular case, unfinished transactions and extensive use of table locks were the show-stoppers. After we had identified and removed these causes, our PostgreSQL database was running smoothly again.
We are currently integrating some of the most obvious signs of trouble into the standard database monitoring on our managed hosting platform, so that such problems are caught quickly when they show up.
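As an illustration of such a check (not our actual monitoring code, and with an arbitrary one-week threshold), tables that vacuum has not touched for a long time can be flagged with a query like the following; the idle-transaction query shown earlier covers the other failure mode:

-- Tables not vacuumed for a week, neither manually nor by autovacuum
SELECT relname, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 WHERE coalesce(greatest(last_vacuum, last_autovacuum), '-infinity')
       < current_timestamp - interval '7 days';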