Follow up actions after the filesystem corruption incident

On 2014-06-07, the Flying Circus experienced a quite unfortunate filesystem corruption incident. Most of the VMs have been cleaned up since then, but a few defective files are still around. In the following article, I’ll provide some background on the types of corruption we saw, what you (as our customer) can expect from platform management to rectify the situation, and what everyone can do to check their own applications.

Observed types of filesystem corruption

The incident resulted in lost updates on the block layer. This means that some filesystem blocks were reverted to an older state. Depending on what kind of information was stored in the affected blocks, this may lead to different effects:

  • files show old content, as a file update got lost;
  • files show random content, as updates to the file’s extent list got lost;
  • files have disappeared completely, as updates to their containing directory got lost.

On most VMs that experienced filesystem corruption, filesystem metadata was rendered invalid as well. We were able to identify these VMs quickly and contacted the affected customers immediately. However, there are still some cases where the filesystem metadata was not affected (so the automated checks did not find anything), but file contents were. Generally, files that were updated between 2014-06-02 and 2014-06-07, or that live in directories that saw changes during that time, are at risk.
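As a starting point for your own review, a minimal sketch along the following lines lists files whose modification time falls into the risk window. The root directory is just a placeholder, and the result is only a heuristic: a file whose update was lost may well carry an older timestamp, so treat the output as a candidate list for closer inspection, not as a complete one.

    import os
    from datetime import datetime

    # Placeholder; point this at the directories holding your project data.
    ROOT = "/srv/myproject"

    # Risk window: 2014-06-02 up to and including 2014-06-07.
    START = datetime(2014, 6, 2).timestamp()
    END = datetime(2014, 6, 8).timestamp()

    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.lstat(path).st_mtime
            except OSError:
                continue  # file vanished or is unreadable
            if START <= mtime < END:
                print(path)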

These cases of corruption are impossible to detect via filesystem checks. To make sure that all VMs are in a reasonably good state, twofold action is necessary: First, we will check the OS and all managed components as part of our platform management. Second, we ask you to take a look at your applications to uncover previously hidden cases of filesystem corruption.

Platform-wide checks

After taking short-term action to ensure that we will not run into a similar problem again, we are currently in the process of performing a deep scan of all installed OS files and managed components. In particular, we are going to:

  • perform a consistency check on all files installed from OS packages;
  • perform integrity checks on all managed databases (PostgreSQL, LDAP, …; see the sketch after this list);
  • reboot all VMs to ensure that there is no stale cached content.
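For illustration, an integrity check of this kind can be as simple as forcing a full read of the data, for example by dumping it. The following sketch does that for a PostgreSQL database, in case you want to run a similar check on a database you manage yourself; the database name is a placeholder, and pg_dump needs sufficient access rights.

    import subprocess

    # Placeholder names; replace with the databases on your VM.
    DATABASES = ["mydb"]

    for db in DATABASES:
        # A full dump reads every row; corrupted pages typically make
        # pg_dump fail with a non-zero exit status and an error message.
        result = subprocess.run(
            ["pg_dump", "--format=custom", "--file=/dev/null", db],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print(f"{db}: pg_dump failed:\n{result.stderr}")
        else:
            print(f"{db}: dumped without errors")

Note that a dump does not read index contents; if indexes are suspect, they can be rebuilt with REINDEX.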

Inconsistencies that we find will be repaired automatically where possible (e.g., OS files). As far as application data is concerned (e.g., databases), we will contact you to work out the available options to restore consistency. VM reboots will take place during announced maintenance periods as usual.

Application-specific checks

It is not possible to perform an automated deep check of project files in the way we do for OS files: too much context knowledge is needed to judge what is fine and what looks suspicious. So we ask you to take a critical look at your applications yourself. Our experience so far shows that filesystem corruption reveals itself quite quickly once you start looking for the right signs.

Two areas need to be checked: static project files, such as application software and configuration, and application-specific data stores.

Checking application installations

Our first and most important recommendation is to restart all applications and check for obvious signs of trouble. This is easy to do and surfaces most problems right away.

Additionally, the applications’ log files should be inspected. Filesystem corruption sometimes causes “illogical” errors to show up in the logs, so we recommend looking through the log files for unusual error messages.
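If the logs are large, a simple scan can help narrow things down. The following sketch greps a log directory for a few patterns that often accompany corruption; both the directory and the patterns are assumptions and should be adapted to your application.

    import re
    from pathlib import Path

    # Placeholder; point this at your application's log directory.
    LOG_DIR = Path("/srv/myapp/var/log")

    # Illustrative patterns only; extend them with errors specific to your stack.
    SUSPICIOUS = re.compile(
        r"corrupt|invalid block|bad magic|checksum|unexpected EOF",
        re.IGNORECASE,
    )

    for logfile in sorted(LOG_DIR.glob("*.log")):
        with logfile.open(errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if SUSPICIOUS.search(line):
                    print(f"{logfile}:{lineno}: {line.rstrip()}")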

If corrupted installation or configuration files are found, the best way out is usually to re-deploy the affected applications. This is easy if the deployment is controlled by an automated tool like batou or zc.buildout. Restoring installation files from backup is also an option.

Checking application data stores

Some applications bring along their own data store, for example ZODB or SOLR. The exact procedures depend on the specific software, but we can give some general suggestions here:

  • Some data stores ship with their own integrity-checking or even repair tools. For example, ZODB complains about inconsistencies during packing.
  • Some data stores can dump their contents to an external file. Dumping traverses all pieces of data, so inconsistencies are likely to be discovered (see the sketch after this list).
  • Some data stores can easily be rebuilt from scratch, for example caches, indexes, or session stores.
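As an example of such a traversal, here is a minimal sketch for a ZODB FileStorage; the path to Data.fs is a placeholder. It simply reads every object record in the storage, so structurally broken records surface as exceptions. (ZODB also ships dedicated low-level check scripts such as fstest and fsrefs.)

    # Minimal read-through of a ZODB FileStorage; the path is a placeholder.
    from ZODB.FileStorage import FileStorage

    storage = FileStorage("/srv/myapp/var/filestorage/Data.fs", read_only=True)
    records = 0
    try:
        for txn in storage.iterator():
            for record in txn:
                # Accessing the record data forces it to be read from disk;
                # structurally broken records raise an exception here.
                _ = record.oid, record.data
                records += 1
    finally:
        storage.close()
    print("read", records, "object records without low-level errors")

A clean run does not prove application-level consistency (dangling references, for instance, need a reference check), but it quickly catches records that cannot be read at all.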

Please contact our support if you discover inconsistencies and need help to recover.

Summary

We are sorry for all the trouble the filesystem corruption incident has caused. We care about customer data and will do our best to get VMs as clean as possible. With the guidelines mentioned above, it should be possible to uncover a good portion of the corrupted files that have not been identified yet.
