Zope preparing to enter Python 3 wonderland

Once upon a time there was an earl named Zope II. His prophets told him that around the year 2020 his peaceful country would suddenly be devastated: They proclaimed that with the “sunset” of Python 2 as the stable pillar of his country, insecurity and pain would invade his borders and hurt everyone living within. There seemed to be only one possible move forward to escape the disaster: Flee to the Python 3 wonderland, the source of peace and prosperity.

This was not as easy as one might think. Earl Zope II was already an old man. He had reached the stable age where changes are no longer easy to achieve, and he had many courtiers on his staff whom he needed all day.

The immigration authority of the Python 3 wonderland was very picky about the persons who requested permission to settle down. Many “updates” to Zope II and his staff were required before they eventually became “compatible” with the new country. Earl Zope II was even forced to change his name to Zope IV to show that he was ready for the Python 3 wonderland.

After much work with the immigration authorities it seemed possible for Earl Zope IV to enter; only some – but important – formalities were needed before he could be allowed to settle down and call himself a citizen of the Python 3 wonderland.

This is where the tale gets real: We need your help to release a beta version of Zope 4. The hard work seems to be done, but some polish and testing is still required to reach this goal.

We invite you to the Zope 4 Phoenix Sprint to help raise Zope 4 from the ashes! From Wednesday, 13th, until Friday, 15th of September 2017, we will sprint towards the beta release at the gocept office in Halle (Saale), Germany.

Possible sprint topics could be:

  • Work on issues and pull requests regarding the beta release.
  • Make RestrictedPython beta-ready.
  • Work on a Bootstrap-based Zope management interface (ZMI).
  • Port CMF components to Python 3 to test Zope 4 for possible issues.
  • Work on Plone to make it ready for Zope 4.
  • Try out migration strategies for ZODB content to Python 3.
  • Improve the documentation.

You are heartily invited to join us for the honour of Earl Zope IV.

Zope 2 Resurrection Sprint – Goal accomplished

The sprint days were really busy for Earl Zope II and the people helping him with the Python 3 wonderland immigration authorities.

  • Zope
    • can be installed using Python 3
    • can be started and renders some views
    • has more than 1,700 of its more than 2,300 tests running
    • has some optional dependencies left to be ported.
  • We accomplished this by:
    • Completing the port of RestrictedPython, so that a first alpha release with the new implementation could be published. (This includes about 260 commits, nearly 100 files changed, 9,000 lines of newly written code and 1,000 lines of code deleted.)
    • Porting AccessControl to Python 3. This port covers the Python code of the package.
    • Making an alpha release of DocumentTemplate which supports Python 3; it is now based purely on Python code. (Thanks, Hanno, for the porting work from C to Python!)
    • Note: There were problems porting AccessControl and DocumentTemplate to PyPy, so we left this out for now. (Volunteers welcome!)

Besides working on Zope, there was other ongoing work as well.

His majesty Earl Zope II says a warm “Thank you!” to all who helped him start his new life in the Python 3 wonderland. There is still enough work to be done before he can live there with all the comfort and stability of Python 3. See you at the next sprint!

Zope 2 Resurrection Sprint – Day 1

Welcome to the Zope 2 Resurrection Sprint in Halle (Saale), Germany. We hope you enjoyed the time since the “Last call for take off to the Python 3 wonderland”.

We have already achieved some things and discussed the following topics:

  • Use pytest as test framework and test runner for the Zope projects?
    • We decided against this suggestion as it is too much hassle for too little gain.
    • Using zope.testrunner is not too different from stdlib’s unittest.
    • zope.testrunner has the really helpful layers feature, which is heavily used in Zope and especially Plone. There is no equivalent in pytest for this concept. Switching would require rewriting the whole test infrastructure or using a tool like gocept.pytestlayer, which converts layers into pytest fixtures – but has its own problems doing this.
  • Improve the situation of continuous integration for the Zope packages:
    • Sometimes the tests of a package break because a dependency has changed its behaviour. This does not get noticed until someone makes a change to the package, which triggers Travis-CI. It is sometimes really hard to find out which change in which package caused the test failures when the tests are run only on code changes. (Current example: zope.testbrowser, which broke because of a change in WebTest.)
    • Hanno activated the cron jobs Travis-CI beta feature for most of the Zope related packages. (This currently requires clicking in the Travis-CI UI and still has to be done for most of the ZTK packages.) Currently it is not clear what happens if such a cron job fails.
    • The Jenkins of the Plone Foundation is also able to test the Zope packages: it could be configured to run them on a regular basis.
    • Tres suggests using bin/test instead of python setup.py test to run the tests, as the latter is no longer favoured by the Python Packaging Authority folks.

Last call for take off to the Python 3 wonderland

Last information for Zope 2 Resurrection Sprint

We are approaching the Zope 2 Resurrection Sprint and hope that all those who are willing to help Earl Zope II in his endeavour to port his realm have already prepared their horses and packed the necessary equipment to arrive in Halle (Saale), Germany.

To help with the preparations we have set up some means of communication:

Etherpad

In the Etherpad we have collected a fundamental set of obstacles that the immigration authority of the Python 3 wonderland sent to Earl Zope II via a mounted messenger. If there are additional problems we need to solve with the immigration or other authorities, feel free to point those out in the pad.

IRC Channel

During the sprint we will have an owl waiting for messages in the #sprint channel on irc.freenode.net, so additional information and questions can be placed there.

General Schedule

In general the gates of the gocept manor in Halle (Saale) are open from 8:00 till 18:00 during the sprint for the squires to help Earl Zope II. There will be some refreshments in the morning (8:00 – 9:00) and during lunch time (12:00 – 13:00) in order to keep everyone happy and content.

Apart from that, there will be some fixed points in time to meet:

  • Monday 2017-05-01
    • 19:00 CEST, pre-sprint get-together for early arrivals at Anny Kilkenny. Attention: There will be a large political demonstration in Halle which might affect your arrival, so take that into consideration.
  • Tuesday 2017-05-02
    • 9:00 CEST, official welcome and sprint planning afterwards.
    • 16:30-17:30 CEST, Discussion: TBD
    • 18:00 CEST, guided tour through the city of Halle, meeting point
    • 19:30 CEST, dinner and get-together at Wenzels Bierstuben, location, separate bills
  • Wednesday 2017-05-03
    • 9:00 CEST, daily meeting and review
    • 16:30-18:00 CEST, Discussion: TBD
    • 19:00 CEST, BBQ evening in the lovely garden at gocept manor
  • Thursday 2017-05-04
  • Friday 2017-05-05
    • 9:00 CEST, daily meeting and review
    • 13:00 CEST, sprint closing session with review and possibility to present lightning talks of your projects.

We are looking forward to the sprint and hope to overcome the remaining migration problems of Earl Zope II.

Towards RestrictedPython 3

The biggest blocker for porting Zope to Python 3 is RestrictedPython.

What is RestrictedPython?

It is a library used by Zope to restrict Python code at the instruction level to a bare minimum of trusted functionality. It parses the code, filters out disallowed constructs (such as open()), and adds wrappers around each access to attributes or items. These wrappers can be used by Zope to enforce access control on objects in the ZODB without requiring manual checks in the code.
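To make this concrete, here is a minimal sketch of restricted execution from the caller’s side. compile_restricted() and safe_builtins are part of RestrictedPython’s public API; the snippet itself is illustrative, not taken from Zope:

import ast  # noqa: only needed by RestrictedPython internally
from RestrictedPython import compile_restricted
from RestrictedPython.Guards import safe_builtins

# Compile untrusted source; disallowed constructs are rejected here.
code = compile_restricted("result = 6 * 7", filename='<inline>', mode='exec')

# Execute with a minimal, guarded set of builtins instead of the real ones.
restricted_globals = {'__builtins__': safe_builtins}
exec(code, restricted_globals)
print(restricted_globals['result'])  # 42

# Access to underscore names is one such disallowed construct:
compile_restricted("x = obj._secret", '<inline>', 'exec')  # raises SyntaxError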

Why is RestrictedPython needed?

Zope allows writing Python code in the Zope management interface (ZMI) using a web browser (“through the web” aka TTW). This code is stored in the ZODB. The code is executed on the server. It would be dangerous to allow a user to execute arbitrary code with the rights of the web server process. That’s why the code is filtered through RestrictedPython to make sure this approach is not a complete security hole.

RestrictedPython is used in many places in Zope as part of its security model. An experiment at the Zope Resurrection Sprint showed that it would be really hard to create a Zope version which does not need RestrictedPython, as that would mean removing the TTW approach.

What is the problem porting RestrictedPython to Python 3?

RestrictedPython relies on the compiler package of the Python standard library. This package no longer exists in Python 3 because it was poorly documented, unmaintained and out of sync with the compiler Python uses itself. (There are whisperings that it was only kept because of Zope.)

Since Python 2.6 there has been a new ast module in the Python standard library, but it is not a direct replacement for compiler, and there is no documentation on how to replace compiler with ast.
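To give an idea of the direction such a port can take – this is a simplified sketch, not RestrictedPython’s actual code – the ast module allows walking and rewriting the parse tree with a NodeTransformer and rejecting disallowed constructs:

import ast

class RestrictingTransformer(ast.NodeTransformer):
    # Reject access to underscore attributes, similar in spirit to the
    # checks a compiler-to-ast port has to implement for each construct.
    def visit_Attribute(self, node):
        if node.attr.startswith('_'):
            raise SyntaxError('access to private attributes is not allowed')
        return self.generic_visit(node)

tree = ast.parse("value = obj.public + obj._secret")
RestrictingTransformer().visit(tree)  # raises SyntaxError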

What is the current status?

Several people have already worked on a Python 3 branch of RestrictedPython – at various Plone and Zope sprints and mostly in their spare time – to find out how this package works and to port some of its functionality as a proof of concept. It seems to be possible to use ast as the new base for RestrictedPython. The external API of RestrictedPython can probably be kept stable, but packages using or extending some of the internals of RestrictedPython might need to be updated as well.

What are the next steps?

Many Zope and Plone packages depend on RestrictedPython directly (like AccessControl or Products.ZCatalog) or indirectly (like Products.PythonScripts, plone.app.event or even plone.app.dexterity).

Once RestrictedPython has been successfully tested against these packages, porting them can start. There is a nice list of all Plone 5.1 dependencies and their status regarding Python 3.

Our goal is to complete the port of RestrictedPython by the end of March 2017. That opens up the possibility of guiding Zope into the Python 3 wonderland by the end of 2017. This is ambitious, especially as the work is done in spare time besides the daily customer work. You can help us by contributing pull requests on GitHub or by reviewing them.

We are planning two Zope sprints in spring and autumn 2017. Furthermore we are grateful for each and every kind of support.

Zope Resurrection Part 1 – Reanimation

Now we are helping Zope into the Python 3 wonderland: Almost 20 people have started with the reanimation of Zope. We are working mostly on porting important dependencies of Zope to Python 3.

Zope 4 is now based on WSGI by default. Thanks to Hanno, who invested much time to make the WSGI story of Zope much more streamlined.

We found out that the ZTK (Zope Toolkit) as a whole is no longer used by any of the projects built on its packages (Zope, Grok, and the now-dead Bluebream). It should be kept, however, to test compatibility between the packages inside the ZTK.

Reliable file updates with Python

Programs need to update files. Although most programmers know that unexpected things can happen while performing I/O, I often see code that has been written in a surprisingly naïve way. In this article, I would like to share some insights on how to improve I/O reliability in Python code.

Consider the following Python snippet. Some operation is performed on data coming from and going back into a file:

with open(filename) as f:
   input = f.read()
output = do_something(input)
with open(filename, 'w') as f:
   f.write(output)

Pretty simple? Probably not as simple as it looks at first glance. I often debug applications that show strange behaviour on production servers. Here are examples of failure modes I have seen:

  • A runaway server process spills out huge amounts of logs and the disk fills up. write() raises an exception right after truncating the file, leaving the file empty.
  • Several instances of our application happen to run in parallel. After they have finished, the file contents are garbage because the output of multiple instances got intermingled.
  • The application triggers some follow-up action after completing the write. Seconds later, the power goes off. After we have restarted the server, we see the old file contents again. The data already passed to other applications does not correspond to what we see in the file anymore.

Nothing of what follows is really new. My goal is to present common approaches and techniques to Python developers who are less experienced in system programming. I will provide code examples to make it easy for developers to incorporate these approaches into their own code.

What does “reliability” mean anyway?

In the broadest sense, reliability means that an operation is performing its required function under all stated conditions. With regard to file updates, the function in question is to create, replace or extend the contents of a file. It might be rewarding to seek inspiration from database theory here. The ACID properties of the classic transaction model will serve as guidelines to improve reliability.

To get started, let’s see how the initial example can be rated against the four ACID properties:

  • Atomicity requires that a transaction either succeeds or fails completely. In the example shown above, a full disk will likely result in a partially written file. Additionally, if other programs read the file while it is being written, they get a half-finished version even in the absence of write errors.
  • Consistency denotes that updates must bring the system from one valid state to another. Consistency can be subdivided into internal and external consistency: Internal consistency means that the file’s data structures are consistent. External consistency means that the file’s contents are aligned with other data related to it. In this example, it is hard to reason about consistency since we don’t know enough about the application. But since consistency requires atomicity, we can say at least that internal consistency is not guaranteed.
  • Isolation is violated if running transactions concurrently yields different results from running the same transactions sequentially. It is clear that the code above has no protection against lost updates or other isolation failures.
  • Durability means that changes need to be permanent. Before we signal success to the user, we must be sure that our data hits non-volatile storage and not just a write cache. Perhaps the code above has been written with the assumption in mind that disk I/O takes place immediately when we call write(). This assumption is not warranted by POSIX semantics.

Use a database system if you can

If we were able to gain all four ACID properties, we would have come a long way towards increased reliability. But this requires significant coding effort. Why reinvent the wheel? Most database systems already have ACID transactions.

Reliable data storage is a solved problem. If you need reliable storage, use a database. Chances are that you will not do it yourself as well as those who have been working on it for years, if not decades. If you do not want to set up a “big” database server, you can use SQLite, for example. It has ACID transactions, it’s small, it’s free, and it’s included in Python’s standard library.
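For illustration, here is a minimal sketch of an ACID-transactional update with the sqlite3 module from the standard library (the database file and table are made up for this example):

import sqlite3

conn = sqlite3.connect('app.db')
with conn:  # commits on success, rolls back on any exception
    conn.execute(
        'CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)')
    conn.execute(
        'INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)',
        ('answer', '42'))
conn.close()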

The article could finish here. But there are valid reasons not to use a database. They are often tied to file format or file location constraints, neither of which is easily controllable with database systems. Reasons include:

  • we must process files generated by other applications, which are in a fixed format or at a fixed location
  • we must write files for consumption by other applications (and the same restrictions apply)
  • our files must be human-readable or human-editable

…and so on. You get the point.

If we set out to implement reliable file updates on our own, there are some programming techniques to consider. In the following, I will present four common patterns of performing file updates. After that, I will discuss what steps can be taken to establish ACID properties with each file update pattern.

File update patterns

Files can be updated in a multitude of ways, but I see at least four common patterns. These will serve as a basis for the rest of this article.

Truncate-Write

This is probably the most basic pattern. In the following example, hypothetical domain model code reads data, performs some computation, and re-opens the existing file in write mode:

with open(filename, 'r') as f:
   model.read(f)
model.process()
with open(filename, 'w') as f:
   model.write(f)

A variant of this pattern opens the file in read-write mode (the “plus” modes in Python), seeks to the start, issues an explicit truncate() call and rewrites the contents:

with open(filename, 'a+') as f:
   f.seek(0)
   model.input(f.read())
   model.compute()
   f.seek(0)
   f.truncate()
   f.write(model.output())

An advantage of this variant is that we open the file only once and keep it open the whole time. This simplifies locking, for example.

Write-Replace

Another widely used pattern is to write new contents into a temporary file and replace the original file after that:

with tempfile.NamedTemporaryFile(
      'w', dir=os.path.dirname(filename), delete=False) as tf:
   tf.write(model.output())
   tempname = tf.name
os.rename(tempname, filename)

This method is more robust against errors than the truncate-write method. See below for a discussion of atomicity and consistency properties. It is used by many applications.

These first two patterns are so common that the ext4 filesystem in the Linux kernel even detects them and fixes some reliability shortcomings automatically. But don’t depend on it: you are not always using ext4, and the administrator might have disabled this feature.

Append

The third pattern is to append new data to an existing file:

with open(filename, 'a') as f:
   f.write(model.output())

This pattern is used for writing log files and other cumulative data processing tasks. Technically, its outstanding feature is its extreme simplicity. An interesting extension is to perform append-only updates during regular operation and to reorganize the file into a more compact form periodically.
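As a sketch of this reorganization idea (parse_records() and write_compact() are hypothetical application hooks, not part of any library), the compaction step can reuse the write-replace pattern from the previous section:

import os

def compact(filename):
    # Hypothetical hooks: read all appended records, then write the
    # compact representation aside and replace the log in one step.
    state = parse_records(filename)
    with open(filename + '.tmp', 'w') as f:
        write_compact(f, state)
    os.rename(filename + '.tmp', filename)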

Spooldir

Here we treat a directory as logical data store and create a new uniquely named file for each record:

with open(unique_filename(), 'w') as f:
   f.write(model.output())

This pattern shares its cumulative nature with the append pattern. A big advantage is that we can put a small amount of metadata into the file name. This can be used, for example, to convey information about the processing status. A particularly clever implementation of the spooldir pattern is the maildir format. Maildirs use a naming scheme with additional subdirectories to perform update operations in a reliable and lock-free way. The md and gocept.filestore libraries provide convenient wrappers for maildir operations.
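The unique_filename() function used above is left undefined; a maildir-inspired sketch could combine a timestamp, the process ID, a per-process counter, and the hostname to make collisions practically impossible:

import itertools
import os
import socket
import time

_counter = itertools.count()

def unique_filename():
    # Timestamp + PID + counter + hostname, in the spirit of maildir.
    return '{:.6f}.P{}Q{}.{}'.format(
        time.time(), os.getpid(), next(_counter), socket.gethostname())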

If your file name generation does not guarantee unique results, you can even demand that the file be actually new. Use the low-level os.open() call with the proper flags:

fd = os.open(filename, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
with os.fdopen(fd, 'w') as f:
   f.write(...)

After opening the file with O_EXCL, we use os.fdopen to convert the raw file descriptor into a regular Python file object.

Applying ACID properties to file updates

In the following, I will try to enhance the file update patterns. Let’s see what we can do to meet each ACID property in turn. I will keep this as simple as possible, since we are not planning to write a complete database system. Please note that the material presented in this section is not exhaustive, but it may give you a good starting point for your own experimentation.

Atomicity

The write-replace pattern gives you atomicity for free since the underlying os.rename() function is atomic. This means that at any given point in time, any process sees either the old or the new file. This pattern has a natural robustness against write errors: if the write operation triggers an exception, the rename operation is never performed and thus we are not in danger of overwriting a good old file with a damaged new one.

The append pattern is not atomic by itself, because we risk appending incomplete records. But there is a trick to make updates appear atomic: annotate each written record with a checksum. When reading the log later on, discard all records that do not have a valid checksum. This way, only complete records will be processed. In the following example, an application makes periodic measurements and appends a one-line JSON record to a log each time. We compute a CRC32 checksum of the record’s byte representation and append it to the same line:

import json
import random
import time
import zlib

with open(logfile, 'ab') as f:
    for i in range(3):
        measure = {'timestamp': time.time(), 'value': random.random()}
        record = json.dumps(measure).encode()
        checksum = '{:8x}'.format(zlib.crc32(record)).encode()
        f.write(record + b' ' + checksum + b'\n')

This example code simulates the measurements by generating a few random values.

$ cat log
{"timestamp": 1373396987.258189, "value": 0.9360123151217828} 9495b87a
{"timestamp": 1373396987.25825, "value": 0.40429005476999424} 149afc22
{"timestamp": 1373396987.258291, "value": 0.232021160265939} d229d937

To process the log file, we read one record per line, split off the checksum, and compare it against the record:

import json
import zlib

with open(logfile, 'rb') as f:
    for line in f:
        record, checksum = line.strip().rsplit(b' ', 1)
        if checksum.decode() == '{:8x}'.format(zlib.crc32(record)):
            print('read measure: {}'.format(json.loads(record.decode())))
        else:
            print('checksum error for record {}'.format(record))

Now we simulate a truncated write by chopping the last line:

$ cat log
{"timestamp": 1373396987.258189, "value": 0.9360123151217828} 9495b87a
{"timestamp": 1373396987.25825, "value": 0.40429005476999424} 149afc22
{"timestamp": 1373396987.258291, "value": 0.23202

When the log is read, the last incomplete line is rejected:

$ read_checksummed_log.py log
read measure: {'timestamp': 1373396987.258189, 'value': 0.9360123151217828}
read measure: {'timestamp': 1373396987.25825, 'value': 0.40429005476999424}
checksum error for record b'{"timestamp": 1373396987.258291, "value":'

The checksummed log record approach is used by a large number of applications including many database systems.

Individual files in the spooldir can likewise feature a checksum in each file. Another, probably easier, approach is to borrow from the write-replace pattern: first write the file aside and move it to its final location afterwards. Devise a naming scheme that protects work-in-progress files from being processed by consumers. In the following example, all file names ending with .tmp are ignored by readers and are thus safe to use during write operations:

newfile = generate_id()
with open(newfile + '.tmp', 'w') as f:
   f.write(model.output())
os.rename(newfile + '.tmp', newfile)

Finally, truncate-write is non-atomic. I am sorry that I am not able to offer you an atomic variant. Right after the truncate operation, the file is empty and no new content has been written yet. If a concurrent program reads the file now or, worse yet, an exception occurs and our program gets aborted, we see neither the old nor the new version.

Consistency

Most things I have said about atomicity apply to consistency as well. In fact, atomic updates are a prerequisite for internal consistency. External consistency means updating several files in sync. As this cannot easily be done atomically, lock files can be used to ensure that read and write access do not interfere. Consider a directory where files need to be consistent with each other. A common pattern is to designate a lock file which controls access to the whole directory.

Example writer code:

with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
   fcntl.flock(lockfile, fcntl.LOCK_EX)
   model.update(dirname)

Example reader code:

with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
   fcntl.flock(lockfile, fcntl.LOCK_SH)
   model.readall(dirname)

This method works only if we have control over all readers. Since only one writer may be active at a time (the exclusive lock blocks all shared locks), the scalability of this method is limited.

To take it one step further, we can apply the write-replace pattern to whole directories. This involves creating a new directory for each update generation and changing a symlink once the update is complete. For example, a mirroring application maintains a directory of tarballs together with an index file, which lists file name, file size, and a checksum. When the upstream mirror gets updated, it is not enough to implement an atomic file update for every tarball and the index file in isolation. Instead, we need to flip both the tarballs and the index file at the same time to avoid checksum mismatches. To solve this problem, we maintain a subdirectory for each generation and symlink the active generation:

mirror
|-- 483
|   |-- a.tgz
|   |-- b.tgz
|   `-- index.json
|-- 484
|   |-- a.tgz
|   |-- b.tgz
|   |-- c.tgz
|   `-- index.json
`-- current -> 483

Here, the new generation 484 is in the process of being updated. When all tarballs are present and the index file is up to date, we can switch the current symlink atomically: since os.symlink() refuses to overwrite an existing link, we create the new symlink under a temporary name and os.rename() it over current. Other applications always see either the complete old or the complete new generation. It is important that readers os.chdir() into the current directory and refer to files without their full path names. Otherwise, there is a race condition when a reader first opens current/index.json and then opens current/a.tgz while the symlink target changes in between.
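A short sketch of the switch-over, using the directory layout from above:

import os

def activate(mirror_dir, generation):
    # Create the new symlink aside, then atomically rename it over
    # 'current'; os.rename() replaces the old link in a single step.
    tmp = os.path.join(mirror_dir, 'current.tmp')
    os.symlink(str(generation), tmp)
    os.rename(tmp, os.path.join(mirror_dir, 'current'))

activate('mirror', 484)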

Isolation

Isolation means that concurrent updates to the same file are serializable – there exists a serial schedule that gives the same results as the parallel schedule actually performed. “Real” database systems use advanced techniques like MVCC to maintain serializability while allowing for a great degree of parallelism. On our own, we had better use locks to serialize file updates.

Locking truncate-write updates is easy. Just acquire an exclusive lock prior to all file operations. The following example code reads an integer from a file, increments it, and updates the file:

def update():
   with open(filename, 'r+') as f:
      fcntl.flock(f, fcntl.LOCK_EX)
      n = int(f.read())
      n += 1
      f.seek(0)
      f.truncate()
      f.write('{}\n'.format(n))

Locking updates that use the write-replace pattern can be tricky. Using a lock the same way as in truncate-write can lead to update conflicts. A naïve implementation could look like this:

def update():
   with open(filename) as f:
      fcntl.flock(f, fcntl.LOCK_EX)
      n = int(f.read())
      n += 1
      with tempfile.NamedTemporaryFile(
            'w', dir=os.path.dirname(filename), delete=False) as tf:
         tf.write('{}\n'.format(n))
         tempname = tf.name
      os.rename(tempname, filename)

What is wrong with this code? Imagine two processes compete to update a file. The first process just goes ahead, but the second process is blocked in the fcntl.flock() call. When the first process replaces the file and releases the lock, the already open file descriptor in the second process now points to a “ghost” file (not reachable by any path name) with the old contents. To avoid this conflict, we must check that the file we have open is still the same after returning from fcntl.flock(). So I have written a new LockedOpen context manager to replace the built-in open context manager. It ensures that we actually open the right file:

import fcntl
import os
import tempfile


class LockedOpen(object):

    def __init__(self, filename, *args, **kwargs):
        self.filename = filename
        self.open_args = args
        self.open_kwargs = kwargs
        self.fileobj = None

    def __enter__(self):
        f = open(self.filename, *self.open_args, **self.open_kwargs)
        while True:
            fcntl.flock(f, fcntl.LOCK_EX)
            fnew = open(self.filename, *self.open_args, **self.open_kwargs)
            if os.path.sameopenfile(f.fileno(), fnew.fileno()):
                # The locked file is still reachable under its name.
                fnew.close()
                break
            else:
                # The file was replaced while we waited; retry with it.
                f.close()
                f = fnew
        self.fileobj = f
        return f

    def __exit__(self, _exc_type, _exc_value, _traceback):
        self.fileobj.close()


def update():
    with LockedOpen(filename, 'r+') as f:
        n = int(f.read())
        n += 1
        with tempfile.NamedTemporaryFile(
                'w', dir=os.path.dirname(filename), delete=False) as tf:
            tf.write('{}\n'.format(n))
            tempname = tf.name
        os.rename(tempname, filename)

Locking append updates is as easy as locking truncate-write updates: acquire an exclusive lock, append, done. Long-running processes which keep a file permanently open may need to release the lock between updates to let others in.
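A minimal sketch of a locked append (closing the file at the end of the with block also releases the lock):

import fcntl

def append_record(logfile, data):
    # Serialize concurrent appenders with an exclusive lock.
    with open(logfile, 'a') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write(data)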

The spooldir pattern has the elegant property that it does not require any locking. Instead, it depends on a clever naming scheme and robust unique file name generation. The maildir specification is a good example of a spooldir design. It can easily be adapted to other cases that have nothing to do with mail.

Durability

Durability is a bit special because it depends not only on the application, but also on the OS and hardware configuration. In theory, we can assume that os.fsync() or os.fdatasync() calls do not return until data has reached permanent storage. In practice, we may run into several problems: we may be facing incomplete fsync implementations or awkward disk controller configurations that never give any persistence guarantee. A talk from a MySQL developer goes into great detail about what can go wrong. Some database systems like PostgreSQL even offer a choice of persistence mechanisms so that the administrator can select the best-suited one at runtime. The poor man’s option, though, is to just use os.fsync() and hope that it has been implemented correctly.

With the truncate-write pattern, we have to issue an fsync after finishing the write operations but before closing the file. Note that there is usually another level of write caching involved: the glibc buffer holds back writes inside the process even before they are passed to the kernel. To empty the glibc buffer as well, we have to flush() it before fsync’ing:

with open(filename, 'w') as f:
   model.write(f)
   f.flush()
   os.fdatasync(f)

Alternatively, you can open the file with buffering disabled (e.g., open(filename, 'wb', buffering=0)); note that Python’s -u flag only makes the standard streams unbuffered and does not affect regular file I/O.

I prefer os.fdatasync() over os.fsync() most of the time to avoid synchronous metadata updates (ownership, size, mtime, …). Metadata updates can result in seeky disk I/O, which slows things down quite a bit.

Applying the same trick to write-replace style updates is only half of the story. We make sure that the newly written file has been pushed to non-volatile storage before replacing the old file, but what about the replace operation itself? We have no guarantee that the directory update is persisted right away. There are lengthy discussions on how to sync a directory update on the net, but in our case (old and new file are in the same directory) we can get away with this rather simple solution:

os.rename(tempname, filename)
dirfd = os.open(os.path.dirname(filename), os.O_DIRECTORY)
os.fsync(dirfd)
os.close(dirfd)

We open the directory with the low-level os.open() call (Python’s built-in open() does not support opening directories) and perform an os.fsync() on the directory’s file descriptor.

Persisting append updates is again quite similar to what I have said about truncate-write.
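As a sketch, a durable append follows the same recipe: flush the application-level buffer and sync before the file is closed (logfile and record are assumed to be defined elsewhere):

with open(logfile, 'a') as f:
    f.write(record)
    f.flush()          # empty the application-level buffer
    os.fdatasync(f)    # push the data to non-volatile storage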

The spooldir pattern has the same directory sync problems as the write-replace pattern. Fortunately, the same solution applies here as well: first sync the file, then sync the directory.
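Put together, a sketch of a durable spooldir write could look like this (the path is assumed to contain a directory component):

import os

def write_durable(path, data):
    # First make the file contents durable ...
    with open(path, 'w') as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # ... then make the directory entry durable as well.
    dirfd = os.open(os.path.dirname(path) or '.', os.O_DIRECTORY)
    os.fsync(dirfd)
    os.close(dirfd)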

Conclusion

It is possible to update files reliably. I have shown that all four ACID properties can be met. The code examples presented above may serve as a toolbox. Pick the programming techniques that match your needs best. At times, you don’t need all four ACID properties but only one or two. I hope that this article helps you to make an informed decision about what to implement and what to leave out.