Running tests using gocept.selenium on Travis-CI

Travis-CI is a free hosted continuous integration platform for the open source community. It integrates well with GitHub, so each push to a project triggers a run of the project’s tests.

gocept.selenium is a Python package our company has developed as a test-friendly Python API for Selenium, which allows running tests in a browser.

Travis-CI uses YAML files to configure the test run. I found only little documentation on how to run Selenium tests on Travis-CI, but it is straightforward. The following YAML file is taken from a personal project of mine (I simplified it a bit for this blog post):

[code]
language: python
python:
- 2.6
before_install:
- "export DISPLAY=:99.0"
- "sh -e /etc/init.d/xvfb start"
- "wget http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar"
- "java -jar selenium-server-standalone-2.31.0.jar &"
- "export GOCEPT_SELENIUM_BROWSER='*firefox'"
install:
- python bootstrap.py
- bin/buildout
script:
- bin/test
[/code]

Explanation:

  • Lines 1 – 4: My project currently runs only on Python 2.6, but other Python versions will work as well.
  • Lines 5, 6: Firefox needs a running X server, so we start it first, as it takes some seconds to launch. See the Travis-CI documentation, too.
  • Lines 7, 8: The Selenium server does not seem to be installed by default, so we download and launch it.
  • Line 9: Tell gocept.selenium to use Firefox to run the tests. (Note: to use the new webdriver API in the upcoming version 2 of gocept.selenium, you have to set other environment variables.)
  • Lines 10 – 14: Install the project and run the tests as usual. (The example uses zc.buildout to do this.)

Note: Although I use the Firefox that is installed by default on the Travis-CI machine, I have not yet found out which version it is.

PyCon 2013 report

PyCon 2013 was an excellent conference bringing together Python’s vast, diverse, and technically excellent community. I had the opportunity to attend the whole conference, including the sprint days.

Magnitude

The size of the community is well reflected by the number of attendees that PyCon US attracts: the limit of 2,500 attendees was reached on 2013-02-02, about one month before the conference. That should be about 500 attendees more than in 2012, when the organizers exceeded their planned capacity of 1,500 and ended up with 2,000, IIRC.

It was very nice to see that the organization is growing along with the task: everything ran very smoothly, with a lot of detailed changes since last year, some for better, some for worse. (Remember: if you want to improve, you need to change, and that means you need to accept setbacks to learn.)

Diversity

Yes, there was this FUBAR situation regarding a “code of conduct” violation. I think too many people who have not been at PyCon have already contributed to the turmoil, so I’ll refrain from commenting.

I was happy to hear that PyLadies (and everybody else working on the diversity of the community) could see their efforts showing excellent results: around 20% of all participants were women (or girls). On the first day of the conference I had the impression that more women were around than usual at tech conferences.

But not only that, we also had:

  • a wide range of ages: from kids, to students, to way more senior people
  • very business-like and very relaxed, alternative people (Plone RV, anyone?)
  • visitors from all over the world

I had to ponder a bit why this actually makes me happy: the diversity shows me that what we do is important to everyone and does not need to be either obscure and geeky or shirt-and-tie business.

We can have a community where you can be geeky and nerdy, do business, and feel like a human being. How great is that? Conferences always tend to be very intense environments, somewhat “from outer space”. Combined with travelling overseas for almost two weeks, having a human environment just makes it so worthwhile and a bit more sustainable.

To everybody who did not personally have an absolutely great experience: I’m empathetic, and I hope the next PyCon will be better for you. A lot has been said about the code of conduct, and the organizers definitely pay a lot of attention to it. Nevertheless: 2,500 people stuffed into a few rooms over almost a week will cause friction here and there. If I encounter a similar situation myself, I will hopefully be able to apply some of my experience as a bystander and: stay calm, be friendly, and help defuse situations.

Technical excellence

There is hardly a better place than PyCon to find so many sophisticated technical people talking about programming in one spot. Maybe DEFCON, or USENIX, or other, more orthogonally oriented venues. But for practicality, this is just it.

I recommend you visit pyvideo.org and go through the recorded videos of all sessions. It’s always a good idea to listen to what Raymond Hettinger has to say. And Guido, of course.

Sprinting

I felt very productive during the sprints: I started out sitting in a room with Nate Aune, Jeff Forcier, and some others, talking about deployment. I worked a bit on our deployment utility batou, trying to soften some rough edges and to gather feedback from others.

However, I also had the PyPI mirror client software on my radar. As we operate one of the official mirrors (the F mirror), I was fed up with the constant breakage that the existing pep381client experienced everywhere. I sat down, refactored, and lo and behold! a bandersnatch appeared. This is a full rewrite that can be used with the existing mirror data and is much more reliable and, in case of error, easier to debug and recover.

Sponsoring

gocept was also a silver sponsor for PyCon. We had already sponsored PyCon in 2012, but this year we:

  • did not insert more stuff into the attendee bag (it’s way too heavy already anyways)
  • did set up a booth to become approachable and get to talk to people

Our product (the Flying Circus) is in use by consulting clients but still on its way to becoming a product that you can use by just registering and providing payment details. Operations as a service is a very dynamic space today, and we had some good opportunities to explain what we envision and where we think existing IaaS and PaaS models are aiming at the wrong thing. If you’re interested in this kind of thing, visit our homepage and sign up for our newsletter and we’ll keep you updated.

PyCon has been a very sponsor-friendly place, especially for small businesses. It’s always a hassle to bring a lot of stuff halfway around the globe, but the environment was perfect to just bring a banner and some flyers and talk to people strolling around.

2014

So next year, PyCon US will actually be PyCon North America, as the conference moves to Montreal. Besides making this a much shorter trip, I’m also looking forward to some new cultural impressions.

How we organize large-scale roll-outs

In the coming week we will deploy an extensive OS update to our production environment, which currently consists of 41 physical hosts running 195 virtual machines.

Updates like this are prepared very carefully in many small steps, using development and staging setups that exactly mirror the environment of our production systems in the data center.

Nevertheless, we learned to expect the unexpected when deploying to our production environment. This is why we established the one/few/many paradigm for large updates. The remainder of this post talks about our scheduling mechanism to determine which machines are updated at what point in time.

Automated maintenance scheduling

The Flying Circus configuration management database (CMDB) keeps track of the times that are acceptable to each customer for scheduled maintenance. When a machine determines that a particular automated activity will be disruptive (e.g. because it makes the system temporarily unstable or requires a reboot), it requests a maintenance period from the CMDB based on the customer’s preferences and the estimated duration of the downtime. Customers are then automatically notified about what will happen at what time.
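
As a rough illustration of this mechanism, the logic on each machine might look like the following sketch. All names here are hypothetical; the actual CMDB API is internal to our platform.

def schedule_disruptive_activity(cmdb, machine, activity):
    # Hypothetical sketch of the scheduling logic described above; the
    # real CMDB API is internal, so names and signatures are made up.
    if not activity.is_disruptive():
        activity.run()  # non-disruptive work needs no maintenance window
        return
    window = cmdb.request_maintenance_window(
        machine=machine,
        estimated_duration=activity.estimated_duration(),
        preferences=cmdb.customer_preferences(machine))
    cmdb.notify_customer(machine, window)  # automatic notification
    activity.run_at(window.start)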

This alone is too little to make a large update that affects all machines (and thus all customers) but it’s the mechanical foundation of the next step.

Maintenance weeks

When we roll out a large update, we like to add extra padding for errors, and thus we invented the “maintenance week”. For this, we can ask the CMDB to proactively schedule relatively large maintenance windows for all machines in a given pattern.

Here’s a short version of how this schedule is built when an administrator pushes the “Schedule maintenance week” button in our CMDB (all times in UTC):

  1. Monday 09:00 – automation management, monitoring, and binary package compilation get updated
  2. Monday 13:00 – the first router and one storage server are updated
  3. Monday 17:00 – internal test machines (our litmus machines) and a small but representative set of customer machines that are marked as test environments get updated
  4. Tuesday 17:00 – the remainder of customer test machines, up to 5% of untested production VMs, and 20% of the storage servers are updated
  5. Wednesday 17:00 – 30% of the production VMs get updated and 30% of the storage servers are updated
  6. Thursday 17:00 – the remaining production VMs and storage servers get updated
  7. Saturday 09:00 – KVM hosts are updated and rebooted
  8. Saturday 13:00 – the second router is updated
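
To make the ramp-up concrete, here is an illustrative sketch of how the growing batches of production VMs could be computed. The cumulative fractions are read off the schedule above (5%, then roughly 35%, then everything); the real CMDB scheduler is considerably more involved.

# Illustrative only: which production VMs newly enter the schedule on
# each day, following the one/few/many ramp-up described above.
RAMP_UP = [('Tuesday', 0.05), ('Wednesday', 0.35), ('Thursday', 1.0)]

def batches(production_vms):
    # Yield (day, batch) pairs; each batch holds the VMs that are
    # newly scheduled on that day. Fractions are cumulative.
    done = 0
    for day, cumulative in RAMP_UP:
        upto = int(round(len(production_vms) * cumulative))
        yield day, production_vms[done:upto]
        done = upto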

Once the schedule has been established, customers are informed by email about the assigned slots. An internal cross-check ensures that all machines in the affected location do have a window assigned for this week.

Maintenance week schedule

This procedure causes the number of machines that get updated to rise from Monday (22 machines) to Thursday (about 100 machines). Any problems we find on Monday can be fixed on a small number of machines, and a bugfix can be provided to avoid the issue completely on later days.

However, if you read the list carefully you are probably asking yourself: Why are customer VMs without tests updated early? Doesn’t this force customers without tests to experience outages more heavily?

Yes. And in our opinion this is a good thing. First, in the earlier phases we have smaller numbers of machines to deal with: any breakage that occurs on Monday or Tuesday can be dealt with more quickly than unexpected breakage on Wednesday or Thursday, when many machines are updated at once. Second, if your service is critical, then you should feel the pain of not having tests (similar to the pain you experience if you don’t write unit tests and try to refactor). We believe that “herd immunity” gives you a false sense of security; we would rather have unexpected errors occur early and clearly visible, so they can be approached with a good fix, instead of hiding them as long as possible.

We’re looking forward to our updates next week. Obviously we’re preparing for unexpected surprises, but what will they have in store for us this time?

We also appreciate feedback: How do you prepare updates for many machines? Is there anything we’re missing? Anything that sounds like a good idea to you? Let us know – leave a comment!

Overcoming Self-organization Blocks in Agile Teams

At the SQD2013 conference, Andrea Provaglio (@andreaprovaglio) talked about the social aspects of software development. He says software development is intangible, collaborative, and heuristic, while our education prepares us for linear, standardized, and predictable processes. (Software development) teams are systems. Systems self-organize if they can. (A plant is a system, and it self-organizes: it turns towards the light.)

Self-organization (in a software development team) cannot be enforced, it can only be enabled. The team cannot simply be empowered to self-organize: power can be given, and power can be taken away. The team needs the room to emancipate itself: emancipation cannot be taken away. Nevertheless, self-organization needs leadership to align the team with the business goals and to share existing constraints. Leading is different from commanding, though.

There are indicators that self-organization is not happening:

  • People build fences. They point out others’ mistakes, like to see others fail, take sides against something or somebody, etc.
  • There is a blaming culture; mistakes are considered bad.
  • Judgement is combined with superiority and has an emotional quality to it.
  • Relationships between people are parent-child-like instead of adult-adult. If people are told what to do all the time, they can hardly self-organize.
  • People feel they are paddling against the current.

Watch out for those indicators.

Andrea redefines self-organization as “collective behavior that manifests itself when the individuals take personal responsibility in co-creating a shared future”. I like that.

Software Quality Days 2013

SQD took place in Vienna from January 15 to 17, 2013. My first impressions were: Scrum is everywhere, and open source does not play a major role.

Most people I met came from rather large organizations. They mostly face different problems than we do: running one large project with multiple teams. We, instead, have one development team running multiple projects.

However, the topics at the conference were not too far off: we at gocept are building high-quality software, too.

  • Andrea Provaglio (@andreaprovaglio) talked about Overcoming Self-organization Blocks in Agile Teams: why do we need self-organization, how can it be enabled and what can prevent it.
  • In contrast to Andrea Provaglio, who says that software is intangible, Tom Gilb (@imtomgilb) has a completely different opinion: since you can quantify anything (including software), it cannot be intangible. In his talk Quality Quantification for Quality Engineering he stated that all quality requirements need to be quantified (he has actually been stating this for years now). When you want to verify whether a certain quality level was reached (or not), you must quantify it.
  • Sander Hoogendoorn (@aahoogendoorn) talked about agile anti-patterns, like “Dogmagile”, “Crusader Agile”, “Scrumdementalism” and others. His views pretty much match mine: don’t be dogmatic, do what works.
  • Professional speaker Hermann Scherer (@hermannscherer) gave his talk Jenseits vom Mittelmaß (roughly, “Beyond Mediocrity”). It is about being competitive in today’s environment. Very inspiring.
  • On the downside, there was Rainer Stropek’s talk about Monitoring and Root Cause Analysis in Dynamic Cloud Infrastructures. While there was some truth in it, it was extremely high-level and basically a road show for Microsoft Azure: #fail.

The conference itself was well organized. The call for papers for SQD2014 is open until May 31, 2013. Maybe next time we can bring in a breeze of small business and open source.

News from the toolbox: gocept.selenium and our plans for its future

For a couple of years, we at gocept have been developing a Python library, gocept.selenium, whose goal is to integrate testing of web sites in real browsers with the Python unittest framework. There are a number of approaches to doing this; when we first started real-browser tests, we opted for selenium. Back then, it had not yet been integrated with webdriver (more on webdriver below).

There turned out to be multiple aspects to selenium integration: setting up the web server under test, starting a browser to run selenium and pointing it at the server, but also designing a wrapper around the selenium testing API to bring it in line with unittest’s way of defining specialised assertions.

We came up with the gocept.selenium package which includes both a selenese module defining such an API wrapper and a bunch of modules for integration with those web-server frameworks that we happen to use in our work, among them generic WSGI and a number of Zope-related servers. The integration mechanism is implemented in terms of test layers, so all of this requires the Zope test runner to be used. We released a 1.0 version of gocept.selenium in November 2012, marking the selenese API as stable.

The description of the package given so far already indicates two aspects that have yet to be addressed: firstly, the selenium project is based on webdriver nowadays, with the old selenium implementation being kept for backwards compatibility at the moment. Secondly, collecting all those server integration modules in the same package that implements the actual selenium integration makes for rather complex (albeit optional) package dependencies and poses a maintainability problem.

We dealt with the latter in December 2012, extracting all those integration modules from gocept.selenium into a new package, gocept.httpserverlayer. From the package’s documentation:
»This package provides an HTTP server for testing your application with normal HTTP clients (e.g. a real browser). This is done using test layers, which are a feature of zope.testrunner. gocept.httpserverlayer uses plone.testing for the test layer implementation, and exposes the following resources (accessible in your test case as self.layer[RESOURCE_NAME]):

  • http_host: The hostname of the HTTP server (Default: localhost)
  • http_port: The port of the HTTP server (Default: 0, which means chosen automatically by the operating system)
  • http_address: hostname:port, convenient to use in URLs (e.g. 'http://user:password@%s/path' % self.layer['http_address'])«

In addition to generic WSGI and static-file serving, the server frameworks supported at this point (i.e. gocept.httpserverlayer 1.0.1) include Zope3/ZTK (both using zope.app.testing and zope.app.wsgi with the latter supporting Grok) as well as Zope2 and Plone (using ZopeTestCase, WSGI or plone.testing.z2).
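
To illustrate how these resources are meant to be used, here is a minimal sketch of a test against a trivial WSGI application. Only the self.layer[...] resources are taken from the documentation quoted above; the exact way the WSGI layer is constructed is an assumption, so check the package documentation for details.

import unittest
import urllib2  # Python 2, matching the era of this post

import gocept.httpserverlayer.wsgi


def demo_app(environ, start_response):
    # Trivial WSGI application under test.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello']

# Assumption: the WSGI layer takes the application to serve.
HTTP_LAYER = gocept.httpserverlayer.wsgi.Layer(demo_app)


class DemoTest(unittest.TestCase):

    layer = HTTP_LAYER

    def test_server_responds(self):
        # 'http_address' is hostname:port, as documented above.
        url = 'http://%s/' % self.layer['http_address']
        self.assertEqual('Hello', urllib2.urlopen(url).read())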

After the creation of gocept.httpserverlayer, we released the 1.1 series of gocept.selenium which no longer brings its own integration code. For the sake of backwards compatibility, though, it still implements separate TestCase classes for each of the integration flavours.

This leaves webdriver support to be dealt with. Originally, we had hoped to simply sneak it in, having to change very little client code, if any at all. Our plan was to implement the old API (both for test setup and selenese) in terms of webdriver, which should have allowed us to benefit from webdriver immediately, as some issues with the old selenium were causing trouble in our daily work (including the behaviour of type and typeKeys, as well as drag-and-drop). We started a branch of gocept.selenium where we switched from integrating legacy selenium to talking to webdriver and changed the selenese implementation to use webdriver commands.

However, it turned out that a number of details couldn’t be completely hidden, and webdriver brought its own share of problems (including, sadly, new issues with drag-and-drop). We tried out our branch in a real project to the point that all tests would pass again, and ended up with a long list of upgrade notes describing incompatibilities, either temporary or not, both causing semantic differences of behaviour and necessitating changes to the test code. We identified a number of pieces of the old selenese API that we wouldn’t bother implementing, and we still had a few large projects that would help discover more things to watch out for.

It became clear that sneaking webdriver into an existing selenium test suite wasn’t the way to get to use it soon. So, instead of continuing to develop the branch until it could replace the selenium-based implementation in gocept.selenium 2, we have merged it now, in such a way that two different selenium integrations are available at the same time, usable simultaneously in the same project. That way, new browser tests can be added using the webdriver integration layer, and existing tests can be migrated to webdriver test case by test case, as needed.

We have made alpha releases of gocept.selenium 2 so people may experiment with the webdriver integration. While the current implementation of the test layer (gocept.selenium.webdriver.Layer) contains some code to deal specifically with Firefox, we have successfully run it against Chrome as well. The integration layer exposes a raw webdriver object as the seleniumrc resource; in addition, there is the WebdriverSeleneseLayer, which offers a resource named selenium, the old selenese API implemented in terms of webdriver, and can be used together with the base layer.
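
As a rough sketch of what this dual setup might look like in a test module: the layer classes and resource names below are the ones mentioned above, but the way the layers are instantiated and composed is an assumption; consult the alpha release documentation for the exact API.

import unittest

import gocept.selenium.webdriver

# Assumption: the layers compose like ordinary plone.testing layers;
# constructor arguments depend on your server integration.
WD_LAYER = gocept.selenium.webdriver.Layer(name='WebdriverLayer')
SELENESE_LAYER = gocept.selenium.webdriver.WebdriverSeleneseLayer(
    name='SeleneseLayer', bases=(WD_LAYER,))


class MigrationTest(unittest.TestCase):

    layer = SELENESE_LAYER

    def test_old_style_selenese_keeps_working(self):
        # 'selenium' is the old selenese API implemented on webdriver.
        sel = self.layer['selenium']
        sel.open('/')
        sel.assertTitle('Demo application')

    def test_new_style_raw_webdriver(self):
        # 'seleniumrc' exposes the raw webdriver object.
        driver = self.layer['seleniumrc']
        driver.get('http://localhost:8080/')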

We are currently working towards a stable gocept.selenium 2 release that includes webdriver support at the level described. At the same time, we are thinking about how our ideal testing API might be structured to integrate with the unittest API concepts while making better use of the object-oriented raw webdriver API than the current selenese does. If you are interested in using webdriver in conjunction with the Python unittest framework, you are very welcome to try out the current state of gocept.selenium 2 and get back to us with ideas and suggestions.

Happy new year – cleaning up the server room!

Welcome to 2013!

Alex and I are using this time of the year when most of our colleagues are still on holidays to perform maintenance on our office infrastructure.

To prepare for all the goodness we have planned for the Flying Circus in 2013, we decided to upgrade our internet connectivity (switching from painful consumer-grade DSL/SDSL connections to fibre, yeah!) and also to clean up our act in our small private server room. For that, we decided to buy a better UPS and PDUs, and a new rack to get some space between the servers and clean up the wiring.

Yesterday we prepared the parts we can do ourselves before the electricians come in on Friday to start installing that nice Eaton 9355 8kVA UPS.

So, while the office was almost empty, the two of us managed to use our experience from the data center setups we build to turn a single rack (pictures of which we’re too ashamed to post) into this:

[Image: the new rack setup]

Although the office was almost abandoned, those servers do serve a real purpose, and we had to be careful to avoid massive interruptions, as they handle:

  • our phone system and office door bell
  • secondary DNS for our infrastructure and customer domains
  • chat and support systems
  • monitoring with business-critical alerting

Here’s how we did it:

  • Power down all the components that are not production-related and move them from the existing rack (the right one in the front picture) to the new one. For that, we already had our rack logically split between “infrastructure development” and “office business” machines.
  • Move the development components (1 switch, 7 servers, 1 UPS) to the new rack. Wire everything again (nicely!) and power it up. Use the power-up cycle to verify that IPMI remote control works. Also note which machines don’t boot cleanly (which we only found on machines whose kernels are under development anyway, yay).
  • Notice that the old UPS isn’t actually able to run all those servers’ load, and keep one server turned off until we get the new UPS installed.
  • Now that we had space in the existing rack, re-distribute the servers there as well to make the arrangement more logical (routers, switches, and other telco-related stuff at the top). Turn off individual servers one by one and keep everyone in the office informed about short outages.
  • Install the new PDUs in the space we gained after removing superfluous cables. Get lots of scratches while taking stuff out and in.
  • Update our inventory databases, take pictures, write blog post. 🙂

As the existing setup was quite old and had grown over time, we were pretty happy to be able to apply the lessons we have learned in the years in between and get everything cleaned up in less than 10 hours. We noticed the following things that we did differently this time (and have been doing in the data center for a while already):

  • Create bundles of network cables for each server (we use 4) and put them into the switch in a systematic pattern; label each bundle once with the server name at each end. Colors indicate VLANs.
  • Use real PDUs both for IEC and Schuko equipment. Avoid consumer-grade power-distribution.
  • Leave a rack unit between components to allow working without hurting yourself, to gain the flexibility to pass wires (KVM) to the front, and to avoid temperature peaks within the rack.
  • Having over-capacity makes it easier to keep things clean, which in turn makes you more flexible and gives you the peace of mind to focus on the important stuff.

As the pictures indicate we aren’t completely done installing all the shiny new things, so here’s what’s left for the next days and weeks:

  • Wait for the electricians and Eaton to install and activate our new UPS.
  • Wire up the new PDUs with the new UPS and clean up the power wiring for each server.
  • Wait for the telco to finish digging and putting fibre into the street and get their equipment installed so we can enjoy a “real” internet connection.

All in all, we had a very productive and happy first working day in 2013. If this pace keeps up then we should encounter the singularity sometime in April.

yafowil in a Pyramid project

In a new Pyramid project we used deform to render forms. We did not really like it. (The reasons might be detailed in another post.)

To see whether other form libraries do better, I gave yafowil a try at our gocept Developer Punsch 3. yafowil comes with written documentation, but to get a form into our Pyramid application I had to find out some things which are not so clearly documented:

  • Let the project depend on yafowil.webob via setup.py as it contains the necessary WebOb integration.
  • Import the loader from yafowil as shown below to allow yafowil to register all its known components (even all the packages in the yafowil.widget namespace). Otherwise I got strange errors. (The loader symbol is not needed at all in the rest of the form code.)
from yafowil import loader
  • To get a value displayed in the rendered form, use the value keyword parameter in the factory like this:
form['name'] = factory('field:label:text',
                       props=dict(label=u'name', required=True),
                       value=value_getter)

value can be a plain value or a function which gets the widget and the runtime data of the widget as parameters.
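
For illustration, a value callable following that signature might look like this (the data lookup is hypothetical; plug in your own model access):

def load_current_name():
    # Hypothetical data source; replace with your model lookup.
    return 'Alice'

def value_getter(widget, data):
    # yafowil calls this at render time, passing the widget and its
    # runtime data, and displays the returned value in the form.
    return load_current_name()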

  • Some widgets need JavaScript libraries. The integration with Pyramid or Fanstatic is not part of the framework; yafowil.base.Factory.resources_for could be a starting point. (I did not do this yet, so it might be wrong.)

Conclusion: yafowil looks like an interesting framework, and after finding a starting point it should be usable in Pyramid, too. Maybe this post can help ease the start a bit.

Python 2 and 3 compatible builds with zc.buildout

Creating a single-source build environment with zc.buildout that works for both Python 2 and 3 is a bit of a hassle. This blog post shows how to do it for a minimal demo project.

During the sprints at PyCon DE 2012, we tried to make the upcoming 1.0 release of the nagiosplugin library compatible with both Python 2.7 and 3.2. Going for a single code base (without preprocessing steps like 3to2) was not too hard. The only thing left was a single-source zc.buildout setup suited for both Python 2.7 and 3.2. It worked out in the end, but currently it needs two buildout configurations, which is a little kludgy. I hope that things will improve in the near future so that a single-source build environment with zc.buildout becomes possible.

In the following, I will demonstrate the steps with a simple demo project called MultiVersion. It contains nothing more than a single class that is supposed to run under both Python 2 and 3. There is also a unit test to verify that the code works. We use zope.testrunner to run the unit tests. The code’s functionality is irrelevant for the examples, so I left it out. You can download the full source if you are interested.
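
To give an idea of the scale, a single-source class and its unit test might look like the following sketch. This is not the actual MultiVersion code (download the source for that), just an illustration; note the absence of u'' string literals, which Python 3.2 does not accept.

# multiversion.py -- minimal sketch of a class that runs unchanged
# on Python 2.7 and 3.2.
class Greeter(object):

    def greet(self, name):
        return 'Hello, {0}!'.format(name)


# tests.py -- run by zope.testrunner via bin/test
import unittest

from multiversion import Greeter


class GreeterTest(unittest.TestCase):

    def test_greet(self):
        self.assertEqual('Hello, World!', Greeter().greet('World'))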

1. Use a recent enough virtualenv

Older versions of virtualenv are generally not suited, since they ship with obsolete releases of distribute and pip. Check whether the virtualenv included in your GNU/Linux distribution is too old: anything below 1.8 reduces the chance of success, so it is better to install a current virtualenv locally. Likewise, our bootstrap.py must be recent enough to support both Python 2 and 3. The standard bootstrap.py from python-distribute.org currently does not work with Python 3.

Now we are ready to create a virtualenv in a fresh source checkout.

Python 3.2:

$ virtualenv -p python3.2 .
Running virtualenv with interpreter /usr/bin/python3.2
New python executable in ./bin/python3.2
Installing distribute.....done.
Installing pip.....done.

Python 2.7:

$ virtualenv -p python2.7 .
Running virtualenv with interpreter /usr/bin/python2.7
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools.....done.
Installing pip.....done.

2. Running buildout with Python 3.2

I will discuss the steps for Python 3.2 first, since main development will concentrate on newer Python versions. After that, I will describe the necessary steps to make the build environment backward compatible.

To run zc.buildout, we need a buildout.cfg file. I prefer to pin package versions in all projects to ensure reliable builds. As of writing this blog post, there is just an alpha release of zc.buildout that supports Python 3.2. Unfortunately, this version of zc.buildout supports Python 3.2 only, so don’t try this with Python 3.3.

My basic buildout.cfg looks like this:

[buildout]
allow-picked-versions = false
develop = .
newest = false
package = multiversion
parts = multiversion test
versions = versions

[versions]
distribute = 0.6.28
z3c.recipe.scripts = 1.0.1
zc.buildout = 2.0.0a2
zc.recipe.egg = 2.0.0a2
zc.recipe.testrunner = 1.4.0
zope.exceptions = 4.0.1
zope.interface = 4.0.1
zope.testrunner = 4.0.4

[multiversion]
recipe = zc.recipe.egg
eggs = ${buildout:package}
interpreter = py

[test]
recipe = zc.recipe.testrunner
eggs = ${buildout:package}
defaults = ['--auto-color']

In my experience, it is best to pin distribute to exactly the same version that is included in virtualenv’s support files. While differing versions are possible, they may trigger hard-to-find bugs, since it is not always clear which version is used in which step.

I use the Python interpreter from my virtualenv’s bin directory when creating the buildout executable. This saves me from using the activate/deactivate scripts, which are slightly cumbersome in my opinion.

$ bin/python3.2 bootstrap.py
Creating directory 'blog-python-2-3/parts'.
Creating directory 'blog-python-2-3/develop-eggs'.
Generated script 'blog-python-2-3/bin/buildout'.

$ bin/buildout
Develop: 'blog-python-2-3/.'
Installing multiversion.
Generated interpreter 'blog-python-2-3/bin/py'.
Installing test.
Generated script 'blog-python-2-3/bin/test'.

Now we have a working build for Python 3.2:

$ bin/test
Running zope.testrunner.layer.UnitTests tests:
  Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
  Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.

3. Running buildout with Python 2.7

Unfortunately, the current zc.buildout alpha release does not work with anything except Python 3.2. Running bootstrap.py fails:

$ bin/python2.7 bootstrap.py
Getting distribution for 'zc.buildout==2.0.0a2'.
While:
  Bootstrapping.
  Getting distribution for 'zc.buildout==2.0.0a2'.
Error: Couldn't find a distribution for 'zc.buildout==2.0.0a2'.

There is no single zc.buildout distribution that fits both Python 2.7 and 3.2. To get around this, I need to create a special-case buildout.cfg that changes version pinnings for incompatible packages. Besides zc.buildout, zc.recipe.egg needs different versions for Python 2.7 and 3.2 as well.

I create buildout-2.x.cfg (slightly grumbling):

[buildout]
extends = buildout.cfg

[versions]
zc.buildout = 1.6.3
zc.recipe.egg = 1.3.2

This one does the job when used with both bootstrap and buildout:

$ bin/python2.7 bootstrap.py -c buildout-2.x.cfg
Generated script 'blog-python-2-3/bin/buildout'.

$ bin/buildout -c buildout-2.x.cfg
Develop: 'blog-python-2-3/.'
Installing multiversion.
Generated interpreter 'blog-python-2-3/bin/py'.
Installing test.
Generated script 'blog-python-2-3/bin/test'.

We now have a build environment that builds single-source code for both Python 2.7 and 3.2 using zc.buildout. Of course, this technique could be extended to support even more versions. But I hope that the incompatible packages will be updated in the near future so that the need for special-case buildout.cfg files goes away. What seems to be missing most: a release of zc.buildout that supports all major Python versions.

TL;DR

  • Use a current virtualenv version.
  • Use a compatible bootstrap.py.
  • Pin your package versions.
  • Versions for some packages (including zc.buildout) must be special-cased.

Acknowledgements

I would like to thank Andrei Chirila and Michael Howitz for a great sprint session.

Introducing the “Flying Circus”

We have been busy over the last months improving the presentation of our hosting and operations services – and if you attended the Plone Conference in Arnhem, you may have noticed some bits and pieces already: T-shirts, nice graphics, a new logo, etc.

When pondering how to name our product, we quickly decided that just using the old “gocept.net” domain wasn’t good enough. As we are also ambivalent about the whole “cloud” hype, we were looking for something else: something specific, something with technology, something where people who know their trade do awesome stuff, something not for the fearful but for people with visions and grand ideas.

What we found was this:

[Image: the Flying Circus]

We call it the “Flying Circus” – for fearless men doing exactly what is needed to boost the performance, security, and reliability of your web application!

All this is just getting started, and we will show a lot more at PyCon DE next week. Or, if you cannot make it there, register for more information on flyingcircus.io!