Software Quality Days 2013

SQD took place in Vienna from January 15 to 17, 2013. My first impressions were: Scrum is everywhere, and open source does not play a major role.

Most people I met came from rather large organizations. They mostly face different problems than we do: running one large project with multiple teams. We, in contrast, have one development team running multiple projects.

However, the topics at the conference were not too far off: we at gocept are building high-quality software, too.

  • Andrea Provaglio (@andreaprovaglio) talked about Overcoming Self-organization Blocks in Agile Teams: why do we need self-organization, how can it be enabled and what can prevent it.
  • In contrast to Andrea Provaglio, who says that software is intangible, Tom Gilb (@imtomgilb) holds a completely different opinion: since you can quantify anything (including software), it cannot be intangible. In his talk Quality Quantification for Quality Engineering he stated that all quality requirements need to be quantified (he has actually been saying this for years). After all, if you want to verify whether a certain quality level has been reached (or not), you must quantify it first.
  • Sander Hoogendoorn (@aahoogendoorn) talked about agile anti-patterns, like “Dogmagile”, “Crusader Agile”, “Scrumdementalism” and others. His views pretty much match mine: don’t be dogmatic, do what works.
  • Professional speaker Hermann Scherer (@hermannscherer) held his talk Jenseits vom Mittelmaß (“beyond mediocrity”). It is roughly about being competitive in today’s environment. Very inspiring.
  • On the bad side there was Rainer Stropek’s talk about Monitoring and Root Cause Analysis in Dynamic Cloud Infrastructures. While there was some truth in it, it was extremely high level and basically a road show for Microsoft Azure: #fail.

The conference itself was well organized. The call for papers for SQD2014 is open until May 31, 2013. Maybe next time we can bring in a breeze of small business and open source.

News from the toolbox: gocept.selenium and our plans for its future

For a couple of years, we at gocept have been developing a Python library, gocept.selenium, whose goal is to integrate testing web sites in real browsers with the Python unittest framework. A number of approaches to this exist; when we first started real-browser tests, we opted for selenium. Back then, it had not yet been integrated with webdriver (more on webdriver below).

There turned out to be multiple aspects to selenium integration: setting up the web server under test, starting a browser to run selenium and pointing it at the server, but also designing a wrapper around the selenium testing API to bring it in line with unittest’s way of defining specialised assertions.

We came up with the gocept.selenium package which includes both a selenese module defining such an API wrapper and a bunch of modules for integration with those web-server frameworks that we happen to use in our work, among them generic WSGI and a number of Zope-related servers. The integration mechanism is implemented in terms of test layers, so all of this requires the Zope test runner to be used. We released a 1.0 version of gocept.selenium in November 2012, marking the selenese API as stable.
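To illustrate what such an API wrapper does conceptually, here is a toy stand-in (all class and method names here are invented for illustration and are not the actual gocept.selenium API): browser commands are wrapped so that they read like unittest-style assertions.

```python
import unittest


class FakeBrowser:
    """Stand-in for a selenium-controlled browser (hypothetical)."""

    def __init__(self, page_text):
        self.page_text = page_text

    def is_text_present(self, text):
        return text in self.page_text


class Selenese:
    """Wraps browser commands as unittest-style specialised assertions."""

    def __init__(self, browser):
        self.browser = browser

    def assertTextPresent(self, text):
        # Turn a browser query into a failing/passing assertion.
        if not self.browser.is_text_present(text):
            raise AssertionError('Text not present: %r' % text)


class ExampleTest(unittest.TestCase):
    """Sketch of how a test case would use the wrapper."""

    def setUp(self):
        # In the real package the browser would be provided by a test layer.
        self.selenium = Selenese(FakeBrowser('Hello, world!'))

    def test_greeting_is_shown(self):
        self.selenium.assertTextPresent('Hello')
```

The point of the wrapper is that a failed browser check surfaces as an ordinary AssertionError, so the standard unittest machinery reports it like any other failing assertion.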

The description of the package given so far already indicates two aspects that have yet to be addressed: Firstly, the selenium project is based on webdriver nowadays, with the old selenium implementation being kept around for backwards compatibility for the moment. Secondly, collecting all those server integration modules in the same package that implements the actual selenium integration makes for rather complex (albeit optional) package dependencies and poses a maintainability problem.

We have dealt with the latter in December 2012, extracting all those integration modules from gocept.selenium into a new package, gocept.httpserverlayer. From the package’s documentation:
»This package provides an HTTP server for testing your application with normal HTTP clients (e.g. a real browser). This is done using test layers, which are a feature of zope.testrunner. gocept.httpserverlayer uses plone.testing for the test layer implementation, and exposes the following resources (accessible in your test case as self.layer[RESOURCE_NAME]):

  • http_host: The hostname of the HTTP server (Default: localhost)
  • http_port: The port of the HTTP server (Default: 0, which means chosen automatically by the operating system)
  • http_address: hostname:port, convenient to use in URLs (e.g. ‘http://user:password@%s/path’ % self.layer[‘http_address’])

In addition to generic WSGI and static-file serving, the server frameworks supported at this point (i.e. gocept.httpserverlayer 1.0.1) include Zope3/ZTK (in two flavours, the latter of which supports Grok) as well as Zope2 and Plone (using ZopeTestCase, WSGI or plone.testing.z2).
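The port-0 trick mentioned above can be sketched with the standard library alone. This is a toy stand-in, not the real package: it starts an HTTP server on an OS-chosen port in a background thread and exposes resources named like the ones listed above.

```python
import http.server
import threading


class HTTPServerLayer:
    """Toy stand-in for the test-layer idea (not gocept.httpserverlayer):
    start an HTTP server and expose host/port/address resources."""

    def setUp(self):
        handler = http.server.SimpleHTTPRequestHandler
        # Port 0 means the operating system picks a free port for us.
        self.server = http.server.HTTPServer(('localhost', 0), handler)
        self.http_host, self.http_port = self.server.server_address
        self.http_address = '%s:%s' % (self.http_host, self.http_port)
        self.thread = threading.Thread(target=self.server.serve_forever)
        self.thread.daemon = True
        self.thread.start()

    def tearDown(self):
        self.server.shutdown()
        self.thread.join()
```

A test would then build URLs from the address resource, e.g. 'http://%s/' % layer.http_address, and point a real browser (or any HTTP client) at it.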

After the creation of gocept.httpserverlayer, we released the 1.1 series of gocept.selenium which no longer brings its own integration code. For the sake of backwards compatibility, though, it still implements separate TestCase classes for each of the integration flavours.

This leaves webdriver support to be dealt with. Originally, we had hoped to simply sneak it in, having to change very little client code, if any at all. Our plan was to implement the old API (both for test setup and selenese) in terms of webdriver, which would allow us to benefit from webdriver immediately, as some issues with the old selenium were causing trouble in our daily work (including the behaviour of type and typeKeys as well as drag-and-drop). We started a branch of gocept.selenium where we switched from integrating legacy selenium to talking to webdriver, and changed the selenese implementation to use webdriver commands.

However, it turned out that a number of details couldn’t be completely hidden, and webdriver brought its own share of problems (including, sadly, new issues with drag-and-drop). We tried out our branch in a real project to the point that all tests would pass again, and ended up with a long list of upgrade notes describing incompatibilities, either temporary or not, both causing semantic differences of behaviour and necessitating changes to the test code. We identified a number of pieces of the old selenese API that we wouldn’t bother implementing, and we still had a few large projects that would help discover more things to watch out for.

It became clear that sneaking webdriver into an existing selenium test suite wasn’t the way to get to use it soon. So, instead of continuing the branch until it could replace the selenium-based implementation in gocept.selenium 2, we have merged it now, in such a way that two different selenium integrations are available at the same time and usable simultaneously in the same project. That way, new browser tests can be added using the webdriver integration layer, and existing tests can be migrated to webdriver test case by test case, as needed.

We have made alpha releases of gocept.selenium 2 so people may experiment with the webdriver integration. Note that while the current implementation of the test layer (gocept.selenium.webdriver.Layer) contains some code to deal with Firefox, we have successfully run it against Chrome as well. The integration layer exposes a raw webdriver object as the seleniumrc resource; in addition, the WebdriverSeleneseLayer offers a resource named selenium, the old selenese API implemented in terms of webdriver, which can be used together with the base layer.
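The coexistence of the two resources can be pictured with a toy analogy (again, invented names, not the real API): a raw driver object on the one hand, and the old assertion style implemented on top of it on the other, both talking to the same browser session.

```python
class RawWebdriver:
    """Toy stand-in for the raw webdriver object (the role of the
    seleniumrc resource); not the real webdriver API."""

    def __init__(self, texts_by_id):
        self.texts_by_id = texts_by_id

    def find_element_text(self, element_id):
        return self.texts_by_id[element_id]


class SeleneseOnWebdriver:
    """Toy stand-in for the role of the selenium resource: the old
    selenese-style assertion, implemented in terms of raw driver calls."""

    def __init__(self, raw):
        self.raw = raw

    def assertText(self, element_id, expected):
        # Legacy-style assertion, delegating to the raw driver.
        actual = self.raw.find_element_text(element_id)
        if actual != expected:
            raise AssertionError('%r != %r' % (actual, expected))
```

Since both objects drive the same session, new tests can use the raw object directly while older tests keep using the legacy-style wrapper until they are migrated.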

We are currently working towards a stable gocept.selenium 2 release that includes webdriver support at the level described. At the same time, we are thinking about how our ideal testing API might be structured: one that integrates with the unittest API concepts but makes better use of the object-oriented raw webdriver API than the current selenese does. If you are interested in using webdriver in conjunction with the Python unittest framework, you are very welcome to try out the current state of gocept.selenium 2 and get back to us with ideas and suggestions.

Happy new year – cleaning up the server room!

Welcome to 2013!

Alex and I are using this time of the year when most of our colleagues are still on holidays to perform maintenance on our office infrastructure.

To prepare for all the goodness we have planned for the Flying Circus in 2013, we decided to upgrade our internet connectivity (switching from painful consumer-grade DSL/SDSL connections to fibre, yeah!) and to clean up our act in our small private server room. To that end we decided to buy a better UPS and PDUs and a new rack to get some space between the servers, and to clean up the wiring.

Yesterday we took care of the parts we can do ourselves, before the electricians come in on Friday to start installing that nice Eaton 9355 8kVA UPS.

So, while the office was almost empty, the two of us managed to use our experience from the data center setups we do to turn a single rack (pictures of which we’re too ashamed to post) into this:


Although the office was almost abandoned, those servers do serve a real purpose, and we had to be careful to avoid massive interruptions, as they handle:

  • our phone system and office door bell
  • secondary DNS for our infrastructure and customer domains
  • chat and support systems
  • monitoring with business-critical alerting

Here’s how we did it:

  • Power down all the components that are not production-related and move them from the existing rack (the right one in the front picture) to the new one. This was easy because we had already split the rack logically between “infrastructure development” and “office business” machines.
  • Move the development components (1 switch, 7 servers, 1 UPS) to the new rack. Wire everything up again (nicely!) and power it up. Use the power-up cycle to verify that IPMI remote control works. Also note which machines don’t boot cleanly (which we only saw on machines that run development kernels anyway, yay).
  • Notice that the old UPS isn’t actually able to carry all those servers’ load and keep one server turned off until the new UPS is installed.
  • Now that there is space in the existing rack, re-distribute the servers there as well to make the arrangement more logical (routers, switches, and other telco-related equipment at the top). Turn off servers one by one and keep everyone in the office informed about short outages.
  • Install new PDUs in the space we got after removing superfluous cables. Get lots of scratches while taking stuff out and in.
  • Update our inventory databases, take pictures, write blog post.🙂

As the existing setup was quite old and had grown over time, we were pretty happy to be able to apply the lessons we learned in the years in between and get everything cleaned up in less than 10 hours. Here are the things we did differently this time (and have been doing in the data center for a while already):

  • Create a bundle of network cables for each server (we use 4 per server) and plug them into the switch in a systematic pattern; label each bundle once with the server name at each end. Colors indicate VLANs.
  • Use real PDUs both for IEC and Schuko equipment. Avoid consumer-grade power-distribution.
  • Leave a rack unit between components: it lets you operate without hurting yourself, gives you the flexibility to pass wires (KVM) to the front, and avoids temperature peaks within the rack.
  • Having over-capacity makes it easier to keep things clean, which in turn makes you more flexible and frees your mind to focus on the important stuff.

As the pictures indicate we aren’t completely done installing all the shiny new things, so here’s what’s left for the next days and weeks:

  • Wait for the electricians and Eaton to install and activate our new UPS.
  • Wire up the new PDUs with the new UPS and clean up the power wiring for each server.
  • Wait for the telco to finish digging and putting fibre into the street and get their equipment installed so we can enjoy a “real” internet connection.

All in all, we had a very productive and happy first working day in 2013. If this pace keeps up then we should encounter the singularity sometime in April.