Welcome to 2013!
Alex and I are using this time of the year when most of our colleagues are still on holidays to perform maintenance on our office infrastructure.
To prepare for all the goodness we have planned for the Flying Circus in 2013, we decided to upgrade our internet connectivity (switching from painful consumer-grade DSL/SDSL connections to fibre, yeah!) and also clean up our act in our small private server room. For that we decided to buy a better UPS and PDUs, plus a new rack to get some space between the servers and clean up the wiring.
Yesterday we took care of the parts we can do ourselves before the electricians come in on Friday to start installing that nice Eaton 9355 8kVA UPS.
So, while the office was almost empty two of us managed to use our experience with the data center setups we do to turn a single rack (pictures of which we’re too ashamed to post) into this:
Although the office was almost abandoned, those servers do serve a real purpose and we had to be careful to avoid massive interruptions, as they handle:
- our phone system and office door bell
- secondary DNS for our infrastructure and customer domains
- chat and support systems
- monitoring with business-critical alerting
Here’s how we did it:
- Power down all the components that are not production-related and move them from the existing rack (right one on the front picture) to the new one. For that we already had our rack logically split between “infrastructure development” and “office business” machines.
- Move the development components (1 switch, 7 servers, 1 UPS) to the new rack. Wire everything up again (nicely!) and power it on. Use the power-up cycle to verify that IPMI remote control works. Also note which machines don’t boot cleanly (fortunately this only happened on machines whose kernels are under development anyway, yay).
- Notice that the old UPS can’t actually carry the load of all those servers, and keep one turned off until we get the new UPS installed.
- Now that there’s space in the existing rack, redistribute the servers there as well to make the arrangement more logical (routers, switches, and other telco-related gear at the top). Turn off servers one by one and keep everyone in the office informed about short outages.
- Install new PDUs in the space we got after removing superfluous cables. Get lots of scratches while taking stuff out and in.
- Update our inventory databases, take pictures, write blog post.
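The IPMI verification step from the walkthrough above can be sketched as a small shell loop. This is a hedged sketch, not our actual tooling: the hostnames (dev01–dev03) and the `admin` user are placeholders, and a dry-run mode just prints the `ipmitool` command that would be issued instead of contacting a BMC.

```shell
#!/bin/sh
# Sketch: after powering a rack back up, check that each machine's BMC
# answers to IPMI remote control. Hostnames and user are placeholders.
DRY_RUN=${DRY_RUN:-1}

check_ipmi() {
    host=$1
    if [ "$DRY_RUN" = "1" ]; then
        # Print the command we would run instead of talking to the BMC.
        echo "ipmitool -I lanplus -H ${host} -U admin chassis power status"
    else
        ipmitool -I lanplus -H "${host}" -U admin chassis power status
    fi
}

for h in dev01 dev02 dev03; do
    check_ipmi "$h"
done
```

With `DRY_RUN=0` and real credentials, any host that fails the `chassis power status` query is one to investigate before relying on remote power control.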
As the existing setup was quite old and had grown over time, we were pretty happy to be able to apply the lessons we learned in the years in between and get everything cleaned up in less than 10 hours. We noticed the following things that we did differently this time (and have been doing in the data center for a while already):
- Create bundles of network cables per server (we use 4), plug them into the switch in a systematic pattern, and label each bundle once with the server name at both ends. Colors indicate VLANs.
- Use real PDUs both for IEC and Schuko equipment. Avoid consumer-grade power-distribution.
- Leave a rack unit between components: it lets you work without hurting yourself, gives you the flexibility to pass wires (e.g. KVM) to the front, and avoids temperature peaks within the rack.
- Having over-capacity makes it easier to keep things clean, which in turn makes you more flexible and frees your mind to focus on the important stuff.
As the pictures indicate, we aren’t completely done installing all the shiny new things, so here’s what’s left for the coming days and weeks:
- Wait for the electricians and Eaton to install and activate our new UPS.
- Wire up the new PDUs with the new UPS and clean up the power wiring for each server.
- Wait for the telco to finish digging and putting fibre into the street and get their equipment installed so we can enjoy a “real” internet connection.
All in all, we had a very productive and happy first working day in 2013. If this pace keeps up then we should encounter the singularity sometime in April.