05 January 2015

John Spavin

Engineering Insight: Storage Solutions

  • One of the four recently completed data halls with the full mechanical, electrical and communications equipment installed. Photo: Gary Walker, Spark.
  • The initial stages of assembling the building, showing the kitset nature of the steel framing structural design. Photo: Gary Walker, Spark.
  • Graphical printout from the 3D building design model developed using Revit software, with the architectural layer of the model switched on. Image: AECOM.
  • Cutaway views including sections of all the layers: architectural, structural, mechanical and electrical. Image: AECOM.

When Spark Digital, the former Telecom subsidiary Gen-i, was looking to build a new data centre in Auckland, it had a specific budget and a plan in mind.

It was going to build two: twin sisters, one in Auckland and its sibling in Wellington – so the project was named “Gemini”.

So far, Spark has completed and commissioned the Auckland twin – what might just be the first in a new style of data centre, brought to life in part by a New Zealand electrical engineer now based in Australia and working for AECOM, a Fortune 500 technology and infrastructure provider.

Murray Dickinson, AECOM’s Technical Director of High Reliability Facilities, says data centres were originally standalone mainframe computers housed in an office building. While the progression from mainframe to mini to PC to server has shrunk the size of each computer, the quantity has exploded. Data centres today are vastly different to their predecessors and now house massive numbers of individual servers and storage devices. Yet most designers still treat these spaces as a special type of office, complete with suspended ceilings, attractive walls and a pleasant colour scheme.

This is because managing directors often like to take visitors through the data centre to show off the amazing computer systems that cost so much shareholder capital. To give the best impression, good looks have at times dominated data centre design.

In fact, Mr Dickinson says a data centre couldn’t be further from an office block in its operation and technical needs. For a start, multiple computers generate heat, lots of it. Typically, he says, an office building generates 60 watts of heat per square metre; modern data centres generate over 6,000 watts per square metre and need something more than a domestic air conditioning system to keep their cool.

On this Gemini project though, Mr Dickinson pitched for and won the job as an industrial endeavour, for that is what he thinks data centres are. Prettiness gave way to exposed pipes, vents and structural steel, with not a pretty ceiling tile in sight. The end result looks more like a dairy factory than an office.

To accommodate the intense energy flowing around a data centre, Mr Dickinson says designers have typically installed bigger and bigger generators and coolers, while still treating the centre as if it were an office and the environment as if office staff would be working there. Under Mr Dickinson’s direction, AECOM adopted a refreshing ground-up approach to data centre design that provides several benefits. They started with a new style of building designed especially to house racks of computers. From the technical side they chose the most reliable power and cooling systems available in the market, and to reduce the price they developed a creative way to get power into the servers using much smaller cables.

Project Gemini called for a building to be constructed in a paddock – a former equestrian centre, complete with a swamp in the middle. The change of land use is apt: horsepower, which once drove economies, gave way to the new engine – binary data.

Part of the challenge for Spark is trying to guess where the market might head in five years. AECOM notes that iPads, for example, didn’t exist five years ago, yet they’ve completely changed the way we use the Internet. Who can imagine what might arise in the next five years, and how can today’s designs accommodate it?

“If Spark builds a data centre today using yesterday’s design, how can it be right for tomorrow?” Mr Dickinson asks. This is where the innovation his team has developed gives Spark confidence that it has the future covered.

He won’t give too much away about how the AECOM-designed centres reduce construction costs. However, he does say the process is clever but not beyond the reach of competitors – which is why AECOM is playing it close to its chest. Suffice to say, the micro-modular approach is so flexible that Mr Dickinson can alter the space and power to host the next revolution.

The technical specifications and performance of the data centre are a break from tradition too. Data centres generally vary in complexity from “Tier 1”, an office with a computer stowed in it, up to “Tier 4”, which has energy and hardware redundancy built in. AECOM’s plan was to start again and reconsider how best to safeguard the operating integrity of all those spinning disks and data.

Their first point of difference from competitors was the centre’s reliability in staying up and online. Many companies and their customers continue to use uptime as the measure of their data storage provider’s reliability. The big question for companies like Spark is, “How often will the system crash and if it does, how long will my company be idle because it can’t gain access to its records?”

A few years ago, the State Services Commission advised government departments arranging data storage and website hosting to be cautious. It warned that although 99.5 per cent uptime over a year might sound impressive (and that was commonly being offered at the time), the missing 0.5 per cent was the equivalent of about 44 hours of downtime per year, or roughly 3.7 hours per month when the system would deny access to the department and the public. What’s more, the Commission warned, those 44 hours might come in one continuous block.
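Those numbers are easy to check. The short Python sketch below is purely illustrative – it is not code from the Gemini project, and the function name is made up – but it shows how an uptime percentage converts into downtime over a year:

```python
# Convert an uptime percentage into downtime over a year.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def annual_downtime_hours(uptime_percent: float) -> float:
    """Hours per year a service is down at the given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

downtime = annual_downtime_hours(99.5)
print(f"99.5% uptime -> {downtime:.1f} hours of downtime per year")  # about 43.8
print(f"             -> {downtime / 12:.1f} hours per month")        # about 3.7
```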

Rather than concentrating on uptime alone, therefore, Mr Dickinson says a better measure of reliability is availability.

Data engineers measure availability in multiples of nines: 99.5 per cent might be two nines (and a bit); 99.99 per cent would be four nines, and so on. Mr Dickinson says most modern data centres range from three nines (99.9 per cent) to five nines (99.999 per cent) availability. Even at five nines, that still allows about five minutes of downtime each year – which may be unimportant for personal family data but potentially crippling and expensive for banks and hospitals. A five-minute outage can be a very expensive nightmare which might cost up to 10 times the original price of building the data centre. Availability is simply crucial to the centre’s success.

Part of Mr Dickinson’s success in tailoring the Gemini data storage centre to be reliable (available) was to aim for and achieve an incredible 13 nines. That’s 99.99999999999 per cent availability, down for only 0.003 milliseconds a year. Put another way, this means the centre is off for less than one second in every 300,000 years.
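To put those nines in perspective, here is another purely illustrative Python sketch (again, the names are hypothetical, not from the project) that converts a count of nines into expected annual downtime and reproduces the figures above:

```python
# Convert a count of availability "nines" into expected annual downtime.

SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds, ignoring leap years

def annual_downtime_seconds(nines: int) -> float:
    """Seconds of downtime per year at n-nines availability."""
    return SECONDS_PER_YEAR * 10.0 ** -nines

five = annual_downtime_seconds(5)
print(f"5 nines  -> {five / 60:.1f} minutes down per year")       # about 5.3
thirteen = annual_downtime_seconds(13)
print(f"13 nines -> {thirteen * 1e6:.2f} microseconds per year")  # about 3.15
print(f"13 nines -> one second every {1 / thirteen:,.0f} years")  # about 317,000
```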

Mr Dickinson’s team had other innovations too, such as adapting a long-established industrial power supply to run the computers, which contributed towards the computers’ extreme reliability. They also reduced the area of land required by 25 per cent yet pushed the centre’s capacity from 350 to 400 server racks in the smaller building.

Finally, the team was able to offer a competitive price. Besides insisting that this project was a job for engineers and not technicians, they promised to build and commission the Gemini data centre for significantly less than competitors’ quotes. That’s the flat-pack principle kicking in – a game changer when traditional centres can run to $100 million or more to build.

Mr Dickinson says Spark Digital took quite a leap of faith in accepting AECOM’s proposal, and now that the Auckland data centre has been designed, built and commissioned, he is keen to start the next one.