Today’s climate has significantly changed the way data centre services are delivered. The rise in mergers and acquisitions, coupled with social networking, mobile, cloud and big data, has meant that many providers have had to adapt for the digital age.
Specialist providers have introduced advanced design modifications and innovations to improve efficiency. Many of these are governed by standards that ensure services are designed, built and operated to be safe, reliable and of good quality.
But it’s important to point out that adhering to standards is a voluntary decision and they should be used as a guide, not followed to the letter, especially at a time when IT teams are increasingly being asked to deliver new infrastructure despite decreasing budgets.
Standards help customers compare the capabilities of one data centre to another. They can determine key factors that affect facility uptime such as levels of redundancy, fault tolerance, operational procedures and efficiency. Most importantly, standards should be used as strategic tools to reduce costs and minimise waste and errors to increase productivity.
Not all standards have been equally affected by recent rapid advances in technology. Information management and auditing standards remain relevant even if the data centre design changes. However, within data centre design, build and operational standards, there is a need to keep up with the latest in web-scale, cloud and colocation innovations.
The Open Compute Project, for example, was started by a small team of Facebook engineers. The team designed a data centre from the ground up using custom servers, power supplies, server racks and battery backup systems. As a result, the data centre uses 38% less energy and costs 24% less to do the same work. As an open project, everyone has access to the specifications, increasing the speed of innovation in the rest of the market.
The introduction of IT containers is another example. On the same workload, Docker containers show CPU peaks of around 25%, versus virtual machines peaking at 70%. With this knowledge, the overcapacity a data centre must hold in reserve to handle inrush and peak demand can be reduced.
An alternative is to use the standards as a base and innovate: run a Monte Carlo simulation to observe behaviour under random failure of components and systems, followed by level 5 commissioning as described in the ASHRAE standards. This approach can be executed in-house on an ongoing basis, whereas certification typically happens once a year and at significant cost.
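The Monte Carlo approach can be sketched in a few lines: randomly fail components according to assumed probabilities, check whether the site stays up under a simple redundancy model, and repeat many times to estimate availability. The component names, failure probabilities and redundancy topology below are illustrative assumptions, not values from any standard.

```python
import random

# Illustrative annual failure probabilities per component (assumed values).
COMPONENTS = {
    "utility_feed": 0.02,
    "generator":    0.05,
    "ups":          0.03,
    "crac_unit":    0.04,
}

def trial(rng):
    """One simulated year. In this toy model the site stays up if power
    survives (utility feed, or generator AND UPS) and cooling survives."""
    failed = {name for name, p in COMPONENTS.items() if rng.random() < p}
    power_ok = "utility_feed" not in failed or (
        "generator" not in failed and "ups" not in failed
    )
    cooling_ok = "crac_unit" not in failed
    return power_ok and cooling_ok

def estimate_availability(n_trials=100_000, seed=42):
    """Fraction of simulated years in which the site stayed up."""
    rng = random.Random(seed)
    up = sum(trial(rng) for _ in range(n_trials))
    return up / n_trials

if __name__ == "__main__":
    print(f"Estimated availability: {estimate_availability():.4f}")
```

Because the simulation is cheap to rerun, a team can vary failure rates or the redundancy topology and immediately see the effect on availability, which is exactly the kind of ongoing in-house analysis the once-a-year certification cycle cannot offer.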
Often, standards can be taken too literally. Data centres designed to follow them to the letter can sacrifice operability and efficiency, resulting in higher building and operating costs.
It’s important that standards do not compromise the efficiencies and innovations that improve performance. While standards and their alternatives should be considered, teams need to be open-minded and challenge the status quo, adopting the practices that best suit their needs.