Household-name Internet services companies such as Yahoo and Facebook often steal the show with near-unity PUEs, and I am often asked whether those PUEs are attainable for everyone.
PUE (power usage effectiveness) is a metric used to measure a data center’s energy efficiency, but it’s not meant as a data center comparison tool. Still, we’re often caught up in barstool banter about how one data center’s PUE is better than another’s. While it’s difficult to ignore the jaw-droppingly low PUEs of the Yahoos and Facebooks of the world, I can explain why they are so low and examine whether they are achievable for the average data center.
These hyper-scale providers drive change in a way that is not common in the traditional enterprise arena. They aggressively eliminate and simplify, they innovate continually, they truly leverage their scale, and they industrialize and automate their operations. Comparing them to a typical enterprise operation is like asking me to play quarterback in the next Super Bowl (trust me, it’s farfetched).
While there are many interesting cases to talk about here, I would like to touch upon just two specific reasons why these companies are able to operate at such low PUEs: free cooling and efficient IT.
PUE is the ratio of total facility load to data processing load. The more we reduce the energy used by non-data-processing loads, the closer our PUE gets to unity. The biggest step in that pursuit is to reduce the cooling load by maximizing the use of free cooling. These operators recognize the value of expanding the environmental envelope, and they embrace it. They site their data centers in locations that are cool for most of the year, and they use that cool air directly, in place of energy spent on mechanical cooling. If you can turn off your chillers, pumps, and fans, imagine how much your energy use decreases.
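The arithmetic behind that ratio is straightforward. Here is a minimal sketch in Python, using purely illustrative load figures (not measurements from any real facility), showing how cutting the cooling load pulls PUE toward unity:

```python
# PUE = total facility energy / IT (data processing) energy.
# All kW figures below are hypothetical, for illustration only.

def pue(it_kw: float, cooling_kw: float, power_dist_kw: float) -> float:
    """Return power usage effectiveness for a given load breakdown."""
    total_kw = it_kw + cooling_kw + power_dist_kw
    return total_kw / it_kw

# A conventional facility: chillers, pumps, and fans running year-round.
conventional = pue(it_kw=1000, cooling_kw=500, power_dist_kw=100)

# The same facility on a free-cooling day: compressors off, fans only.
free_cooling = pue(it_kw=1000, cooling_kw=80, power_dist_kw=100)

print(f"conventional: {conventional:.2f}")   # 1.60
print(f"free cooling: {free_cooling:.2f}")   # 1.18
```

The IT load is the same in both cases; only the overhead changes, which is exactly why free cooling is the biggest lever on the ratio.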
A second factor that enables these cloud giants to be very efficient is that their IT footprint is very homogeneous and highly utilized. The data processing equipment is optimized, waste is tracked down and eliminated, and capacity is closely matched to application load. The result is a dogged minimization of unproductive processing cycles.
So how does this apply to average data centers?
The typical enterprise does not have the option of rebuilding its data center in a remote mountainous climate, or of relocating its operations staff to the hinterland. Retrofitting an urban data center for free cooling is often obstructed by building architecture or neighboring property constraints. Still, there is a benefit to using free cooling in almost every location on the planet, at least some of the time, even if you cannot take advantage of it to the same extent as the hyper-scale cloud providers. Engineering fundamentals suggest that you will not reduce your PUE much below 1.6 unless you take advantage of some form of free cooling.
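A quick way to see why partial free cooling still pays off is to blend two operating modes by the fraction of hours each is in use. The PUE values below are assumptions consistent with the 1.6 rule of thumb above (and this hour-weighting assumes a roughly constant IT load), so treat this as a sketch rather than a sizing tool:

```python
# Blended annual PUE, weighted by the fraction of the year the facility
# can run on outside air. Mode PUEs are illustrative assumptions.

def annual_pue(free_cooling_fraction: float,
               pue_free: float = 1.18,
               pue_mechanical: float = 1.60) -> float:
    """Hour-weighted blend of free-cooling and mechanical-cooling PUEs."""
    return (free_cooling_fraction * pue_free
            + (1.0 - free_cooling_fraction) * pue_mechanical)

# Even a mild climate offering free cooling 40% of the year helps:
print(f"{annual_pue(0.40):.2f}")   # 1.43
print(f"{annual_pue(0.85):.2f}")   # 1.24
```

The point is not the exact numbers but the shape: every additional free-cooling hour moves the annual figure, so an urban retrofit that captures only part of the year is still worthwhile.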
Second, ask yourself: how optimized is my data processing footprint? Chances are that your environment is challenged by running a multiplicity of application types, with varying availability requirements, on different types and vintages of hardware. Also, like it or not, chances are very good that you have a lot of wasted capacity in your data processing infrastructure, which you’re paying perpetually to power and cool. Statistics indicate that almost half of all enterprises have no scheduled auditing for comatose servers. Even when servers show nominal levels of utilization, it’s difficult to tell whether that utilization is useful and producing value for the business. Until this waste is pruned, your effort to minimize PUE carries unnecessary baggage.
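One subtlety worth making concrete: PUE counts every IT watt in its denominator, productive or not, so a comatose server looks like legitimate load. The sketch below uses invented figures (and assumes cooling and distribution overhead scale with IT load) to show that pruning waste can leave the ratio unchanged while cutting real energy draw:

```python
# Why comatose servers hide inside PUE (illustrative numbers only).

def pue(total_kw: float, it_kw: float) -> float:
    """PUE = total facility energy / IT energy, productive or not."""
    return total_kw / it_kw

it_kw = 1000.0        # total IT load; assume 30% of it is comatose
overhead_kw = 400.0   # cooling + power distribution
before = pue(it_kw + overhead_kw, it_kw)

# Decommission the comatose 30% (assume overhead scales with IT load):
it_kw2, overhead_kw2 = 700.0, 280.0
after = pue(it_kw2 + overhead_kw2, it_kw2)

print(f"before pruning: {before:.2f}")   # 1.40
print(f"after pruning:  {after:.2f}")    # 1.40
# Same PUE, but 420 kW less total draw - the ratio alone misses the waste.
```

This is why waste hunting has to run alongside PUE tracking: the metric rewards lean overhead, not lean IT.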
PUE can be a useful metric for governance of a data center, even beyond pure energy efficiency. A data center operator that maintains a low PUE, with an active energy management program in place, demonstrates not only a focus on energy efficiency but also effective service delivery within a framework of good governance, returning value to the application owners.