
Aug 17, 2012
Data Center

Pull the Plug on Excessive Data Center Costs

IT managers find that they can take advantage of new technologies and techniques to reduce the huge costs of powering and cooling their facilities.

Data centers are notorious power burners. They need electricity to fuel servers, storage systems, networking devices and all of the other associated IT equipment.

But that’s only half of the story.

Keeping the environment cool enough to run reliably is even more costly. Temperature management systems represent as much as half of most data center utility costs, says Steve Carlini, global director of data center solution marketing at Schneider Electric, a provider of data center power and cooling products.

The challenge will only increase as demand grows for additional IT resources and data center managers struggle to find the power to support them. A study released last year by a Stanford University professor estimated that energy usage grew 36 percent in U.S. data centers from 2005 to 2010.

But there’s good news: New power-saving and measurement technologies, along with maturing best practices, can help IT managers implement comprehensive strategies to better rein in energy costs.

Efficiency Goes Mainstream

A growing number of IT leaders realize that to cut costs, they must focus on their data centers’ energy bills. For example, 54 percent of IT professionals say they have implemented or are developing programs to manage power demands in their data centers, according to the CDW-G 2012 Energy Efficient IT Report.

In addition, an impressive 75 percent of the organizations with programs in place have seen reductions in IT energy costs. This helps explain why survey respondents reported that 32 percent of their data center purchases in the past few months could be classified as energy-efficient or environmentally oriented in some way.

Where is the money going? The survey showed that the most common energy-efficient solutions are virtualization for servers or storage, server consolidation, low-power/low-wattage processors, devices that qualify for the U.S. government’s ENERGY STAR program, power-efficient networking equipment and energy-efficient/load-shedding uninterruptible power supplies (UPS).

In time, cloud computing could play a significant role in cutting the energy costs of organizations. Sixty-two percent of the IT professionals surveyed consider cloud computing an energy-efficient approach to data center consolidation — up from 47 percent in 2010.

The survey also identified some lingering roadblocks to greener data centers. In particular, respondents noted that they need information and measurement tools to help assess energy use, potential savings and the results of their investments.

What else can IT managers do to squeeze more power savings out of today’s data centers? Industry experts advise taking a three-step approach to greater power and cooling efficiency.

Step #1: Accurately Measure Energy Usage

Diligently monitoring power consumption rates will provide a baseline that can identify how and where to target conservation efforts. To do this, some IT organizations choose to hire third-party consultants to perform a comprehensive assessment of current usage patterns and estimate consumption growth rates over a three-year period, says Jack Pouchet, director of energy initiatives at Emerson Network Power. “Organizations can then create a plan to improve what they have today and make plans for what devices are added later.”

For ongoing insight, many IT managers are installing tools for real-time power monitoring. These tools provide far better information than monthly utility bill summaries, which at best offer only a “rearview mirror” look at usage patterns.

“Data center managers can start by looking at power usage rates at the UPS level as a benchmark,” Pouchet says.

For more details, organizations that are conscientious about monitoring energy usage install server-rack metering to understand how much power each rack is using. “Then, if they add five more servers to a rack, they can determine how much more power is being used,” explains David Hutchison, president of Excipio Consulting. “Organizations can also capture those metrics and track trends over time.”

This type of comprehensive data lets IT administrators and facilities managers coordinate energy strategies. The data also give administrators a broader picture of their energy needs for budget-planning purposes.

In addition, the information can serve as an early warning to alert IT managers that server racks are approaching the limits of their power-supply threshold. For example, as virtualization and blade servers enable IT departments to more densely pack servers into racks, available kilowatt capacities must keep pace, or organizations face expensive downtime.
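To make that concrete, the following is a minimal sketch, in Python, of what rack-level metering with trend capture and an early-warning threshold might look like. The meter-reading function, rack names and capacity figures are hypothetical placeholders, not a real monitoring API; a production deployment would poll metered power distribution units over SNMP or a vendor interface.

```python
from collections import deque
from statistics import mean
import random

# Hypothetical stand-in for a rack-level meter query; a real deployment
# would poll a metered PDU over SNMP or a vendor API instead.
def read_rack_power_kw(rack_id: str) -> float:
    return 4.0 + random.uniform(-0.5, 0.5)  # simulated reading in kilowatts

RACK_CAPACITY_KW = {"rack-01": 6.0, "rack-02": 9.0}  # provisioned supply per rack
ALERT_THRESHOLD = 0.80  # warn when draw exceeds 80 percent of capacity

# Keep a day's worth of 5-minute samples per rack for trend tracking.
history = {rack: deque(maxlen=288) for rack in RACK_CAPACITY_KW}

def poll_once() -> None:
    for rack, capacity_kw in RACK_CAPACITY_KW.items():
        kw = read_rack_power_kw(rack)
        history[rack].append(kw)
        baseline = mean(history[rack])  # rolling baseline for budgets and trends
        if kw > capacity_kw * ALERT_THRESHOLD:
            print(f"{rack}: {kw:.1f} kW exceeds {ALERT_THRESHOLD:.0%} of its "
                  f"{capacity_kw:.0f} kW supply (baseline {baseline:.1f} kW)")

poll_once()
```

Run on a schedule, each sample extends the rolling baseline that budget planning and trend reports can draw on.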

Accurate consumption statistics can also identify whether data centers are paying for “captive power” — excess resources misallocated to one location in a facility while another is starved for electricity. One common example is a server rack that’s being fed by multiple 3-kilowatt power lines (say, three, for 9 kilowatts of capacity), even though monitoring equipment shows the unit never pulls more than 6 kilowatts. Armed with such information, administrators could redirect one of the lines to give a second rack additional capacity without increasing the overall utility bill.
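A similar check can flag captive power automatically. Here is a brief sketch under the same assumptions, with illustrative rack data that mirrors the example above:

```python
# A minimal captive-power check, assuming monitoring has recorded each
# rack's provisioned feed lines and observed peak draw (figures are
# illustrative, mirroring the example above).
LINE_KW = 3.0  # each feed line delivers 3 kilowatts

racks = {
    "rack-A": {"lines": 3, "peak_kw": 6.0},  # 9 kW supplied, 6 kW peak draw
    "rack-B": {"lines": 2, "peak_kw": 5.8},  # 6 kW supplied, nearly maxed out
}

for name, rack in racks.items():
    supplied_kw = rack["lines"] * LINE_KW
    headroom_kw = supplied_kw - rack["peak_kw"]
    if headroom_kw >= LINE_KW:  # an entire line could be redirected elsewhere
        print(f"{name}: {headroom_kw:.1f} kW of captive power; one "
              f"{LINE_KW:.0f} kW line could serve a starved rack instead")
```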

Finally, real-time monitoring can help IT managers spot underutilized servers that continue to draw high percentages of power. Decisions can then be made to decommission them or expand their workloads.

“There are still a number of organizations out there that can do more with server virtualization to improve utilization,” Hutchison says. “If every server in the data center consumes 500 watts, and some of them are sitting at 5 percent utilization, you are wasting a significant amount of power.”

By moving 30 or 40 virtual machines to a single physical host, IT administrators can push utilization rates close to 70 or 80 percent, he says. “With fewer physical servers, you free up physical space and reduce power needs.”
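The arithmetic behind that claim is easy to verify. The quick calculation below uses the figures Hutchison cites; the server count, host count and flat 500-watt draw are simplifying assumptions for illustration:

```python
# Rough consolidation savings using the figures above; the server count,
# host count and flat 500 W draw are simplifying assumptions.
servers_before = 40
watts_per_server = 500
before_kw = servers_before * watts_per_server / 1000  # 20 kW, mostly idle

hosts_after = 2  # roughly 20 VMs per host at 70-80 percent utilization
after_kw = hosts_after * watts_per_server / 1000      # 1 kW (ignores the extra
                                                      # draw at higher utilization)

hours_per_year = 8760
kwh_saved = (before_kw - after_kw) * hours_per_year
print(f"Roughly {kwh_saved:,.0f} kWh avoided per year")  # ~166,000 kWh
```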

Additional benefits come from replacing legacy hardware with servers that meet the specifications of the U.S. government’s ENERGY STAR program, which has steadily tightened its power allowances. Previously, a 1U rack server could get by drawing 100 watts while idle; under today’s ENERGY STAR guidelines, it may draw no more than 55 watts. Replacing servers with newer models that use roughly half the power at idle can therefore quickly pay back the capital costs and then keep delivering savings for the life of the servers.
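As a rough sketch of what the idle-power reduction alone is worth per server, the calculation below applies an assumed utility rate and a common cooling rule of thumb; neither figure comes from the article’s sources:

```python
# Per-server savings from the idle-power reduction above (100 W legacy vs.
# 55 W under current ENERGY STAR limits). The utility rate is an assumed
# figure for illustration only.
watts_saved = 100 - 55
hours_per_year = 8760
price_per_kwh = 0.10  # assumed commercial rate, dollars per kWh

kwh_saved = watts_saved * hours_per_year / 1000  # ~394 kWh per server per year
annual_savings = kwh_saved * price_per_kwh       # ~$39 per server per year

# Every watt not dissipated is a watt the cooling plant need not remove,
# so the effective benefit is commonly taken to be roughly double.
print(f"~${annual_savings * 2:.0f} avoided per server per year, including cooling")
```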

Step #2: Target Areas for Greater Efficiency

Because cooling systems represent a large share of the energy budget, many data center managers are rethinking their cooling strategies. The traditional approach is to use air handlers that pump cold air under raised floors or into the room at large. The problem? Cooling systems must be set to cool the entire data center based on the temperature of its hottest rack. Obviously, that’s overkill.

Spot cooling is an option that works well with densely spaced server racks. New options combine fans and coils of piping that contain refrigerant. Together, they work to bring cool air as close as possible to heat-generating sources. IT managers can mount them wherever they’re needed most — on the tops, sides or backs of racks or above a row of servers.

Modularity is another selling point for these systems. Organizations can quickly reconfigure refrigerant piping to accommodate new equipment or redirect cooling to racks that experience heat spikes due to heavy usage.

Targeted cooling also relieves some of the energy demands associated with large under-floor fans that distribute cool air throughout data centers. Those fans draw significant power. Spot-cooling techniques instead use smaller fans that sit directly above racks; because they don’t have to push air as far, they require less energy.
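The fan affinity laws suggest why shorter air paths pay off: airflow scales linearly with fan speed, but fan power scales with the cube of speed, so a fan that needs to move air only a short distance can run slower and draw disproportionately less energy.

```latex
% Fan affinity laws: airflow Q scales linearly with fan speed N,
% while fan power P scales with the cube of the speed.
\frac{Q_2}{Q_1} = \frac{N_2}{N_1},
\qquad
\frac{P_2}{P_1} = \left( \frac{N_2}{N_1} \right)^{3}
```

Halving a fan’s speed, for example, cuts its power draw to roughly one-eighth.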

Modular cooling systems are especially effective when they’re paired with cold-aisle containment strategies. These containment systems enclose a row of server racks to seal in cold air and keep it where the potential for hot spots is highest rather than letting the conditioned air float off into the room at large.

“Containment prevents the mixing of air within the data center, and that significantly increases the effectiveness and the performance of whatever cooling techniques are in use,” Carlini says.

Another option is to position servers so that the hot sides of two racks face each other. Fans or containment systems can then direct the combined hot air to chimneys, where it is vented outside or passed through chillers that cool the air and recycle it back to the server rack.

Step #3: Don’t Overlook Obvious Opportunities

Although there are many power-saving innovations becoming available to data centers, don’t overlook some simple and low-cost ways to reduce expenses, advises Craig Watkins, product manager for racks and cooling systems at Tripp Lite.

For starters, he suggests that IT administrators look closely at the mass of cables grouped in the back of servers. “If your airflow is going out the back of the server, and you have a rat’s nest of cables in the back, the hot exhaust will be hitting that blockage of cables,” he points out. “The hot air will start backing up to your servers, and you won’t get the appropriate amount of free air for cooling for the servers.”

Another strategy is to raise the maximum temperature setting in the data center. Conventional wisdom previously called for setting the thermostat at 64 degrees Fahrenheit, which consultants say no longer applies when concentrated cooling methods are in play.

“People are finally catching on that you don’t have to run your data center at 60-some degrees,” Hutchison says. “Today’s IT equipment is much more resilient and some organizations are building data centers that don’t even use cooling — they just pull air from the outside and push it through the data center.”

The right setting for each data center depends on a number of factors, such as the cooling methodology in place and the average outside air temperatures.

With the right real-time monitors, cooling techniques, containment systems and best practices, IT managers can not only identify the right temperature but also achieve it at the right price.