
Aug 03 2012

How DCIM Can Help Better Manage Data Centers

Agencies find that running an efficient data center is as much about power and cooling as it is about servers.

As agencies make progress toward the goals of the Federal Data Center Consolidation Initiative, they are arriving at an interesting conclusion: Running an efficient data center is as much about the systems that power and cool the room as it is about the servers themselves.

The House of Representatives learned this lesson after implementing data center infrastructure management (DCIM) tools. For eight years, the House had employed various methods to monitor and maintain its data center infrastructure, but after installing the SynapSense Active Control DCIM system, it was able to raise the temperature set point of its computer room air conditioning (CRAC) units from 68 to 76 degrees Fahrenheit, resulting in a significant reduction in energy costs.

“This is more than likely the most measurable aspect of incorporating a DCIM solution, effectively reducing the cost of cooling and increasing the longevity of CRAC units,” says Richard Zanatta, director of networks and facilities in the Office of the Chief Administrative Officer, House of Representatives. “The installation and deployment of SynapSense provides our facilities management team with real-time information on current environmental conditions within the data centers and alerts as conditions change.”

Interest in DCIM tools has spiked recently because of the dramatic savings the technology can provide. “For the last 40 years, IT managers have basically had one key performance indicator,” says David Cappuccio, chief of research for IT infrastructures at Gartner, explaining that data center operators have strived for maximum uptime. The resources that data center equipment consumed were a lower priority, but DCIM tools have helped show IT shops how to eliminate underutilized capacity. “It’s not just about managing IT equipment,” Cappuccio adds. “It’s about the whole facility.”

Nearly 30 vendors offer sophisticated suites of DCIM solutions, Cappuccio says. These hardware and software tools can capture, report and analyze specific data from both IT equipment and facilities systems — everything from power consumption and temperature to server utilization. IT administrators at numerous agencies — among them NASA and the Energy Department’s Lawrence Berkeley National Laboratory (LBNL), as well as the House — are using these tools to track resource utilization and make adjustments to keep data centers at peak efficiency.

DCIM meters can identify redundant CRAC units, inefficient placement of floor tiles or air handlers, broken or worn fan belts, optimal temperature and humidity levels, and more. In addition to lowering energy bills, DCIM can help organizations get more out of their floor space and equipment so they may not need to build more or larger data centers, Cappuccio adds.

“Once these systems are in place,” says Geoffrey C. Bell, senior energy engineer at the Berkeley Laboratory, “IT can never do without them again.”

Step 1: Take Inventory

20%: The amount by which operating expenses can be lowered through energy savings in an efficient data center (Source: Gartner)

After the Federal Data Center Consolidation Initiative was announced, NASA officials conducted a walkthrough of all the agency’s data centers to try to get a handle on the infrastructure and how effectively it operated. That, however, was easier said than done. NASA’s data centers are scattered throughout campuses in multiuse buildings, so it was hard to trace all the electrical grids and determine what was feeding the data centers, explains Karen Petraska, service executive for computing services at NASA.

“Because our buildings are old, I knew that this was going to be challenging,” she says. “It pretty much met my expectations.”

One thing she hadn’t anticipated: “We had a good bit of the facility metering in place, so that turned out to be a happy win for us,” Petraska says.

The trick was to figure out how to get data out of heating and air conditioning systems, chillers, power distribution units and other facilities equipment and into the IT network so the agency could work with aggregate data. “There was a learning curve,” Petraska says. The facilities team, which understood the equipment, and IT administrators, who knew data protocols, worked together with equipment manufacturers to figure it out.
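A hedged sketch of the kind of normalization that work implies: facilities gear reports in its own formats and units, so each reading gets mapped into a common record the IT side can aggregate. The record shape, field names and Celsius-to-Fahrenheit conversion below are illustrative assumptions, not NASA's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TelemetryRecord:
    """Common shape for readings from both IT and facilities equipment."""
    source: str      # e.g. "chiller-3", "pdu-b2", "crac-01"
    metric: str      # e.g. "supply_temp_f", "power_kw"
    value: float
    timestamp: datetime

def from_chiller(raw):
    # Hypothetical raw payload from a building-management system that
    # reports supply temperature in Celsius; convert to the common record.
    return TelemetryRecord(
        source=raw["unit_id"],
        metric="supply_temp_f",
        value=raw["supply_temp_c"] * 9 / 5 + 32,
        timestamp=datetime.now(timezone.utc),
    )

record = from_chiller({"unit_id": "chiller-3", "supply_temp_c": 7.5})
print(record)   # one normalized record, ready to aggregate with IT-side metrics
```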

Once NASA got a handle on what it had and how it worked, it announced last September that it would close 57 of its 79 data centers by 2015. The facilities and IT teams focused on maximizing the efficiency of the 22 data centers that would remain.

They’re now installing metering equipment in those data centers so they can calculate power usage effectiveness (PUE), the ratio of total facility power to the power consumed by the IT equipment itself. “Then you can see the energy impact of the changes you make,” Petraska says.
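PUE is simply total facility power divided by IT equipment power, so a value near 1.0 means nearly all the power reaches the servers. A minimal sketch of the calculation, using invented meter readings rather than NASA's figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings in kilowatts -- not actual NASA figures.
total_kw = 620.0   # utility feed: IT load plus cooling, lighting and distribution losses
it_kw = 400.0      # power delivered to servers, storage and network gear

print(f"PUE = {pue(total_kw, it_kw):.2f}")   # prints PUE = 1.55
```

Tracking the ratio before and after a change, such as raising CRAC set points or closing floor tiles, shows whether the change actually reduced cooling overhead.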

Step 2: Observe Your Resources

MORE ON DCIM: Check out our webinar at fedtechmagazine.com/812dcim.

The Berkeley Laboratory’s IT team was convinced that one of its older data centers had maxed out its capacity. “They wanted to move more and more servers into the data center, but no matter what they did, they couldn’t control the temperature,” Bell explains.

They put fans around the floor for cooling, but they were still getting warnings about overheated systems. “They were at their wit’s end,” Bell says.

About three years ago, LBNL installed a SynapSense wireless sensor network with about 800 points to measure power, temperature, under-floor air pressure and humidity. The data let them see the problem. “There was too much mixing of air,” explains Bell. The cold and warm air streams weren’t being segregated. “Most data centers have a huge air-management problem.”
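A minimal sketch of how readings like these can expose air mixing, assuming hypothetical sensors tagged by hot or cold aisle; when the gap between the two averages narrows, exhaust air is likely recirculating into the cold aisle:

```python
from statistics import mean

# Hypothetical readings in degrees Fahrenheit from a wireless sensor network;
# each tuple is (sensor location, aisle, temperature).
readings = [
    ("rack-a1", "cold_aisle", 64.5), ("rack-a2", "cold_aisle", 71.8),
    ("rack-a3", "cold_aisle", 73.0), ("rack-b1", "hot_aisle", 82.1),
    ("rack-b2", "hot_aisle", 79.4),  ("rack-b3", "hot_aisle", 76.9),
]

cold = [t for _, aisle, t in readings if aisle == "cold_aisle"]
hot = [t for _, aisle, t in readings if aisle == "hot_aisle"]

# A well-separated hot/cold aisle layout keeps a wide temperature spread;
# a narrow spread suggests hot exhaust is mixing back into the cold aisle.
spread = mean(hot) - mean(cold)
print(f"cold aisle {mean(cold):.1f} F, hot aisle {mean(hot):.1f} F, spread {spread:.1f} F")
if spread < 15:
    print("Warning: possible air mixing -- check containment and floor tile placement")
```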

The data center had cold air coming in from above and below, so the engineers redirected the cold air from above to the under-floor system. They also increased pressure under the floor, which improved distribution throughout the space, eliminating pockets of hot and cold air.

“IT thought they were running out of power and cooling, but it turned out that there was more than enough of each,” says Bill Tschudi, group leader for the high-tech buildings and industrial systems group at LBNL.

Implementing a good management system at a data center before trying to address problems allows the IT shop to see the effects of any changes it makes, Bell says. Wireless sensors provide enough flexibility that agencies can relocate them as they take out or move racks. “You just literally stick these things on the server racks,” says Bell. “It’s quite quick and quite magical.”

Step 3: Adjust Toward Efficiency

Once the lab’s wireless sensor network was in place and producing stable data, one set of sensors showed a spike in temperature. It turned out that a technician had left a service cart in front of the server rack, blocking the flow of cold air. “It’s immediate feedback,” Tschudi says. “It’s not guesswork.”
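A hedged sketch of the sort of check that could provide that immediate feedback, assuming each rack sensor keeps a short history of inlet temperatures; the names and thresholds are illustrative:

```python
from collections import deque

class InletSensor:
    """Tracks recent inlet temperatures for one rack and flags sudden spikes."""

    def __init__(self, rack_id, window=12, spike_threshold_f=5.0):
        self.rack_id = rack_id
        self.history = deque(maxlen=window)   # most recent readings, degrees F
        self.spike_threshold_f = spike_threshold_f

    def record(self, temp_f):
        # Flag a reading that jumps well above the recent baseline -- for example,
        # a cart or box suddenly blocking cold air to the rack.
        if len(self.history) >= 3:
            baseline = sum(self.history) / len(self.history)
            if temp_f - baseline > self.spike_threshold_f:
                print(f"ALERT {self.rack_id}: {temp_f:.1f} F vs. baseline {baseline:.1f} F")
        self.history.append(temp_f)

sensor = InletSensor("rack-c4")
for t in (72.0, 72.3, 71.8, 72.1, 79.6):   # last reading jumps about 7.5 F above baseline
    sensor.record(t)
```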

Monitoring the sensors was a learning experience. The IT team had considered opening tiles in the floor to increase airflow to hot spots, but the sensors showed that this strategy had the opposite effect. The goal, says Tschudi, is to close tiles in overcooled areas to increase under-floor air pressure for better distribution.

The measurements also showed that turning off the humidity controls on the CRAC units barely made a difference in the humidity level in the data center, yet it cut the CRAC units’ energy usage by 28 percent.

DCIM tools also can be valuable in terms of maintenance, adds Petraska. NASA set up the system to alert administrators if something’s not working.
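One simple way to build that kind of maintenance alert is to flag equipment that has stopped reporting. A sketch under assumed device names and intervals, not NASA's actual configuration:

```python
import time

# Hypothetical map of facilities devices to the time of their last report.
last_report = {
    "crac-01": time.time() - 120,    # reported 2 minutes ago
    "crac-02": time.time() - 5400,   # silent for 90 minutes
    "pdu-a": time.time() - 60,
}

MAX_SILENCE_SECONDS = 15 * 60   # treat anything quiet for more than 15 minutes as suspect

def stale_devices(reports, now=None):
    """Return the devices that have not checked in within the allowed window."""
    now = time.time() if now is None else now
    return [name for name, ts in reports.items() if now - ts > MAX_SILENCE_SECONDS]

for device in stale_devices(last_report):
    print(f"ALERT: {device} has stopped reporting -- dispatch facilities staff")
```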

The fixes don’t need to be complex or expensive. Once the monitoring system was in place at the House of Representatives, all it took was some low-tech hardware, such as an air chase, to raise the temperature in the data center, says Zanatta.

“Start simple,” he advises. “Environmental monitoring first, add in power and then move into the management of copper and fiber systems.”
