At NIH, Gail Williams took part in launching a pilot to test centralized power management of PCs as a way to gain energy efficiencies.
Nov 07 2008

Green by Any Means

Feds find that a host of remote-management and IP tools help them rein in their IT energy use.

If you can’t stand the heat, stay out of the … office. Then, once you’re out, use remote power management tools to improve energy efficiency and reduce heat output.

It’s easier now because several distinct but overlapping technologies let IT administrators make two energy-hogging areas — desktops and data centers — a little cooler and a lot greener. “In my office, with four computers running, it had always been 90 or 100 degrees,” says Carlo Merhi, an IT technician at the National Institutes of Health’s Office of Research Services (ORS) and Office of Research Facilities (ORF). “Now, when you walk in, you can feel it’s much cooler.”

The four PCs in Merhi’s office are among approximately 4,000 that ORS and ORF now turn off at night while piloting centralized power-management software, says Gail Williams, acting CIO for Research Services. If the pilot is successful, NIH will likely centralize power management of additional PCs across its Bethesda, Md., campus.

The reason is simple: Fewer hard drives spinning and monitors glowing will add up to less electricity used. “NIH is very complex,” says Williams. “What we’re trying to do is to have an enterprise tool that can be used across all our centers. Right now, we have over 20 centers that run their own IT departments.”

NIH stands to save by cutting the kilowatt-hours of energy it uses, which would also reduce the carbon dioxide released into the atmosphere and the electricity needed to cool the surrounding air.
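A back-of-envelope calculation shows why. The figures below are illustrative assumptions, not NIH measurements, but a few lines of Python make clear how quickly nightly shutdowns add up across 4,000 PCs:

    # Rough estimate of savings from nightly PC shutdowns.
    # All inputs are illustrative assumptions, not NIH measurements.
    PCS = 4000              # machines in the pilot
    WATTS_IDLE = 80         # assumed draw of an idle PC plus monitor, in watts
    HOURS_OFF_PER_DAY = 14  # assumed overnight hours now powered down
    DAYS_PER_YEAR = 260     # working days
    RATE_PER_KWH = 0.10     # assumed electricity cost, dollars per kWh

    kwh_saved = PCS * WATTS_IDLE * HOURS_OFF_PER_DAY * DAYS_PER_YEAR / 1000
    print(f"Energy saved: {kwh_saved:,.0f} kWh per year")
    print(f"Cost avoided: ${kwh_saved * RATE_PER_KWH:,.0f} per year")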

“As energy conservation manager, this is one of the things I saw coming several years ago,” says NIH’s Greg Leifer. Leifer is an energy engineer with NIH’s Division of Engineering Services. Until now, though, most of Leifer’s energy-efficiency projects — the Energy Department credits him with saving the NIH millions of kilowatt-hours of energy since 2001 — have been confined to capital-equipment upgrades.

At the desktop, power-management software functions, in part, by centrally controlling the power settings that reside within each networked computer’s operating system. Network administrators can set policies, such as when to put a PC into a low-power sleep mode, based on usage patterns reported by client-side agents installed on each desktop. For example, IT might force typical office workers’ systems into sleep mode at 6 p.m., while excluding those of scientists dependent upon around-the-clock computing to run complex models.
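A minimal sketch of such a policy check might look like the following Python; the exemption list, the 6 p.m. cutoff and the idle threshold are illustrative assumptions, not a description of any particular product:

    from datetime import datetime, timedelta

    # Machines excluded from forced sleep, e.g. systems running long computations.
    EXEMPT_HOSTS = {"modeling-01", "modeling-02"}

    SLEEP_AFTER_HOUR = 18                     # enforce the policy at or after 6 p.m.
    IDLE_THRESHOLD = timedelta(minutes=30)    # how long a PC must sit idle first

    def should_sleep(hostname, last_activity, now=None):
        """Decide whether a client-side agent should put its PC into sleep mode.

        last_activity is the most recent user input reported by the agent.
        """
        now = now or datetime.now()
        if hostname in EXEMPT_HOSTS:
            return False                      # around-the-clock systems stay up
        if now.hour < SLEEP_AFTER_HOUR:
            return False                      # only enforce the policy after hours
        return (now - last_activity) >= IDLE_THRESHOLD

    # Example: an ordinary desktop idle since 5:20 p.m., checked at 6:05 p.m.
    print(should_sleep("desktop-114",
                       datetime(2008, 11, 7, 17, 20),
                       now=datetime(2008, 11, 7, 18, 5)))   # True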

Large Draw

At the data center, administrators are leveraging existing technology, including Intelligent Power Distribution Units (iPDUs); remote lights-out management (LOM) and Intelligent Platform Management Interface (IPMI) solutions; and keyboard/video/mouse-over-Internet Protocol (KVMoIP) solutions.

Recent converts include the Bureau of Land Management, which deployed an iPDU solution, and the White House’s Office of Administration, which has deployed more than 50 iPDUs in multiple locations over the past two years.

Other federal agencies and bureaus are considering their remote-power management options, as well. “The current location’s infrastructure is somewhat lacking, so we are in fact looking at our options,” says Jonathan Cho, IT program manager at the Peace Corps. “Power monitoring and management capabilities are requirements feeding into that analysis.”

Cho, like many network administrators, has used iPDUs primarily to control servers at the port level: turning AC outlets on and off, scheduling power shutdowns and, during reboots, sequencing which ports come back online to prevent overloads. Now, as server racks become denser and hotter, Cho and his peers are looking to use iPDUs for environmental monitoring and management. That’s because the newest iPDUs can measure humidity and temperature inside the cabinets and automatically e-mail or text-message administrators when dangerous thresholds are reached.
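A minimal sketch of that threshold-and-alert loop, in Python, might look like the following; the read_cabinet_temperature helper, the 95-degree limit and the mail addresses are placeholders rather than details of any particular iPDU, which would typically expose its readings over SNMP or a web interface:

    import smtplib
    import time
    from email.message import EmailMessage

    TEMP_THRESHOLD_F = 95                         # assumed cabinet temperature limit
    ADMIN_EMAIL = "datacenter-ops@example.gov"    # placeholder address

    def read_cabinet_temperature(pdu_address):
        """Placeholder: query the iPDU's cabinet temperature probe.

        A real implementation would poll the unit over SNMP or its web API.
        """
        raise NotImplementedError

    def alert(subject, body):
        # E-mail the on-call administrators when a threshold is crossed.
        msg = EmailMessage()
        msg["From"] = "ipdu-monitor@example.gov"
        msg["To"] = ADMIN_EMAIL
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP("mail.example.gov") as server:   # placeholder mail relay
            server.send_message(msg)

    def monitor(pdu_address, interval_seconds=60):
        while True:
            temp = read_cabinet_temperature(pdu_address)
            if temp >= TEMP_THRESHOLD_F:
                alert("Cabinet temperature alarm",
                      f"{pdu_address} reports {temp} F; threshold is {TEMP_THRESHOLD_F} F")
            time.sleep(interval_seconds)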

Administrators can then send someone onsite or, if the iPDU is connected to a separate software system, they can control cooling equipment within the data center and avert server crashes. Also, if they have multiple data centers, they can shift the electrical and data processing load from one site to another, to take advantage of the best off-peak electricity rates.

Among the manufacturers that sell iPDUs are American Power Conversion, Avocent, Raritan and Server Technology.

Lights Off, Energy Management On

Lights-out management’s primary purpose is cost-effectiveness: LOM lets administrators monitor and troubleshoot remotely, rather than sending a technician on the road. LOM solutions include a hardware component that keeps track of such variables as processor temperature and a software component that lets administrators remotely switch server power on and off, change the speed of server fans and reinstall an OS.

LOM solutions save energy indirectly by reducing the need to deploy technicians. But those same features are now being tapped to save energy directly.

Fact: $18,000 in annual savings the Federal Reserve Bank of Dallas realized through its power management efforts

Hewlett-Packard, for example, incorporates its iLO Advanced lights-out management program into a more powerful tool, called HP Insight Control Environment. Administrators can use it to remotely monitor and regulate electrical use at the server level to improve efficiency.

Similarly, IBM Director is available with an extension called Active Energy Manager (AEM). The new software tracks energy consumption in data centers and helps customers monitor power usage and make adjustments to improve efficiency and reduce costs.

Recently, IBM and Emerson Network Power announced that they would integrate Emerson’s Liebert SiteScan software with IBM AEM. Liebert SiteScan uses a web-based server to capture and analyze energy consumption from data center equipment.

Servers that take advantage of LOM rely on a service processor to provide diagnostic and maintenance information. The processor employs IPMI and operates independently of the OS so that administrators can manage a system remotely, even in the absence of the OS or system management software and even if the monitored system is not powered on.

Almost every leading systems manufacturer has begun incorporating IPMI into its servers.
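Because IPMI is a standard interface, routine out-of-band queries and power commands can be scripted against the service processor with a common open-source tool such as ipmitool. The sketch below wraps two everyday operations in Python; the host name and credentials are placeholders:

    import subprocess

    BMC_HOST = "bmc01.example.gov"     # placeholder service-processor address
    BMC_USER = "admin"                 # placeholder credentials
    BMC_PASS = "secret"

    def ipmi(*args):
        """Run an ipmitool command against the service processor over the LAN."""
        cmd = ["ipmitool", "-I", "lanplus",
               "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # These work even if the managed server's OS is down or the box is powered off,
    # because the service processor answers on its own.
    print(ipmi("chassis", "power", "status"))     # e.g. "Chassis Power is on"
    print(ipmi("sdr", "type", "temperature"))     # temperature sensor readings

    # A graceful, remotely initiated shutdown to shed load after hours:
    # ipmi("chassis", "power", "soft")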

“The clever bit comes when infrastructure management vendors write code for appliances to translate all these [interfaces] to a common interface,” says Pierre Ketteridge, an engineer with IP Performance, a U.K. consultancy. “And then you have all the intelligent power management devices, environmental monitoring and control. The key part is unified network management and control of these disparate technologies.”

Avocent’s MergePoint 5200 Service Processor Manager was among the first such devices on the market. It allows network administrators to use one tool to manage a mix of remote diagnostic platforms, such as HP’s iLO, IBM Remote Supervisor Adapter II, Sun Advanced Lights Out Manager and IBM BladeCenter.

The Big Switch

KVMoIP switches are still used primarily to increase security. Managers can lock down equipment so that only authorized personnel can perform management operations on servers and network devices.

Because a data center can be controlled remotely, fewer people need physical access to sensitive equipment. Those who enter, whether they do so in person or online, leave an auditable trail. Increasingly, KVMoIP switches are also being used to reduce energy demand.

KVMoIP allows control of the VGA, PS/2 and USB ports via an IP Ethernet link. Among other things, a KVMoIP solution reduces the number of rack consoles and lets multiple support personnel work simultaneously on the same server from their workstations. Fewer consoles mean less demand for electricity and less heat in the data center.

At the Federal Reserve Bank of Dallas, “we are looking at an annual reduction in cost of 19 percent,” says Sam Mcgarrity, a System Center Configuration Manager specialist at Raytheon Technical Services who has worked at the bank since 2003.

Mcgarrity achieved the savings with KVMoIP first by reducing the number of monitors and keyboards; second, by powering down equipment not used overnight; and third, by reducing cooling costs in a server room and a testing lab.

“We got everything in 1U or 2U racks,” he says. “Our cooling unit is set to 62 degrees in a 1,000-square-foot room, and with electrical monitoring by my team and the electric company, we reduced cooling costs by 12 percent and overall costs by 21 percent.” Since finishing the project in August, Mcgarrity says, the room has remained fully functional and can be sustained by a handful of people.

Photo: James Kegley