First, a quick quiz:
• Do your data center information technology and facilities departments have separate budgets?
• Are your data center operators unable to track electricity used by each piece of equipment in real time?
• Does your data center charge users per rack or per square foot of raised floor?
• Does your IT department buy equipment based on lowest purchase price?
If you answer “Yes” to one or more of these questions (and most agencies will), your data center suffers from perverse incentives. The result is often low server utilization, dismal cooling efficiency and a data center that costs a lot more than it needs to.
Perverse incentives occur when people don’t pay the true price for their actions. So if the IT department doesn’t benefit from buying efficient servers because the energy savings accrue only to the facilities budget, the IT folks won’t pay even an additional dollar for an energy-efficient server, even if spending that dollar would save $5 or $10 in total costs.
The solution to such problems is partly technological: Buy more sensors and computers to track all the components of total expenses, including energy and other operating outlays. But attention should be paid to the human and institutional side of the equation as well. Organizations need to restructure themselves to reflect a new goal: minimizing the total cost of delivering computing services — and that total cost needs to include infrastructure as well as IT costs.
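To see how the split-budget incentive plays out in dollars, consider a minimal total-cost sketch. All the figures below (purchase prices, wattages, electricity rate, four-year life, PUE of 2.0) are illustrative assumptions, not measured data:

```python
# Hypothetical total-cost comparison: a slightly pricier but more efficient
# server versus a cheaper, power-hungry one. All numbers are assumptions
# chosen for illustration only.

def total_cost(purchase_price, watts, years=4, kwh_price=0.10, pue=2.0):
    """Purchase price plus lifetime electricity, with the cooling and
    power-distribution overhead captured by the PUE multiplier."""
    hours = years * 8760  # hours of continuous operation
    energy_cost = watts / 1000 * hours * kwh_price * pue
    return purchase_price + energy_cost

standard = total_cost(purchase_price=3000, watts=400)
efficient = total_cost(purchase_price=3100, watts=300)  # $100 more up front

print(f"standard:  ${standard:,.0f}")   # $5,803
print(f"efficient: ${efficient:,.0f}")  # $5,202
print(f"savings:   ${standard - efficient:,.0f}")
```

Under these assumptions, the extra $100 of purchase price returns roughly $700 in avoided energy costs, about $7 back per dollar spent. But if that $700 lands in the facilities budget while the $100 comes out of IT's budget, IT has no reason to spend it.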
To manage these problems, agencies can:
1 Make one person responsible for data center design, construction and operations, and have all relevant departments report to that person. Unambiguous lines of responsibility and authority clarify the mind wonderfully.
2 Create a simple and transparent model of total data center costs, so that people within the agency know what the most important metric is for success. The data center should use the total-cost model in budgeting and consider using it to charge back expenses to users when possible.
3 Specify energy-efficient IT equipment using the SPECpower benchmark (www.spec.org/power_ssj2008) or the forthcoming Energy Star metric for servers, and be willing to spend more for IT equipment that reduces the data center's total cost.
4 Settle on lower-level metrics (such as server utilization and the site infrastructure energy overhead multiplier, also known as Power Usage Effectiveness, or PUE) and start measuring everything of importance. If you can’t measure it, you can’t manage it, so get to it!
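The PUE metric from step 4 is simply total facility power divided by the power reaching the IT equipment. A minimal sketch, with hypothetical meter readings:

```python
# Sketch of the PUE calculation. The two meter readings below are
# hypothetical values, not measurements from any real facility.

def pue(total_facility_kw, it_equipment_kw):
    """Total facility power over IT equipment power. 1.0 is the ideal;
    a value of 2.0 means half the electricity goes to overhead."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 1,000 kW at the utility meter, 500 kW delivered to the IT gear,
# so cooling, power distribution and lighting consume the other half.
print(pue(1000, 500))  # → 2.0
```

The point of tracking PUE continuously, rather than estimating it once, is that it exposes infrastructure overhead as a number one person can be held accountable for.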
A recent analysis that I conducted showed that for one type of data center (high-performance computing for financial applications), the capital and operating costs of the power and cooling infrastructure are now close to the capital and operating costs of the IT equipment on an annualized basis.
In addition, the number of watts of IT power use per thousand dollars of IT equipment expenditure keeps going up, which means that infrastructure costs as a fraction of total data center costs will continue to rise.
That trend guarantees that leaving these institutional problems unresolved will have ever more unpleasant results: if $1 of IT department expenditure can commit an agency to spend $2 or $3 without due consideration of total costs, the waste in both energy and dollars will only grow.