Dec. 31, 2009

Measuring Up

Agencies must prove the merit of their IT projects using metrics, but doing it right can be tricky.


When officials at the Housing and Urban Development Department first began thinking about ways they could reduce housing subsidy fraud, they knew it would probably involve information technology. They also knew that if they wanted to undertake a costly and time-consuming IT project, they would have to find a way to prove that spending the time and money would pay off.

HUD began work on the Enterprise Income Verification system in 2003 with a cross-functional team of project managers and business leaders. Their first job: Come up with metrics to make the case for EIV. Developing the metrics was particularly complex because the system had to reach into systems at other agencies, including the Social Security Administration for new-hires data and the Internal Revenue Service for reported income records, as well as provide an interface to the wage income agencies of all 50 states. All of the hard work paid off, and the system came online in August 2004.

The process HUD went through for the EIV system is the one every agency must go through: Craft metrics, use them to gather data that supports an IT investment, then detail that information for the Office of Management and Budget in an Exhibit 300 business case. Doing this well demands that agencies decide which metrics best fit the project's specific needs, establish a sound data measurement plan and analyze the data efficiently.

Although OMB does not require agencies to use specific metrics to help justify their IT projects, it does require that they fully prove the merit of those projects in the areas of finance, customer satisfaction, accuracy and efficiency.

In fact, it's exactly that point — that the Exhibit 300 sets forth no clear methodology and requires no specific set of measurement tools or criteria — that makes the process so tough. Although OMB offers guidance in its Performance Reference Model, where it provides examples of various performance measures, it's not possible to be more specific because each project is unique, requiring different metrics to prove its worth, says Doug Hubbard, president of Hubbard Decision Research Inc., a Glen Ellyn, Ill., consultancy that helps agencies develop and implement metrics.

"Depending on the project, you would need to approach it differently. Some projects might require productivity-related benefits like cost reduction and cost avoidance, near-term development and maintenance costs, and public use costs," he says. "Other projects, geared more to the public good, require metrics related to the public good. Take a project related to safer drinking water. You have to find a way to measure the public good that project will deliver, but that's really tough to measure."

Defining the Metrics

Although developing useful metrics is no easy task, many agencies have created workable methodologies and adopted useful tools to gather meaningful metrics for their IT projects.

In all cases, the first step is deciding which metrics will most accurately demonstrate the value of the IT investment. This process, more of an art than a science, works best when a cross-functional team of IT personnel and business leaders participates, usually spearheaded by the CIO Office, says Patrick Plunkett, chairman of HUD's Performance Management Community of Practice. Plunkett guides his project managers to develop useful performance measures by working with the business-line program team, then critiques the measures they produce.

John Christian, a professor of systems management at the National Defense University's Information Resources Management College, recommends that agencies first identify the desired outcome and work backward from there.

"Moving backward from outcome, you can understand what specific elements would provide the indicators you need," he says. "And moving backward from that, you can identity who would be responsible for providing the data to support those indicators. And finally, how to translate the proposed investment to those specific indicators to allow you to understand how well you are meeting your" goals.

Gathering the Data

Once the team has decided on the metrics it will use, the process of collecting the data begins. Often, the easiest metrics to collect and measure are those in the financial arena, such as total cost of ownership (TCO), cost avoidance, and near-term development and maintenance costs. The information needed often already exists somewhere in the organization, relieving the IT team of having to reinvent the wheel.

"IT project managers should go under the assumption that they don't have to start from the beginning — they just have to find what already exists," Plunkett says. "Sometimes there is data sitting in different parts of the organization that has been routinely collected — information about programs, beneficiaries, services levels, etc.

"For example, the CFO collects financial information, the Government Performance Results Act people collect information on major programs, and there are other independent measurement reporting processes."

But gathering the data gets tricky when it comes to nonfinancial metrics, such as how well a proposed system will align with the agency's enterprise architecture or how satisfied its customers are. Often no one is collecting this information, and no automated tools exist to do the job, Plunkett says. That means the agency must find a way to capture the uncapturable and measure what many regard as all but immeasurable, he says.

Part of the reason nonfinancial metrics are so difficult to capture is that the information isn't sitting in a report in the CFO's office, Hubbard says.

"Most of the really high-value metrics you need aren't just going to be calculations from passive reports you already have. They require going out into the world and making proactive observations or random samplings," he notes. "If you are trying to justify a system whose effect is reducing downtime, for example, you would really need to measure how people reallocate their time when the system is down, which takes work."

Another challenge with nonfinancial metrics is finding a workable way to measure them.

"These [metrics] can be difficult to measure, but it's important to find a way to do it, because you have to demonstrate that your project is linked all the way up to the strategic plan of the department," says Sally Good-Burton, chief of the IT Portfolio Management Division in the Interior Department's CIO Office.

Measuring elusive qualities such as accuracy and customer satisfaction requires organizations to be resourceful. Plunkett's team at HUD created "A Primer for Performance Measurement," which recommends pinpointing errors through audits, complaints, file consolidations or random discovery. Even when specific numbers don't exist, the primer notes, they can be estimated given enough data.
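The primer's sampling advice translates into standard estimation. In the hypothetical sketch below, an error rate is estimated from a random sample of case files, with a confidence interval from the normal approximation; the counts are invented for illustration.

import math

files_sampled = 400
errors_found = 22

p = errors_found / files_sampled
stderr = math.sqrt(p * (1 - p) / files_sampled)
z = 1.96  # multiplier for a 95% confidence interval
low, high = p - z * stderr, p + z * stderr
print(f"Estimated error rate: {p:.1%} (95% CI: {low:.1%} to {high:.1%})")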

Making Sense of the Metrics

Once all appropriate metrics have been collected, it's time to analyze them. For this phase, the first step generally involves creating order from the chaos by sorting the appropriate metrics using a data repository.

What Are Some Measurable Metrics?
• Alignment with your agency's enterprise architecture
• Total cost of ownership of an existing system versus a proposed system
• Customer satisfaction
• Internal productivity
• Cost avoidance
• Near-term development and maintenance costs
• Cycle times
• System availability and uptime
• User satisfaction

Many agencies, including HUD and the departments of Commerce, Education, Interior, Justice, Labor and State, use the Electronic Capital Planning and Investment Control system, a government-created Web tool that can capture and manage investment management data.

Others use commercially available applications and portfolio management tools. Many agencies ultimately also use these systems to report the final metrics to OMB.

There are several ways to analyze the collected metrics, depending on the type of data, the project and the agency's preferences.

For a simple project with easy-to-manipulate metrics, a spreadsheet might do the trick. For more complex scenarios, agencies have turned to custom-built and commercial products, including portfolio management tools, business intelligence software, performance management software and business process management software.
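At the simple end of that spectrum, the analysis may be little more than a weighted scorecard. The sketch below shows one such scheme; the investments, metrics, weights and scores are all hypothetical.

WEIGHTS = {"cost avoidance": 0.35, "customer satisfaction": 0.25,
           "EA alignment": 0.25, "cycle-time improvement": 0.15}

investments = {
    "Income verification system": {"cost avoidance": 5, "customer satisfaction": 3,
                                   "EA alignment": 4, "cycle-time improvement": 4},
    "Legacy system refresh":      {"cost avoidance": 2, "customer satisfaction": 4,
                                   "EA alignment": 3, "cycle-time improvement": 2},
}

for name, scores in investments.items():
    total = sum(WEIGHTS[metric] * score for metric, score in scores.items())
    print(f"{name}: weighted score {total:.2f}")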

But in many cases, no automated tool can do as complete a job as good old-fashioned manual evaluation combined with institutional and topical knowledge. HUD, for example, often takes this approach when analyzing its IT metrics. "HUD employees use their knowledge and judgment to evaluate the performance measures," Plunkett says.

But no matter what metrics you choose or how you choose to measure them, doing a thorough job is well worth the effort. In addition to the grand prize — getting OMB's nod on an Exhibit 300 — accurately measuring the worth of an IT project can result in time and cost savings, better service to the citizen and, perhaps most important, better accountability to Congress, OMB and taxpayers, says W. Stan Boddie, a project management professor at the National Defense University.

"It's all about knowing how effective and efficient the federal government is in spending the taxpayers' dollars — that the federal government is meeting its accountability expectations — and about having a framework that is institutionalized across the enterprise, so that everyone is on the same page," Boddie says.
