Dec 31 2009

Make Sure Your EA Measures Up



You have a solidly defined enterprise architecture, and you've made it fit into the broader scheme of the Federal Enterprise Architecture. Now, you need to figure out whether that EA is delivering expected improvements to your agency.

The FEA Program Management Office says that this year the government expects to begin delivering on the promises of EA—meaning better program management and control of systems investments measured against program performance results.

That's easier said than done, or even roughly mapped out. Essentially, agencies need to create their own scorecards for EA work, something comparable to the ratings that the Office of Management and Budget hands out quarterly for the President's Management Agenda.

You don't need to reinvent the wheel. There's plenty of help out there when it comes to EA efforts and how to analyze them, in part because this business approach to IT and performance is becoming more and more common—for both public and private organizations worldwide.

Consider an EA trends report from the Institute for Enterprise Architecture Developments (www.enterprise-architecture.info). Survey respondents from 20 nations told the Netherlands institute that they work in government, finance, consulting, utilities, telecommunications, insurance and other areas.

The institute has developed a savvy scorecard approach that, interestingly, meshes quite well with how OMB is measuring government agencies' PMA efforts. It uses red, yellow and green ratings (just like the PMA) and identifies precise categories for ranking against the color scheme.

Agencies would be smart to get to work on this effort now: this month the White House plans to issue an EA assessment framework, against which agencies will be expected to report the status of their EA programs relative to IT investment plans and fiscal 2007 budget proposals.

So how do you measure your EA efforts? You need to do three things: identify your goals and measurement plans; set simple, straightforward metrics; and amass the data and analyze the results.

Goals and a Plan

The institute's template can help you set up a measurement plan by looking at 24 elements—six levels of information in four categories.

The categories are pretty obvious: the business at hand, the information, the information systems and the technology infrastructure. The six levels, rendered as a simple grid in the sketch after this list, are:

  • contextual: why—a detailing of the mission and vision;
  • environmental: with whom—an identification of all the players and their roles;
  • conceptual: what—the determination of the goals and objectives;
  • logical: how—the plan for reaching the objectives;
  • physical: with what—a rendering of the tools that will be necessary;
  • transformational: when—a timeline of what changes will occur when.
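
One way to picture the institute's template is as a simple grid: the four categories crossed with the six levels, yielding the 24 elements. Here's a minimal Python sketch of that structure; the identifiers are illustrative stand-ins, not the institute's own labels:

    # The institute's template: four categories crossed with six levels = 24 elements.
    CATEGORIES = ["business", "information", "information_systems",
                  "technology_infrastructure"]
    LEVELS = ["contextual", "environmental", "conceptual",
              "logical", "physical", "transformational"]

    # One cell per element; each holds whatever evidence the agency gathers for it.
    template = {(category, level): None
                for category in CATEGORIES for level in LEVELS}
    assert len(template) == 24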

Most agencies probably have all this information, but perhaps not at their fingertips or in a single program or database. The ability to apply metrics depends on having timely access to the right information so you can rate it.

Keep It Simple

Because metrics only mean something if you can use them over time to rate progress or lack of it, make sure the metrics you implement are straightforward. The program teams that will be involved must understand what data they need to gather and why—and acquiring it shouldn't be a major headache.

The Housing and Urban Development Department's Patrick Plunkett, who is chairman of the CIO Council's IT Performance Management Best Practices Community, has smart advice on how to make sure that metrics are on target. He recommends that agencies keep three best practices in mind: methodologies are helpful but require great commitment; it's crucial to keep it simple and use logic models; and performance, not the measures, must be the focus.

This jibes well with the tricolor rating scheme recommended by the institute. Once you've mapped out the details of the 24 components that will form the basis of your rating system, the institute recommends that you use the color scheme to answer blocks of questions for each sublevel in each category. Here's how the institute defines the color levels (a short scoring sketch follows the list):

  • red equals "0": the data is unclear; the status is unknown and not documented;
  • yellow equals "1": the data is partially clear; the status is partly known and partly documented;
  • green equals "2": the data is clear; the status is fully known and well documented.
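
Because each color maps to a number, turning a block of question ratings into a score for an element is simple arithmetic. Here's a minimal sketch in the same vein; the averaging rule is one reasonable approach for illustration, not the institute's prescribed formula:

    SCORES = {"red": 0, "yellow": 1, "green": 2}

    def element_score(ratings):
        """Average the 0/1/2 color values for one element's block of questions."""
        return sum(SCORES[color] for color in ratings) / len(ratings)

    # Example: one element rated across four questions.
    print(element_score(["green", "yellow", "green", "red"]))  # 1.25 out of 2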

Let the Measuring Begin

Once you've figured out the basis for your scorecard, you're ready to start rating individual components of your EA. At this juncture, an agency needs to make sure that everyone gathering information and rating performance is using the same criteria and applying them consistently.

You might think this is all an exercise in futility, that you already know what's going on in your infrastructure and how well your EA reflects reality. But the value of a thorough review and analysis is that it lets you look at individual components relative to one another and weigh your initial goals against actual outcomes.

According to the Institute for Enterprise Architecture Developments, "Experiences within organizations show that enterprise architecture projects that will be reviewed are better planned, better managed and better documented."

Another thing to consider is that while you might have really good instincts, metrics let you turn those instincts into facts. And here's the real reason to do that: These facts will help you justify future demands for funding. (And the OMB chiefs who hold the purse strings have been pretty clear in saying that without facts, there won't be new funds.) The facts can let you adjust your EA and make sure that your mission, your investments, your systems and your results are in alignment.

Obviously, if your scorecard is dotted with a lot of red and yellow ratings, you'll need to go back and drill down into your EA to determine where changes are needed and how deep those changes must go within your organization. If you've done your homework and built your EA using FEA reference models, you'll be able to peel away the layers of the architecture and spot the sources of your problems.
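
If your ratings are recorded per element, flagging the cells that need drilling into is a short step. Here's a minimal sketch that reuses the illustrative grid above; again, the structure is hypothetical, not drawn from the FEA reference models:

    # ratings maps each (category, level) cell to "red", "yellow" or "green".
    def needs_drilldown(ratings):
        """List the elements whose status isn't fully known and documented."""
        return sorted(cell for cell, color in ratings.items() if color != "green")

    # Example: a partially rated template with two trouble spots.
    ratings = {("business", "contextual"): "green",
               ("information", "logical"): "yellow",
               ("technology_infrastructure", "physical"): "red"}
    print(needs_drilldown(ratings))  # the yellow and red cells surface first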

Whether you use a color-coded scorecard such as the institute's or develop your own approach, the key is to keep revisiting your metrics and adapting your EA accordingly. After all, an EA is pointless if it just captures a snapshot of your agency's infrastructure at a given moment in time. The same can be said of any metrics you gather only once or infrequently about that EA.
