Beyond Tech Measures

Knowing how fast a transaction occurs is irrelevant if the system fails to meet the program’s objectives. Solid metrics must incorporate the human factor.

You’ve done all the right things. You have invested in performance monitoring to track system stability and reliability and to flag failures. The information technology team has achieved 98 percent system availability for six consecutive months. Fail-safe backup and restoration programs are in place. System performance has improved; it now takes less than two seconds to retrieve data and display it to the user.

But your programs still seem to fall short. Expectations aren’t met and stakeholders are still not happy.

What’s going on? Federal IT organizations face the same pressures as those in the private sector. The complexity of the services that agencies manage has increased over the past decade; the number and type of devices accessing data and data services have increased exponentially and now extend well beyond the physical walls of our agencies.

Government IT managers face the same trap that plagues commercial tech chiefs when it comes to technology initiatives: They rely on metrics that no longer provide a comprehensive perspective of a project’s health or the requirements for a solution.

During a recent meeting at an agency, the IT team was incredulous as a midlevel IT manager explained to the project manager that his expectations were too high. The IT manager’s point of view was that the infrastructure was meeting specifications. Data was flowing efficiently across the network, and there were few, if any, system outages. The reliability metrics were all being met or surpassed. The IT manager was unsympathetic to the project manager’s plight.

But the project manager wanted to talk about the fact that the system’s users had their own set of metrics for defining its value. He also made the case that market conditions had changed and that the technology approach needed to be more flexible for the agency to succeed. The system’s users sensed a need but were unable to translate it into a technical specification that would have persuaded the IT manager to consider the change.

It’s as if they were speaking different languages.

IT and management teams need a new approach: a broader set of metrics that go beyond technical specifications and give all team members a better perspective on the real health of technology initiatives. Federal IT managers must reach beyond the project-level owners to understand users’ challenges and needs, and then support the project owner’s articulation of those needs from the users’ point of view. IT must become more willing to communicate and expand the range and level of inputs beyond the traditional technical scorecard.

Here are a few best practices for developing a framework to increase the communication among stakeholders (both internal and external):

  • Build credibility by showing that you understand the challenge. Collate data from resources beyond your immediate domain. Encourage your stakeholders to provide data that illustrates the challenge and gives more context for the immediacy of the need.
  • Once you have meaningful data, look for ways to reframe the challenge. Build consensus for tackling the problem.
  • Determine the value to the organization. Figure out the incentive for the participants.
  • Measure progress. Once you have a clear articulation of the goal, determine how to calculate the return on investment for the agency.
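One hedged way to make that last step concrete is a simple return-on-investment calculation. The sketch below is illustrative only: the staff counts, hours saved, labor rate and project cost are all assumed example figures, not agency data.

```python
# Hypothetical ROI sketch. Every figure below is an assumed example
# value, not real agency data.

def roi(annual_benefit: float, annual_cost: float) -> float:
    """Return on investment expressed as a fraction of cost."""
    return (annual_benefit - annual_cost) / annual_cost

# Assumed inputs: 200 staff each save 2 hours per week at a $60
# loaded hourly rate, over a 52-week year.
hours_saved_per_week = 200 * 2
annual_benefit = hours_saved_per_week * 60 * 52  # $1,248,000
annual_cost = 950_000                            # assumed project cost

print(f"ROI: {roi(annual_benefit, annual_cost):.0%}")  # → ROI: 31%
```

The point is not the arithmetic but the discipline: once the goal is clearly articulated, the benefit side of the equation can be expressed in staff time saved rather than system throughput.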

What data do you need? How do you capture it? Managers should compare system metrics with human performance metrics to determine priorities for addressing change, allocating budgets and informing management decisions. Most organizations already know which systems workers access most, but they rarely validate that knowledge against network traffic and system metrics. Ask yourself:

  • Which systems touch the staff, employees, content contributors or data entry personnel (data pipe transfer does not count; it’s a given)?
  • Which systems do the staff, employees, content contributors or data entry personnel use most frequently (again, data pipe transfer does not count)?
  • Which of these systems provide critical data and information to other agencies, businesses or citizens?

For each system, you will need to know:

  • the top 10 tasks the majority of the staff performs in a given day, week and month to do their work;
  • the tasks that are related to serving your organization’s internal and external users;
  • how long it takes the staff to find, access and initiate the use of available tools or platform services to begin their tasks;
  • how long it takes them to do each task successfully (a 50 percent success rate doesn’t count);
  • the additional information or systems they need to complete or execute each task;
  • the length of time it takes each system to provide data required to complete a task;
  • the length of time it takes and the ease of use to transfer or transform the data and information into a useable format.

If the answer to any of the time-sensitive questions is more than two minutes, you have a challenge. An important measure of usefulness is how effective a system is for its users. Federal IT managers need to know how employees perform when using a technology. One consideration is the number of staff errors, misjudgments or misinterpretations of what a system does or how to use it. If staff make more than 10 errors in any given task, there’s a problem (see chart and sidebar).
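Those two thresholds, two minutes per time-sensitive question and 10 errors per task, amount to a simple screening rule. A minimal sketch of that rule follows; the task names and measurements are invented for illustration.

```python
# Screen tasks against the article's two thresholds:
# more than two minutes to complete, or more than 10 user errors.
TIME_LIMIT_SECONDS = 120
ERROR_LIMIT = 10

# Hypothetical measurements per task: (seconds to complete, errors observed)
tasks = {
    "retrieve case file": (45, 2),
    "export weekly report": (260, 4),   # breaches the time threshold
    "enter intake form": (90, 14),      # breaches the error threshold
}

def flag(tasks: dict) -> list:
    """Return the names of tasks that breach either threshold."""
    problems = []
    for name, (seconds, errors) in tasks.items():
        if seconds > TIME_LIMIT_SECONDS or errors > ERROR_LIMIT:
            problems.append(name)
    return problems

print(flag(tasks))  # → ['export weekly report', 'enter intake form']
```

A real screening pass would draw the time and error figures from usage logs or observation studies rather than hard-coded values, but the decision rule stays the same.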


Adding human performance metrics to those used in deciding where to invest, how much to invest and when can sharpen IT managers’ understanding of their agencies’ needs.

Metrics that track usability and rate the quality of communication, such as a ranking of the readability of written instructions for using a system, can add hours of productivity to an employee’s work week.

Human performance metrics can also help identify unforeseen pain points, educate your IT team about the broader implications of system management and improve the visibility and role such metrics have within the agency.

Moreover, this broader range of measures will improve your IT department’s communication of challenges to the rest of the organization and support consensus building among offices.

Agencies all have existing programs for monitoring systems, and there is already plenty of data. What IT teams need is to extend the range and type of data they use to inform management decisions. It is also the IT team’s job to involve, and even empower, others to provide data inputs in the decision-making process. IT must also confirm inputs against independent sources when that will help the agency arrive at better decisions.