Many government agencies spend considerable time, resources and money on usability testing — and with good reason. Usability testing is a critical qualitative step for identifying how to make government Web sites more user-friendly and effective.
But usability testing in and of itself doesn’t deliver maximum value. Why? Because its strength is its weakness: The narrow lab environment that lets Web managers solve usability problems doesn’t adequately represent the diverse population of citizens who use a Web site.
For a Web site to be citizen-centric (the goal of every government site), usability testing should be guided by knowledge of what is most important to the key site user segments and what will have the greatest impact on their future behavior. Usability testing is most effective when it is shaped by voice-of-customer input that’s quantitative as well as qualitative.
Measuring customer satisfaction through online surveys is a convenient and relatively inexpensive way to capture citizen feedback.
But you have to be careful to engineer in the accuracy, precision and consistency that will bring integrity to your satisfaction survey results. One of the most reliable, accurate and credible ways to obtain the online voice of customers is with the methodology of the American Customer Satisfaction Index (ACSI), which is used by more than 90 government Web sites and hundreds of private-sector sites to measure customer satisfaction. (My company, ForeSee Results, partners with the Treasury Department’s Federal Consulting Group to run ACSI surveys for federal sites.)
Usability testing is a valuable tool on its own, but combining it with customer satisfaction analytics creates a focused, highly effective toolset for identifying and prioritizing site enhancements that improve citizen satisfaction and loyalty.
Following are six strategic steps for using customer satisfaction analytics to direct usability testing:
Capture Representative Feedback
You can’t possibly fit hundreds or thousands of people into a usability lab. If you begin the process with customer satisfaction analytics, you don’t need to. Because a usability lab can accommodate only a limited number of people, it’s important to make sure the participants you do recruit are representative of the key target audiences for your site. To recruit the right participants, you’ll need to know who is visiting your site. Sometimes it’s hard to tell. You may be attracting a large audience of loyal regulars or a significant number of first-time visitors. You may serve different types of people, as defined by job, role, reason for coming to the site or other criteria.
Unless you have the right people in your usability lab, the results won’t be accurate. To get the right people, you must first determine who is coming to your site and then select a test group that mirrors your actual site visitors.
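One common way to build such a test group is proportional stratified recruiting: allocate lab seats to each visitor segment in proportion to its share of site traffic. The sketch below illustrates the idea; the segment labels and traffic counts are hypothetical, not drawn from any particular site.

```python
from collections import Counter

def recruit_panel(visitor_segments, panel_size):
    """Pick a usability-test panel whose segment mix mirrors the
    observed mix of site visitors (proportional allocation)."""
    total = len(visitor_segments)
    counts = Counter(visitor_segments)
    panel = []
    for segment, n in counts.items():
        # Each segment gets lab seats in proportion to its share of traffic.
        panel.extend([segment] * round(panel_size * n / total))
    return panel

# Hypothetical traffic mix, e.g. tallied from survey or log data:
visitors = ["citizen"] * 600 + ["business"] * 300 + ["researcher"] * 100
print(recruit_panel(visitors, panel_size=10))
# -> 6 citizen seats, 3 business, 1 researcher
```

Rounding can make the seat total drift slightly from the target for many small segments; in practice you would also cap or floor very small strata.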
Usability testing provides a snapshot — a portrait in time of a select group of visitors to a site. Measuring visitors on your own site, on the other hand, enables you to continuously measure online customer satisfaction with a population that accurately represents your actual site users.
In this way, you can pinpoint external or internal factors that may affect satisfaction or loyalty. A few of these factors, which vary by type of site, include legislation and the time of year. (Some sites get more students during the school year, and they may rate satisfaction differently than adults.)
Make Sure You Ask Questions in the Right Way
There are many customer analytics tools available, and you might think that any of them would work just fine as a precursor to usability testing. But to zero in on the areas that need improvement, you must ask questions that reveal which factors actually drive visitor behavior.
One key mistake is asking questions that rely on “self-rated importance.” Visitors may report that something is very important to them (e.g., airline safety) but that factor may have relatively little influence on their behavior (e.g., booking with airlines based on price and flight times versus safety record).
A cause-and-effect satisfaction methodology avoids this trap: measured correctly, it derives importance statistically, tying the questions being asked to the behaviors that matter.
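The difference between stated and derived importance can be shown with a toy calculation. Instead of asking visitors what matters, derived importance correlates each site element’s ratings with a behavioral outcome. The element names and ratings below are invented for illustration and are not the ACSI’s actual model.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Hypothetical per-respondent ratings of two site elements, plus a
# behavioral outcome (stated likelihood to return, 1-10 scale).
search_quality = [3, 5, 4, 7, 8, 6, 9, 2]
visual_design  = [8, 7, 9, 8, 7, 9, 8, 7]
will_return    = [2, 5, 4, 7, 8, 6, 9, 3]

# Derived importance: how strongly each element tracks the behavior.
for name, scores in [("search quality", search_quality),
                     ("visual design", visual_design)]:
    print(f"{name}: r = {pearson(scores, will_return):.2f}")
```

In this made-up data, visitors might well have rated visual design as "very important," yet its near-zero correlation with return intent shows search quality is what actually drives the behavior.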
Precisely Prioritize and Fine-Tune Improvements
You probably know you have some problems with your site. Otherwise, you wouldn’t be thinking about usability. But, where do you begin?
It’s hard to know unless you can prioritize based on what’s most important to your users. That’s where a cause-and-effect metric comes into play. It prioritizes potential areas of Web site improvement based on their impact on satisfaction, which, in turn, influences desired future behaviors. Beginning with this information, you can focus usability testing to delve into specific changes that should be made in high-priority areas.
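One simple way to operationalize that prioritization is to rank each site element by its modeled impact on satisfaction multiplied by its remaining room for improvement. The driver names, scores, and impact weights below are hypothetical placeholders, not output from any real cause-and-effect model.

```python
# Hypothetical driver data: each element's current score (0-100 scale)
# and its modeled impact on overall satisfaction (lift per point improved).
drivers = {
    "site search": {"score": 62, "impact": 2.1},
    "navigation":  {"score": 70, "impact": 1.4},
    "look & feel": {"score": 81, "impact": 0.3},
    "content":     {"score": 74, "impact": 1.8},
}

def priorities(drivers, target=100):
    # Priority = impact weight x remaining room for improvement,
    # so a high-impact, low-scoring element rises to the top.
    ranked = sorted(
        drivers.items(),
        key=lambda kv: kv[1]["impact"] * (target - kv[1]["score"]),
        reverse=True)
    return [name for name, _ in ranked]

print(priorities(drivers))
# -> ['site search', 'content', 'navigation', 'look & feel']
```

A ranking like this tells the usability team where to point the lab sessions first: in this made-up example, at site search, even though look and feel might generate more opinions.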
Use a Credible Methodology that Provides Relevant Benchmarking Capabilities
One way of knowing how well your site is performing is to compare it against others, and usability testing by itself offers no basis for comparison. The government uses the ACSI to measure more than 90 federal sites on a quarterly basis; the same methodology is used for private-sector e-commerce and e-business sites, providing comparative metrics. Besides aggregate benchmark scores, you can also compare your site against a group of government sites like yours.
Gather Baseline Customer Feedback
The ultimate goal of usability testing is to determine how to enhance a site so that it better serves citizens. To know that you’ve reached that goal, it’s critical to first have a baseline measure of customer satisfaction. Using a methodology to measure customer satisfaction before usability testing provides a reliable comparison for a post-redesign measurement.
Test Usability-Driven Improvements with Customer Satisfaction Analytics
The only way to know that you’ve met your goal of effective site enhancements in the eyes of visitors is to ask them. The best way to get voice-of-customer feedback following a redesign is to survey customers post-launch and compare the results with pre-launch measurements.
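The pre/post comparison itself can be sketched simply: compute the change in mean satisfaction and a rough uncertainty band so a small lift isn’t mistaken for a real one. The scores below are fabricated, and the two-standard-error band is a back-of-the-envelope check, not the ACSI’s actual statistical model.

```python
from statistics import mean, stdev

def lift(pre, post):
    """Mean satisfaction change from pre- to post-launch surveys,
    with a rough two-standard-error band on the difference."""
    diff = mean(post) - mean(pre)
    se = (stdev(pre) ** 2 / len(pre) + stdev(post) ** 2 / len(post)) ** 0.5
    return diff, 2 * se

# Hypothetical 0-100 satisfaction scores before and after a redesign.
pre  = [68, 72, 65, 70, 71, 66, 69, 73]
post = [75, 78, 74, 80, 76, 77, 79, 73]

change, band = lift(pre, post)
print(f"satisfaction lift: {change:+.1f} (+/- {band:.1f})")
```

If the lift exceeds the band, as it does in this made-up sample, the redesign plausibly moved satisfaction; if not, gather more responses before declaring victory.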
Customer satisfaction measurement that asks how well the site meets visitors’ needs and expectations provides a good basis for comparison. It also yields common customer-experience metrics you can use to track trends in future behaviors and ensure you are meeting your business objectives.