One of the joys of measuring usability on websites is being able to continually improve the user experience. During usability testing, I often observe the effect of improved interfaces first hand.
I also commonly observe a shift in perception. Before usability testing, many users believe they are at fault, that they are the ones making mistakes. After testing, they realize they were not making mistakes at all: a poor or substandard interface is what compromised their experience.
- Effectiveness: Can users achieve their desired goals on your website?
- Efficiency: How quickly and easily can users achieve their goals on your website?
- Learnability: Is the website easy to learn the first time users encounter it?
- Memorability: When users return to the website after a period of not using it, how easily can they reestablish proficiency?
For today’s article, I will go over the final two usability items: Error Handling and User Satisfaction.
Error Handling, Prevention & Recovery
When I analyze a website, I inevitably spend considerable time on error prevention and defensive design. No website is perfect for all users because users have different mental models and contexts. Nevertheless, through heuristic evaluation and performance tests, website owners can improve the user experience on their web pages.
- What errors do users typically make?
- How many errors do they make?
- How severe are the errors?
- Are any of the errors deal breakers?
- How easily can users recover from the errors?
Some errors are deal breakers, and they absolutely must be addressed and fixed. Others are infrequent and minor, errors from which site visitors can easily recover, such as a simple typo.
If I am unable to conduct usability tests on a website, I make deliberate errors on the site to see how the interface has (or has not) implemented defensive design.
For example, if a site has a search engine, I will deliberately misspell a word to see if the search engine has a spelling correction algorithm. Here is an example of search results for a typo in the site search engine of the U.S. Centers for Disease Control and Prevention (CDC):
Here is an example of spelling correction on a web search engine, Google:
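To make the idea concrete, here is a minimal sketch of the kind of "did you mean" logic a site search engine might apply, using Python's standard difflib. The vocabulary list is hypothetical; a real search engine would draw candidate terms from its index and use far more sophisticated matching.

```python
import difflib

# Hypothetical vocabulary drawn from the site's search index.
VOCABULARY = ["influenza", "vaccination", "diabetes", "nutrition", "smoking"]

def did_you_mean(query, vocabulary=VOCABULARY):
    """Return the closest indexed term for a misspelled query, or None."""
    matches = difflib.get_close_matches(query.lower(), vocabulary, n=1, cutoff=0.7)
    return matches[0] if matches else None

print(did_you_mean("influensa"))  # suggests "influenza"
```

Even this crude approach turns a dead-end "no results" page into a recoverable moment for the user, which is the whole point of defensive design.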
Ideally, if I were to do a usability evaluation on a site, I would have a minimum of three professional evaluators. Usability guru Jakob Nielsen found that a single evaluator identified approximately 35% of the usability problems in an interface, whereas five evaluators found between 55% and 90% of the problems.
I also make deliberate errors on forms (such as a Contact Us page) and shopping cart pages to see how the interface responds. Here are some before-and-after interfaces:
Another alternative is to simply allow users to click or press the Continue button if they do not wish to create an account at that time. Usability expert Jared Spool calls this option The $300 Million Button.
It is best if the site implements a careful, defensive design from the outset. In other words, it is best to prevent usability and findability problems from occurring in the first place.
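As a sketch of what that defensive design can look like behind a form, here is a hypothetical validator for a Contact Us page. The field names and wording are my own illustration; the key practice is returning specific, helpful messages rather than a generic "invalid input" error.

```python
import re

def validate_contact_form(fields):
    """Return a dict mapping field names to helpful messages; empty means valid."""
    errors = {}
    if not fields.get("name", "").strip():
        errors["name"] = "Please enter your name."
    email = fields.get("email", "").strip()
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors["email"] = ("That email address doesn't look right. "
                           "Check for typos, such as a missing '@' or domain.")
    return errors

# A deliberate error: the email is missing its domain suffix.
print(validate_contact_form({"name": "Ada", "email": "ada@example"}))
```

Catching the mistake inline, with a message that tells the user exactly what to fix, prevents the error from ever becoming a failed task.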
I find that the combination of a professional heuristic evaluation and subsequent usability testing gets the best results in terms of the quality and quantity of possible errors. It’s imperative that website owners understand the mental models of their target audience. Anything you can do to improve your interface for a positive UX is a step in the right direction.
The user satisfaction metric can be tricky to measure because people often confuse a usability evaluation with a focus group.
Usability tests tend to be task/performance oriented. Usability test results are not a bunch of focus-group opinions. Usability tests are often conducted one person at a time, so there is no herd mentality: no single user influences the behavior of another.
Like usability tests, focus groups can provide direct, rapid feedback to help identify website problems. However, focus groups are more about opinions than tasks. All too frequently, the more outspoken people in the focus group can influence others’ feedback.
User satisfaction is directly related to task completion. If users can complete their assigned tasks quickly and easily, they often report higher satisfaction, especially if there is an element of delight in the interaction. If users have a difficult time completing their tasks or cannot complete them at all, they often report low satisfaction.
Not only do we want the quantitative data (for example, ratings from 1 to 7, averaged into an overall satisfaction score), we also want to understand the contextual data. Why did users report satisfaction or dissatisfaction with a website? The hows of usability are just as important as the whys.
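To illustrate pairing the quantitative score with that contextual data, here is a small sketch using made-up test results. The ratings and comments are hypothetical; the point is that the average alone tells you how satisfied users were, while the comments tell you why.

```python
# Hypothetical test results: each participant's 1-7 rating paired with a comment.
responses = [
    (6, "Checkout was quick, but the coupon field was hard to find."),
    (7, "Loved the search suggestions."),
    (4, "I couldn't tell which form fields were required."),
]

scores = [score for score, _ in responses]
average = sum(scores) / len(scores)
print(f"Average satisfaction: {average:.1f} out of 7")
for score, comment in responses:
    print(f"  {score}/7: {comment}")
```

A respectable average can still hide a specific, fixable complaint, which is exactly why the contextual data matters.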
Effectiveness, Efficiency, Learnability, Memorability, Error Prevention, and User Satisfaction. These are the items usability professionals measure to improve the user experience. How does your website measure up?
(Stock images via Shutterstock.com. Used under license.)
Opinions expressed in the article are those of the guest author and not necessarily Marketing Land.