5 Steps To Finding Hidden Website Optimization Gems

When you're trying to optimize your website, sometimes it's hard to sift through all the ideas for boosting your conversion rate. Columnist Brian Massey discusses how to pin down the best approach.


How do you decide which elements of your site to test? This question is at the heart of website optimization.

A better question is, “How do you determine what NOT to test?”

It’s relatively easy to come up with ideas that might increase your conversion rate. We typically come up with 50, 75, 100 or more ideas for each of our client sites. Filtering through this list is the hard part.

Here’s the approach we take at Conversion Sciences (my employer).

Step One: Look For Evidence

You should never test anything if you don’t have some evidence that it is a problem. These ideas are called hypotheses for a reason. A hypothesis is an educated guess, an informed fabrication, a data-based brain fart.

So you need to educate, inform and find data on your ideas, or they don’t qualify as hypotheses. They’re just happy thoughts.

The first benefit of looking for evidence is that you might be able to eliminate a hypothesis. You might find evidence that it’s NOT a problem.

Here’s an example hypothesis for the product page of an ecommerce site: “If we put an ‘Add to Cart’ button at the bottom of the page, more visitors will add an item to their cart.”

Sounds reasonable. Yet, if few people are scrolling down the page, this hypothesis won’t hold water.

We can look at attention data, or “heat map” data generated by click-tracking and scroll-tracking software such as CrazyEgg. This will tell us how far visitors are scrolling on the product pages of the site.

If they aren’t scrolling far, then we may save this hypothesis for another time.

When we’re identifying what to test, we give each hypothesis a rating from 1 to 5 for how much evidence there is.

A rating of “1” means there’s no evidence, that the hypothesis is just an idea. A rating of “5” tells us that there is overwhelming evidence that there is a problem this hypothesis could address.

I’ve written and talked about the sources of data that are available to help you with this.

Step Two: Rate The Traffic

We want to avoid optimizing the wrong parts of the site. Our hypothesis list should have ideas for site-wide improvement, as well as page-specific enhancements.

Changing the order of the site’s navigation, for example, is a site-wide change. Adding trust symbols to the checkout page is page-specific. If we were to rate the value of the traffic on a scale of 1 to 5 again, what would we give these two scenarios? They both might get a 5.

A site-wide change, such as adjusting the navigation, has an impact on 100% of the visitors. That’s a 5 in my book. By that logic, a page that is seen by only 20% of visitors or fewer should get a 1.

Visitors to the checkout page often account for a small percentage of viewers. Why give them a 5? Because what this traffic lacks in volume it makes up for in opportunity.

Visitors who are checking out have demonstrated significant buying intent. These visitors are very valuable to us.

Other pages may not get much attention. The “About Us” and “FAQ” pages may not be so interesting to us. They might get a 2 or 3.

Favor hypotheses that have an impact on the most, or most interesting, visitors.

Step Three: How Hard Is It To Test?

For each of our hypotheses, we want to understand what the level of effort might be. It’s easy to change the text of a guarantee or offer. It’s much more difficult to add live chat to a site.

If we use our 1-to-5 scale again, we might give the change in the copy a 1 or a 2. Adding live chat requires hiring a live chat vendor, doing integration and staffing for our chatty visitors. This is a 5 in my book.

You don’t want to favor simple tests for simplicity’s sake. Don’t rush off and test button color just because it’s a 1 on your level-of-effort scale.

Likewise, hold off on swinging for the fences until the low-hanging fruit has been found. Leave your 5s for another time.

Step Four: What Does Experience Tell You?

Finally, gauge the impact you think this hypothesis will have. This is based on your knowledge of your prospects. It is based on what you’ve learned from previous tests you’ve done.

It is based on your experience as an online marketing team. It is based on research you’ve done, such as reading this column.

How about a scale from 1 to 5 again? If you rate a hypothesis as a 1, you’re saying that this is an arbitrary idea. If it has a big impact, that will be a surprise.

If you rate your hypothesis as a 5, you’re saying you believe this change will have a significant impact on the visitors and the site. You’re expecting a big win.

Our intuition can often lead us astray. You will find yourself rating hypotheses higher on this impact scale, not because of your experience, but because you want to try them. Or you might favor one because you like the idea.

These kinds of sentiments don’t belong in a scientific environment like the one we create. However, we cannot ignore the intuition of experienced business people.

This is only one of the four factors we weigh, the others being proof, traffic value and level of effort. A high impact score may tip a hypothesis into the top 10, but only if it has good ratings in other categories.

Once a hypothesis has been proven or disproved, there is no more role for intuition. When the data is there, we favor the data. However, when deciding what to test, we like to mix in a little gut.

Step Five: Bucket The Winners

Once we have ratings for each of the four areas, we can score each hypothesis. We simply add together the values for Proof, Impact and Visits/Buyer Affected, then subtract the Level of Effort (LOE). Here’s what part of a hypothesis list may look like:

The top 10 hypotheses reveal an interesting pattern when you bucket them.
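
If you prefer to see the arithmetic spelled out, here’s a minimal sketch of that scoring step in Python. The hypothesis names and ratings are invented for illustration; in practice, this lives in a spreadsheet like the one mentioned at the end of this column.

```python
# A minimal sketch of the scoring arithmetic described above.
# Score = Proof + Impact + Traffic value (Visits/Buyer Affected) - Level of Effort (LOE).
# Hypothesis names and ratings are made up for illustration.

hypotheses = [
    {"name": "Add an 'Add to Cart' button at the bottom of the product page",
     "proof": 4, "impact": 3, "traffic": 5, "loe": 2},
    {"name": "Add live chat to the site",
     "proof": 2, "impact": 4, "traffic": 5, "loe": 5},
    {"name": "Reword the guarantee copy",
     "proof": 3, "impact": 3, "traffic": 3, "loe": 1},
]

for h in hypotheses:
    h["score"] = h["proof"] + h["impact"] + h["traffic"] - h["loe"]

# Highest score first; the top 10 become the working test list.
for h in sorted(hypotheses, key=lambda h: h["score"], reverse=True)[:10]:
    print(f"{h['score']:>2}  {h['name']}")
```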

We take one more step and put each of our top hypotheses into one of five buckets:

  1. User Experience: For hypotheses that would alter the layout, design or other user interface and user experience issues.
  2. Credibility and Authority: For hypotheses that address trust and credibility issues of the business and the site.
  3. Social Proof: For hypotheses that build trust by showing others’ experiences.
  4. Value Proposition: For hypotheses that address the overall messaging and value proposition. Quality, availability, pricing, shipping, business experience, etc.
  5. Risk Reversal: For hypotheses involving warranties, guarantees and other assurances of safety.
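
Once the top 10 are bucketed, a quick tally shows which bucket dominates. Here’s a small sketch that continues the made-up example above; the bucket assignments are illustrative and mirror the pattern described in the next paragraph.

```python
from collections import Counter

# Buckets assigned to the top 10 hypotheses (made-up assignments that
# mirror the pattern discussed below: User Experience dominates).
top_10_buckets = [
    "User Experience", "User Experience", "Credibility and Authority",
    "User Experience", "Social Proof", "User Experience",
    "Value Proposition", "User Experience", "User Experience",
    "Risk Reversal",
]

# Count how many of the top 10 fall into each bucket.
for bucket, count in Counter(top_10_buckets).most_common():
    print(f"{count}  {bucket}")
```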

It’s important to have these buckets, because when we look at the top ten hypotheses shown in the figure, we see that six out of the ten are “User Experience” issues. This gives us a hint about the overall challenge with the site. It’s not well-designed for conversion.

We may spend our initial efforts finding out what kind of user experience these visitors want, since our analysis says that the site doesn’t seem to be giving them what they want.

This is a simplified version of our process. If you’d like a copy of the “ROI Prioritized Hypothesis List” spreadsheet we use daily, send me an email at [email protected].


About the author

Brian Massey
Contributor
Brian Massey is the Conversion Scientist at Conversion Sciences and author of Your Customer Creation Equation: Unexpected Website Formulas of The Conversion Scientist. Conversion Sciences specializes in A/B testing of websites. Follow Brian on Twitter at @bmassey.
