Opinions expressed in this article are those of the sponsor. MarTech neither confirms nor disputes any of the conclusions presented below.

5 A/B testing resolutions you should embrace in 2017

A year’s beginning is the most ambitious of times for teams and businesses — lots of reflection, planning, strategizing, revamping approaches, right? To get you off to a good start, we have listed five A/B testing resolutions you should take on in 2017.

A/B testing is a relatively young practice, and it only takes a Simple A/B Testing Guide and intuitive A/B testing software to set up the basic process. But it takes a little more deliberate obsession and expertise to squeeze out meaningful results and make decisions that can translate website experiments into business revenue.

  1. Stop running A/A tests

In an A/A test, the same page is tested against itself. A/A tests are run to check whether an experiment is set up right and working. The conclusion is that the experiment is running fine only if the two identical variations, “A” and “A,” get a roughly equal response from visitors.
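
For illustration, here is roughly what that check amounts to in statistical terms: a two-proportion comparison between two identical variants, where you expect no significant difference. The sketch below stands in for whatever your testing tool does under the hood, and the variant names and conversion counts are hypothetical.

```python
# Minimal sketch of the comparison an A/A test boils down to:
# a two-proportion z-test between two identical variants.
# The conversion counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a1, n_a1, conv_a2, n_a2):
    """Return the two-sided p-value for the difference in conversion rates."""
    p1, p2 = conv_a1 / n_a1, conv_a2 / n_a2
    pooled = (conv_a1 + conv_a2) / (n_a1 + n_a2)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a1 + 1 / n_a2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# "A" vs. "A": both arms serve the same page, so a healthy setup
# should show no significant difference.
p_value = two_proportion_z_test(conv_a1=120, n_a1=2400, conv_a2=131, n_a2=2450)
print(f"p-value: {p_value:.3f}")  # well above 0.05 here, as expected
```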

A/A tests are a waste of precious time because they require the same amount of traffic, time and resources that an A/B test does. Take this example: let’s say the average purchase cycle on a website is four weeks and the experiment period is two or three weeks. Spending that already-tight window testing a page against itself means you learn nothing new, and when you finally do run a real A/B test, you might still lose visitors to the variation that’s not at its best.

There are plenty of other effective things one can do to make sure the experiment is running fine — cross-browser testing, device testing, user testing, cross-checking with another source of analytics and, of course, obsessively monitoring the experiment to spot glitches.

But if you still insist on an A/A test, you could try an A/B/A test instead. That is, you run an A/A and an A/B at the same time, in the same experiment. This will require a little more time and traffic than the usual A/A test because it involves three variations, so the traffic routed to each will be smaller.
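
As a rough sketch of how an A/B/A split could be set up, here is one way to assign visitors to the three buckets. The hashing scheme, bucket names and experiment key are illustrative assumptions, not any particular tool’s API.

```python
# Illustrative A/B/A assignment: three buckets in one experiment, where
# comparing "A1" against "A2" acts as the built-in sanity check.
# The hashing scheme, bucket names and experiment key are assumptions.
import hashlib

BUCKETS = ["A1", "B", "A2"]  # two identical control arms plus one variation

def assign_bucket(visitor_id: str, experiment: str = "homepage-hero") -> str:
    """Deterministically place a visitor into one of three equal buckets."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return BUCKETS[int(digest, 16) % len(BUCKETS)]

print(assign_bucket("visitor-42"))  # the same visitor always gets the same bucket
```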

But once you’ve begun, you will probably agree that sound QA, cross-referencing of analytics and obsessive monitoring (all together) can prove more efficient than an A/A test.

  2. Don’t scrap your inconclusive tests

As exciting as it is to run tests and pick winners, a vast number of test results are, unfortunately, inconclusive. A lot of time, effort and resources go into each experiment — so it’s simply not wise to scrap an inconclusive test.

Here are a few things you could try differently this year if your A/B experiments are inconclusive:

  1. Break the data down: Take a closer look at the devices, traffic sources, demography of visitors or whatever else is relevant to your business (see the sketch after this list). The details in the data will give you insights into which variations clicked with whom, when and where. For example, it’s possible that one of your variants did well with first-timers, while the other worked with returning visitors.
  2. Iterate your hypothesis: Your hypothesis is the foundation on which your experiments are built. Sometimes your tests are inconclusive because your hypothesis does not take a real stand. You can try to iteratively improve it before you move on to your next one.
  3. Pay attention to micro-conversions: Micro-conversions are the small actions your visitors take on their journey to conversion. For example, your primary goal (or conversion) can be a purchase, whereas wish-listing, adding to cart and other actions can be your micro-conversions. When a test is inconclusive, you can go with the variation that favors micro-conversions and then improve it.
  4. Continue optimizing for discovery and learning: Experimentation is not just about tests and winners; it’s about learning about and getting to know your visitors. If you look beyond results, you’ll encounter an ocean of discoveries that will eventually help you catalyze conversions.
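
As a rough illustration of the first point above, here is how results might be broken down by device and visitor type. The file name and column names (variation, device, visitor_type, converted) are assumptions about your analytics export, not a specific tool’s format.

```python
# Hypothetical sketch of breaking an inconclusive test down by segment.
# The file name and column names are assumptions about your analytics export.
import pandas as pd

results = pd.read_csv("experiment_results.csv")  # one row per visitor

# Conversion rate per variation, split by device and visitor type
segmented = (
    results
    .groupby(["variation", "device", "visitor_type"])["converted"]
    .agg(visitors="count", conversions="sum")
    .assign(conversion_rate=lambda d: d["conversions"] / d["visitors"])
)
print(segmented)
# A variation that looks flat overall may still win clearly with, say,
# first-time mobile visitors -- a finding worth acting on.
```
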
  3. Repeat A/B tests

Your experiment does not end when it reaches statistical significance (assuming you had the right sample size and an appropriate duration for the test). If you go by statistical significance alone, you might not see results translating into business revenue.
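
For a back-of-the-envelope sense of what “the right sample size” means, here is a sketch using the standard two-proportion sample-size formula. The baseline conversion rate and minimum detectable effect are hypothetical inputs, not recommendations.

```python
# Back-of-the-envelope sample size per variation for a two-sided test,
# using the standard two-proportion formula. The baseline rate and the
# minimum detectable effect below are hypothetical inputs.
from statistics import NormalDist

def sample_size_per_variation(baseline, mde, alpha=0.05, power=0.8):
    """Visitors needed per variation to detect a relative lift of `mde`."""
    p1 = baseline
    p2 = baseline * (1 + mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2) + 1

# e.g., a 3% baseline conversion rate and a 10% relative lift to detect
print(sample_size_per_variation(baseline=0.03, mde=0.10))  # tens of thousands per arm
```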

A test should run for at least one entire business cycle, which covers things like different days of the week, diverse traffic sources, your blog and newsletter schedule, and any other external events.

Even after having run the test for an entire business cycle, it’s only smart to repeat the test after an appropriate interval (maybe a few weeks or months, depending on your business cycle). It’s only pragmatic to assume there will be external influences on the test that change from one business cycle to another — like the marketing strategy, the target audience, the season of the year and so on.

  4. Your A/B tests need user feedback, too

A/B testing tells you which variations clicked with your customers and which didn’t. But an A/B test can only go so far. It cannot tell you what your visitors didn’t like, why they chose one variation over another, or whether your winning variation still has scope for improvement. The answers to these questions can only come from your visitors themselves.

Polls and feedback widgets on your website let you trigger action-based questions now and then to understand the reasons behind visitor actions.

Here’s an example of how it can work:

  • Use A/B tests to figure out which pages are faring well with visitors in terms of conversions.
  • Simultaneously run polls to understand the reason behind visitor actions on experiment pages.
  • When you determine a winning variant, roll out the changes to the elements that favored it.
  • After you’ve made the change, ask your visitors for feedback and encourage them to express their opinions on the changes you made.

When analytics and user feedback go hand-in-hand, the data and results from your experiments gain additional value.

  5. Run tests for mobile visitors exclusively

It’s easiest to run an A/B test on your entire traffic all at once, but that’s not going to give accurate results, for several reasons:

  1. Mobile visitors and desktop visitors have different attention spans, motives and ways of using the website. When a visitor accesses your site from a mobile device, there’s a good chance they are traveling, at a friend’s place or in a queue at a movie theater — meaning their attention span is relatively short. Their motive may also be to browse for information, unlike on a desktop, where a purchase is more probable. So it’s important you run exclusive experiments to understand what works for them.
  2. When you make changes to your desktop site using an A/B testing editor, there is a good chance it does not take your responsive design into account. Visitors from other devices should not be included in an experiment that’s not optimized for them.
  3. You might want to measure different goals for mobile visitors and desktop visitors. A lot of people browse for information on mobile but switch to a desktop to finish the purchase. So your conversion goal for mobile visitors might be signups. For desktop visitors, it could be a purchase.
  4. Look at this scenario: let’s say variation A worked well with mobile visitors and variation B worked equally well with desktop visitors. The results will cancel each other out if a single experiment is run on the entire traffic (the sketch after this list shows this with hypothetical numbers).
  5. A lot of conversion optimizers segment the results by device after the experiment. They try to see how each variation performed with mobile, tablet and desktop users by segmenting the traffic source of all the conversions. This is not an ideal practice. Let’s say your mobile-to-desktop split is 25:75 and your experiment announces a winning variation. You will have to check whether each variation had enough traffic from both mobile and desktop to point to a winner. Otherwise, the results might not apply to both segments, and you’ll have to keep running the experiment until there is more traffic.
  6. Instead, if you run two exclusive experiments, you will end up with sharper, more precise results. Besides, the setup and QA will also be easier and more thorough.
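
To make the cancellation in point 4 concrete, here are some hypothetical numbers: each device segment has a clear winner, yet the pooled results are a dead heat.

```python
# Hypothetical numbers showing how per-device wins wash out when mobile
# and desktop visitors are pooled into a single experiment.
segments = {
    # (visitors, conversions) per variation, per device -- illustrative only
    "mobile":  {"A": (5000, 300), "B": (5000, 240)},   # A wins on mobile
    "desktop": {"A": (5000, 240), "B": (5000, 300)},   # B wins on desktop
}

for device, arms in segments.items():
    for arm, (n, c) in arms.items():
        print(f"{device:<8} {arm}: {c / n:.1%}")

pooled = {arm: (sum(segments[d][arm][0] for d in segments),
                sum(segments[d][arm][1] for d in segments))
          for arm in ("A", "B")}
for arm, (n, c) in pooled.items():
    print(f"pooled   {arm}: {c / n:.1%}")  # both land at 5.4% -- the signal cancels out
```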

One last thing: Test every single day.

Unless you have a real reason for not running a test, you should be running one. When you have zero tests running, it means that you are losing that traffic to “not testing.” And you never know if you’ll have that traffic when you are ready to launch a test. In website experimentation, traffic is like oxygen: you cannot afford to exhaust it without using it.

A Chinese proverb reads, “The best time to plant a tree was 20 years ago. The second best time is now.” So, don’t regret the time and traffic you lost — start now and let A/B testing bring you plentiful conversions in 2017!


About the author

Sponsored Content: Zarget
Zarget is 'all-in-one' conversion rate optimization software. It offers a suite of analytics tools that help observe, track and drive visitor behaviour on websites to optimize conversions. Tools include A/B testing, Heatmaps, Funnel Analysis, Form Analytics, Polls and Feedback, and more. Visit us at https://zarget.com. You can find us on Facebook and Twitter.
