Everything you need to know for successful email A/B testing

Just getting started with A/B testing, or looking to fine-tune your strategy? Columnist Scott Heimes walks you through the essentials to help you get the most out of your email marketing program.

A/B testing is a familiar tactic to most marketers, and for good reason. Testing every aspect of your email, from the subject line to the body copy to the visuals, will go a long way toward helping you get the best fit for your subscribers.

These days, using data to make business decisions is not simply a useful asset — rather, it is essential if you want to keep up with competitors. Email marketing is no different.

Gut instinct will only get you so far; monitoring click-through rates and using the results to refine your approach is what produces genuine improvement.

However, implementing a testing program can be confusing if you’re just getting started. Below, I try to clarify some of the common unknowns that surround email A/B testing.

What should my sample size be?

Sample size is the first thing that you must consider when A/B testing. If you’re doing an A/B test on your website or on a landing page, you can sit and wait until enough people visit the site for the results to be statistically significant.

Email marketing is a bit different, as you are tied to your list size, and this will differ from one case to the next. So before you begin each test, remember to outline your goals and the specific metrics that you want to achieve.

One top tip is to send your test message to as small a portion of your list as possible, while still making sure it's a statistically significant number. There's no generic rule of thumb for determining what that number is, but there are multiple sample size calculators online that can help.

Three sites that I would suggest are Evanmiller.org, Cardinal Path and Optimizely.

Once the test has run on your sample groups, you can send the winning email to the rest of your recipient list.

The main thing to remember is that when it comes to sample size, you have to work with what you’ve got. So if your list is small to start with, you’ll need to send to a higher percentage of that list.

However, I would always recommend using one of the calculators mentioned above.
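For context, here's what those calculators are computing under the hood: a minimal Python sketch of the standard two-proportion sample size formula. The baseline click-through rate and target lift below are hypothetical placeholders; plug in your own numbers, or cross-check the output against one of the calculators above.

```python
# Rough per-variant sample size for a two-proportion A/B test.
# The rates used in the example call are hypothetical.
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate recipients needed per variant to detect a move from p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * pooled_var / (p1 - p2) ** 2
    return int(n) + 1                   # round up to be safe

# Example: 3% baseline click rate, hoping to detect a lift to 4%
print(sample_size_per_variant(0.03, 0.04))  # roughly 5,300 per variant
```

Note how quickly the required size grows as the lift you want to detect shrinks, which is exactly why a small list forces you to test on a larger percentage of it.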

What if my audience changes?

When you’re testing the variables in your emails, it’s important to remember that no business stays the same over the course of a few years; as a result, your audience is bound to change.

As your audience changes, you need to think about what resonates with customers and how to adapt accordingly. In our experience, SendGrid’s ideal customer profile has changed over the last few years, and as a result, the design, CTA and content that performed well in testing a few years ago are different from the variations that would win today.

Should I test multiple times, or is twice enough?

There is certainly no harm in testing more than twice, and in some cases it makes sense to test more than two variations at once.

As I've mentioned above, make sure each group's sample size remains statistically significant rather than dividing your list into more groups than it can support; otherwise, your results won't necessarily be ones you can trust.
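To make "results you can trust" concrete, here's a minimal sketch of a two-proportion z-test you could run on any pair of variants before declaring a winner. The click counts are hypothetical.

```python
# Quick check of whether variant B's click rate beats variant A's
# by a statistically significant margin. Counts are hypothetical.
from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided z-test for a difference between two click rates."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    p_pool = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    return 2 * norm.sf(abs(z))  # two-sided p-value

# Example: 150 clicks from 5,000 sends vs. 190 clicks from 5,000 sends
p = two_proportion_p_value(150, 5000, 190, 5000)
print(f"p-value: {p:.3f}")  # below 0.05 suggests a real difference
```

If the p-value comes back above your threshold, the honest conclusion is "no clear winner yet," not "pick the higher number."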

I need to compile all of these findings… What’s the preferred method?

I'd create a spreadsheet from your email service provider's data exports that outlines all the tests you are running and clearly highlights each winning variation. When sharing results more widely, include summaries of engagement and conversion results as well.
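As a rough illustration, here's one way to assemble that running log programmatically from exported data; the test names and figures below are hypothetical placeholders.

```python
# A simple running log of A/B tests built from ESP exports.
# Test names and counts are illustrative only.
import pandas as pd

results = pd.DataFrame([
    {"test": "Subject line: question vs. statement", "variant": "A",
     "sent": 5000, "opens": 1100, "clicks": 150, "winner": False},
    {"test": "Subject line: question vs. statement", "variant": "B",
     "sent": 5000, "opens": 1350, "clicks": 190, "winner": True},
])
results["open_rate"] = results["opens"] / results["sent"]
results["click_rate"] = results["clicks"] / results["sent"]

# Export for sharing alongside engagement and conversion summaries
results.to_csv("ab_test_log.csv", index=False)
print(results[["test", "variant", "open_rate", "click_rate", "winner"]])
```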

A/B testing is a great way for email marketers to optimize their email sending strategy, but it isn’t a quick-fix solution. Tests need to be carried out on a continual basis, as your customers’ habits are always changing, whether quickly or gradually.

Testing consistently is key if you want to stay on top of what resonates with your customer base.

On the surface, A/B testing seems simple: Display two different versions of whatever you’re testing and compare their performance. Essentially, that’s all it is, but to run a proper A/B test, you need to start with a clear idea of what success looks like — for example, what steps you want the recipient to take when your email enters their inbox — and make it obvious what steps are required to get there.

Once a clear objective is in place for your campaign, and you have a method of obtaining statistically significant data, A/B testing will help you get there.


Opinions expressed in this article are those of the guest author and not necessarily MarTech.


About the author

Scott Heimes
Contributor
Scott Heimes serves as Chief Marketing Officer at SendGrid, where he is responsible for the company's brand strategy, driving demand for its solutions and leading global marketing operations. Scott oversees corporate marketing, demand generation, corporate communications, partnerships and alliances, international expansion and SendGrid’s community development team.