After discussing account strategy, structure, and so on, the prospect said, “Well, you could just triple my conversion rate and then we’ll be good to go.” Hot damn, it’s that simple!
Conversion rate optimization is undoubtedly the most influential form of optimization for a direct-response-driven website. The prospect was right – tripling his conversion rate would solve his problem.
But it’s a bit unrealistic to believe it could be done in one fell swoop. A/B testing an un-optimized, channel-specific conversion flow can and should lead to considerable improvements. Couple that with best-practice PPC optimizations, and a 3x improvement in CPA, while not typical, isn’t out of the realm of possibility.
But the purpose of this article is not to harp on the importance of A/B testing conversion flows and implementing best practices in PPC management. Rather, I want to highlight an important analysis to perform when reviewing conversion flow tests.
This past year, my team and I facilitated a conversion flow test for a client. After reviewing the competitive landscape, we decided to leverage some A/B testing software to mimic a competitor we felt was doing a good job. Our aim was to see if their conversion flow proved to be better than ours.
Streamlining User Experience
The premise of the test flow was to minimize the number of clicks prior to conversion. The results of the test appeared clear-cut:
The test flow, despite having a higher CPC, had a considerably higher conversion rate, leading to a lower CPA.
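The arithmetic behind that result can be sketched with made-up numbers (the actual figures from the test aren’t published here): CPA is simply cost per click divided by conversion rate, so a flow with a higher CPC can still win on CPA if its conversion rate rises enough.

```python
# Hypothetical numbers for illustration only -- not the actual test data.
def cpa(cpc, conversion_rate):
    """Cost per acquisition: cost per click divided by the click-to-sign-up rate."""
    return cpc / conversion_rate

# The test flow pays more per click but converts twice as often,
# so its cost per sign-up comes out lower.
control_cpa = cpa(cpc=2.00, conversion_rate=0.05)  # $40.00 per sign-up
test_cpa = cpa(cpc=2.50, conversion_rate=0.10)     # $25.00 per sign-up
```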
However, what makes this website unique is that a conversion is only a sign-up; a secondary event, measurable only on the back end, must occur before the company actually makes money. We call that secondary event a Completion.
Due to the latent nature of Completions, we had to sit tight for 30 days before we could fully understand the impact of the conversion flow test on Completions. This is what ended up happening:
Despite the test flow driving more actual Completions, the control flow outperformed the test due to a significantly higher completion rate. The big question is why?
After digging into the incremental signups driven by the test flow, it became clear that the streamlined flow was actually driving more spammy leads. Unqualified users were signing up just to see what the membership experience was about, with no intention of ever completing the process.
Moreover, due to the more generic nature of the streamlined flow, AdWords also punished us with a lower Quality Score, leading to increased CPCs. The combination of higher CPCs and less-qualified sign-ups led to an underperforming flow.
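To sketch why the control flow won overall, the earlier arithmetic can be extended one step with hypothetical numbers: the effective cost per Completion divides the cost per sign-up by the completion rate, so a flow that wins on sign-up CPA can still lose once completion rate is factored in.

```python
# Hypothetical numbers for illustration only -- not the actual test data.
def cost_per_completion(cpc, signup_rate, completion_rate):
    """Effective cost per Completion: cost per sign-up divided by the
    sign-up-to-Completion rate."""
    cost_per_signup = cpc / signup_rate
    return cost_per_signup / completion_rate

# The test flow has a lower cost per sign-up and even drives more
# Completions per click (0.12 * 0.30 = 0.036 vs. 0.05 * 0.60 = 0.030),
# yet its weak completion rate makes each Completion more expensive.
control = cost_per_completion(cpc=2.00, signup_rate=0.05, completion_rate=0.60)  # ~$66.67
test = cost_per_completion(cpc=2.50, signup_rate=0.12, completion_rate=0.30)     # ~$69.44
```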
The Value Of User Qualification
The takeaway from our testing process was that removing barriers to entry will absolutely lead to more conversions. However, it’s worth keeping some basic qualifying mechanisms to ensure converting users are qualified. A qualified lead can be converted naturally or through secondary efforts such as remarketing or email drip campaigns.
When seeking to dramatically improve conversion rates, it’s important to evolve the conversion experience gradually so you understand the exact impact of each change. Testing apples against oranges may lead to improved performance, but it also risks latent consequences that negate the perceived improvements, with little insight into cause and effect.
Opinions expressed in the article are those of the guest author and not necessarily Marketing Land.