A lot has been written about data collection for landing page testing. Collecting enough data to statistically prove a winner and avoiding seasonal fluctuations in traffic are two of the most commonly emphasized principles. But there’s another consideration that is often overlooked and can have significant influence over how your results are interpreted: delayed conversions, or “stragglers.”
For many businesses, it’s not unusual for the conversion action to take place hours or days after first hitting the landing page. The important implication for testing is that you have to understand the behavior of your stragglers to ensure you are comparing apples to apples.
Power Options is a client of my company that sells subscription-based access to an advanced stock option research service. The company’s model is to offer an unrestricted 14-day trial without requiring any credit card information. So it engaged my company to develop a landing page that would convert more visitors into free trial subscribers.
The original free trial sign-up process involved landing on a very information-rich page, which included a lot of details about stock option trading, Power Options, and related subjects.
A couple of the graphics on the landing page linked to the follow-on page, which is where visitors could register for the free trial. This page also had a lot of text as well as thumbnail screenshots of some sample software reports.
Power Options had a low data rate on the landing page we were testing, so a “coarse granularity” approach and an A/B split test were used. In other words, all ideas were combined into a single best-practices redesign instead of tinkering with a specific landing page element.
In the proposed redesign, the two-page process became a single page. Among the most striking changes was a radical simplification of the page. The focus shifted to a simple description of the free trial offer, eliminating all supporting descriptions of stock option trading and the software.
There was a lot of pushback on the proposed landing page because the client was convinced that this kind of radical simplification would not work for a target audience with this level of sophistication and interest in research. This assumption formed the foundational question of the landing page test: Did the information-rich original page instill loyalty and provide value to visitors? Or did it serve to scare away prospects with its voluminous text, complicated page layout, and unclear call-to-action?
As it turns out, the answer to both questions was “yes” – to a degree. Although the redesigned (challenger) version of the page ended up having a much higher conversion rate to free trials, the baseline registration process instilled more loyalty and resulted in higher delayed conversions among the stragglers.
Usually we see a clear elbow in the cumulative conversion curve, followed by delayed conversions trickling in. Sometimes the slope of the delayed-conversion “tail” is proportional to the height of the elbow (when displayed on a logarithmic scale, as in Figure 4), in which case the percentage of delayed conversions is the same for all tested versions of the landing page. But this is not always the case.
In this test, the challenger version had an elbow at 9.5 percent and an 11 percent final conversion rate (an increase of 16 percent). The original baseline version of the page had an elbow at 5 percent and eventually peaked at 6.5 percent (an increase of 30 percent). So the original baseline had a higher percentage of delayed conversions but a lower overall conversion rate at the end of the day.
Clearly, tracking the actions of the stragglers was critical to predicting the percentage improvement accurately. If we had measured at the elbow point (one-hour delay), we would have predicted a 90 percent improvement (9.5 percent divided by 5 percent). Once we aged the data, we saw the more accurate 70 percent projection (11 percent divided by 6.5 percent). This was very close to the actual 75 percent observed revenue-per-visitor increase after the free trial was completed. Our subsequent analysis showed that the original version influenced a higher percentage of committed visitors to return later, and closed some of the performance gap against our simplified challenger over time.
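The arithmetic behind these two projections is easy to reproduce. Here is a quick sketch in Python using the conversion rates quoted above (the `lift` helper is my own naming, not part of any testing tool):

```python
# Conversion rates (percent of visitors) reported in the test above.
challenger_elbow, challenger_final = 9.5, 11.0   # redesigned page
baseline_elbow, baseline_final = 5.0, 6.5        # original page

def lift(challenger, baseline):
    """Relative improvement of the challenger over the baseline, in percent."""
    return (challenger / baseline - 1) * 100

# Measuring at the elbow (one-hour delay) overstates the improvement.
print(f"Lift at the elbow:       {lift(challenger_elbow, baseline_elbow):.0f}%")
# Aging the data gives the more accurate projection (~69%, roughly 70%).
print(f"Lift after aging data:   {lift(challenger_final, baseline_final):.0f}%")
```

The first figure is the inflated 90 percent projection; the second is the aged projection of roughly 70 percent, which lined up with the observed 75 percent revenue-per-visitor increase.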
So keep this lesson in mind when designing your landing page test. You’ve got to understand your visitor behavior and know what percent of conversions tend to be delayed so that you can make a plan for how you’ll treat the stragglers. You basically have two options:
Ignore the stragglers. If your percentage of stragglers is very low, you could simply measure at the elbow point on the graph. In other words, you can disregard all of the conversions that happen after a certain maximum delay time (which corresponds to the elbow for your particular landing page). Since the percentage of stragglers is small, this won’t change your results significantly and you’ll be able to immediately compare the newest results to those from previous data collection periods.
“Age” your data. Age your data from the most recent data collection period to make sure that you’re comparing properly against past periods. As was the case with Power Options, the conversion rate increase will usually begin to flatten out at some point. So you should wait the appropriate amount of time before looking at the conversion rate. Exactly where to make this cutoff is a judgment call, but a good rough guideline is to aim for the point at which you reach 95 percent of the final conversion rate. Depending on your business, this could range from a few days to a few weeks.
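The “aging” option can be operationalized with a simple cutoff rule: track the cumulative conversion rate by delay and stop aging once it crosses 95 percent of its final level. A minimal sketch, using illustrative numbers rather than the Power Options data:

```python
# Illustrative cumulative conversion rates by delay since the visit.
# (days since visit, cumulative conversion rate in percent)
cumulative = [
    (1, 5.0),
    (2, 5.8),
    (4, 6.1),
    (7, 6.3),
    (14, 6.5),  # assumed final, fully "aged" rate
]

final_rate = cumulative[-1][1]
threshold = 0.95 * final_rate  # the rough 95% guideline

# First delay at which the cumulative rate reaches the threshold.
cutoff_days = next(days for days, rate in cumulative if rate >= threshold)
print(f"Age data for {cutoff_days} days before reading results")
```

With these numbers the cutoff lands at day 7, since 6.3 percent exceeds 95 percent of the final 6.5 percent rate. In practice you would fit the curve from your own historical conversion logs.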
This article originally appeared in Tim’s ClickZ column April 17, 2012