In the ninth of our 12 days of LPO tips, SiteTuners takes you through the different tuning methods.
Tuning is an exercise in focus.
Even if you know how to conduct a test, testing the wrong elements means no amount of test validity will improve your conversions. Even if you know which elements to test, ignoring value means you might increase conversions but hurt revenue. Even when you factor in fixed and variable values, your tests may still fail because of biased visitors, shifts in traffic mix, seasonal events, or technology changes. And then there's the test itself: you need to make sure the method you use fits the type of traffic and conversions you're getting, and the number of elements and variable interactions you need to test.
It’s not all doom and gloom, though – tuning and data-driven decision making are two of the cornerstones of effective digital marketing. But you need to know the size and shape of your goals.
Conversion Rate Optimization (CRO) for landing pages, distilled to its core, is about two things:
1. Deciding what to change
2. Determining the impact
To do this, you have a range of options.
One of the weakest ways to decide changes and measure impact is sequential testing. It is better than doing nothing, which is what many companies do, but it leaves a lot to be desired. Sequential testing is essentially changing the elements on the landing page, then reviewing over time whether the changes increased click-throughs to conversion points and decreased bounce rate.
This is a low-cost way of improving pages, but it is more subject to seasonality than other methods, and it gets tough to compare apples to apples. Additionally, this risks lowering the existing conversion rate while gathering the data, and the data is generally subject to a changing traffic mix along with other things that may taint the results with bias.
By contrast, A-B testing is more stable, useful, and valid. The pages are run in parallel with roughly equal traffic, and the underlying math is basic and well understood. A range of things make it attractive:
• Ease of test design
• Ease of implementation
• Ease of analysis
• Ease of explanation
• Flexibility in defining the variable values
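The "basic math" behind analyzing an A-B test is typically a two-proportion z-test: did page B convert at a significantly different rate than page A? A minimal sketch, using Python's standard library and made-up visitor and conversion counts for illustration:

```python
from math import sqrt, erf

def ab_test_z(conv_a, visits_a, conv_b, visits_b):
    """Two-proportion z-test comparing conversion rates of pages A and B."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical example: 10,000 visitors sent to each variant
p_a, p_b, z, p_value = ab_test_z(conv_a=200, visits_a=10_000,
                                 conv_b=260, visits_b=10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.4f}")
```

With these illustrative numbers (2.0% vs. 2.6%), the p-value comes in below 0.01, so B's lift would be called statistically significant at the usual 95% threshold. In practice, a testing tool runs this calculation for you; the point is that the analysis really is this simple, which is a big part of A-B testing's appeal.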
For pages and sites with low data rates or limited conversions, there’s nothing like a few A-B tests to get a company going. However, there are a wide set of circumstances where A-B tests are not enough. Consider:
1. They can accommodate only a limited number of combinations
2. For large data rates, they are inefficient at finding the best combinations of elements
3. They do not consider variable interactions, or how elements affect other elements (headers to CTA buttons, colors to layouts, etc.)
Done correctly, multivariate testing can yield the highest rewards. This approach simultaneously gathers information about multiple variables. You can then conduct an analysis of the data to determine which combination results in the best performance.
Multivariate tests vary in terms of how the data is collected and how the data is analyzed, but when best practices are applied, they address many of the flaws of A-B testing.
1. The combinations tested can number in the thousands or millions for large data rates
2. They are efficient at finding the best combinations (but they may not be great at determining why those are the best combinations)
3. They take variable interactions into account
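To see why the combination count explodes, multiply the number of variants per element. A small illustration with hypothetical element and variant counts (the specific elements and numbers are made up):

```python
from math import prod

# Hypothetical landing page: number of variants tested per element
variants = {
    "headline": 4,
    "hero_image": 3,
    "cta_text": 4,
    "cta_color": 3,
    "layout": 2,
}

# A full-factorial multivariate design covers every combination,
# including all interactions between elements
full_factorial = prod(variants.values())
print(full_factorial)  # 4 * 3 * 4 * 3 * 2 = 288

# Testing the same elements one at a time with A-B tests covers far
# fewer recipes (the baseline plus each single-element change) and
# never observes how elements interact
one_at_a_time = sum(v - 1 for v in variants.values()) + 1
print(one_at_a_time)  # 12
```

Add a couple more elements or variants and the full-factorial count quickly reaches the thousands, which is why multivariate tools rely on efficient designs and why the technique demands high traffic.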
The limitation is that multivariate testing requires higher traffic and conversion volumes. The ROI gains from finding the right combination are more dramatic, though, and the technique really shines for high-traffic sites.
That’s it. Any test is still better than just looking at a snapshot of your analytics data (or having no data at all), so find the right-sized solution that works for you, and test with a vengeance.
Next: “Assemble the Right Usability Team.” SiteTuners covers the usability team.