Before we dive into the basics of split testing, let’s get a few misconceptions out of the way:
- Split tests are expensive.
- They require a ton of technical knowledge.
- They require a lot of set up time.
If those beliefs are what’s keeping your team from running A/B tests, rest easy: none of them holds up, and you can get started right away.
It’ll take you a few minutes to set up a test with a basic free tool like Google Content Experiments (although Google now recommends Optimize as their testing platform).
Technical expertise for split testing is also a low bar to clear. If you have someone who can plug in the Google Analytics script for a site, then you have someone who can plug in similar scripts for the testing code.
It’s not the technical aspects that you should worry about – it’s everything else. Split tests should be one part of a multi-layered conversion optimization plan. And to use it that way, you need to understand how to use split tests properly.
Why should companies use split tests?
- If you’re looking to launch a new page or section of the website, or improve the flow of the cart, or find out why people are failing to do what they need to do, you DON’T need a split test. You need a usability test.
- If you’re looking to ensure that features don’t error out when launched, causing conversion dips, you DON’T need a split test. You need to rely on a user acceptance test.
At the most basic level, you’ll need a split test when you have two competing ideas about how to show an important page.
The underlying assumption there is that you should have a process for coming up with those types of ideas regularly. You should not run your split testing efforts as a fully separate thing from your other online marketing efforts.
What’s in a split test?
The basic components of a split test are a champion page and a challenger page:
The champion is traditionally a high-traffic page (so it’s worth fixing) that has either a high bounce rate or a low conversion rate (so a challenger is needed).
The challenger is generally a page similar to the champion, but with a few tweaks made based on theories about how the page can be improved.
What typically happens is, half the traffic gets sent to the champion page, and the other half goes to the challenger page. Some metric like bounce rate or clickthroughs gets measured, and over time, one of the pages wins.
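To keep that 50/50 split consistent (so a returning visitor keeps seeing the same version of the page), testing tools typically bucket visitors deterministically. Here is a minimal Python sketch of the idea, assuming a stable visitor ID from a cookie; the hashing scheme is illustrative, not any particular tool’s implementation:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str = "homepage-test") -> str:
    """Deterministically bucket a visitor so they always see the same page."""
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "champion" if bucket < 50 else "challenger"

# The same visitor ID always lands in the same bucket:
print(assign_variant("visitor-42"))
```

Because the assignment depends only on the visitor ID and the test name, no per-visitor state needs to be stored, and the split comes out close to 50/50 across many visitors.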
What about MVT?
Multivariate tests (MVTs) are a close cousin of split tests.
Instead of testing one variable between two pages – like the placement of the call-to-action or the presence of trust symbols above the fold – multivariate tests split the traffic among multiple pages, testing different combinations of elements at the same time.
There’s a higher traffic bar to clear to hold effective multivariate tests, but if you have a high-traffic site, it can be a more efficient way to test multiple variables compared to running several split tests for the same page.
If you’re just starting out, don’t worry about multivariate tests too much, for now. You’ll get plenty of bang for your buck doing split tests on the most critical pages.
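To see why multivariate tests need so much more traffic, note that the number of page versions grows multiplicatively with each element you test. A small Python illustration (the page elements and options below are made up):

```python
from itertools import product

# Three elements with two options each -> 2 x 2 x 2 = 8 page versions,
# each of which needs enough traffic to produce a reliable result.
headlines = ["Save 20% today", "Free shipping on every order"]
cta_colors = ["green", "orange"]
trust_badges = [True, False]

variants = list(product(headlines, cta_colors, trust_badges))
print(len(variants))  # 8
```

A plain split test divides traffic two ways; this small multivariate test divides it eight ways, which is why the traffic bar is so much higher.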
What do you need for split tests?
It doesn’t take a lot to get started with split tests, but you do need to think about two things:
Split testing tool
The free version of Google Optimize might be a good place to start if you have no experience with split tests, so you can see if it fits your needs.
When you hit the ceiling of what you can do with the tool, you can consider using paid tools. You can get demos from tools like Optimizely or VWO.
Stable traffic
As much as possible, you want to ensure that the traffic to the site doesn’t change drastically during the test. That means if your traffic swings during the holidays, or if you’re expecting a change in the makeup of your traffic, split tests are going to be less reliable.
Are there any companies that shouldn’t run split tests?
Before you actually run your tests, make sure you’re testing areas that can actually be improved by split tests, and that you have either the internal resources to run tests or a viable third party to help you conduct the A/B tests.
- If you don’t have Google Analytics or a similar tool installed to look for viable pages to test, you may want to delay running A/B tests.
- Don’t test areas that have existing crippling errors. Fix the big issues first before running the tests.
- If you have fewer than 10,000 to 20,000 visits per month for traffic-based split tests, or below 10 conversions per day, A/B testing might not be right for your organization yet.
- If you don’t have someone to plug in the scripts, and make the winner of the test the official version that lives on the site, you might want to delay holding tests.
How do you find pages to test?
There are a few things that can help you find the main pages to test: a web analytics tool, a survey tool, and some basic smarts about how to use both.
From your web analytics tool, you need to filter down to high traffic pages, so the page is worth improving. But it also helps if you find the high traffic pages where people don’t act, so you know you have a lot of room for improvement. Those two things tend to not be all that well correlated, so you have to dig for a bit to find that intersection.
If your web analytics tool is Google Analytics, you can use a handy feature called weighted sort to find pages that are inside that intersection, making hunting for pages a little easier.
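If you want a feel for what weighted sort accomplishes, here is a rough Python analogue: pull each page’s bounce rate toward the site average, with low-traffic pages pulled harder, so a 300-visit page with a 91% bounce rate doesn’t outrank a 12,000-visit page. This is an illustrative approximation, not Google’s actual formula; the smoothing constant `k` and the page data are made up:

```python
# (path, monthly visits, bounce rate) -- illustrative numbers only
pages = [
    ("/pricing", 12000, 0.62),
    ("/blog/tips", 300, 0.91),
    ("/features", 9500, 0.58),
]
site_avg = 0.45   # assumed site-wide bounce rate
k = 1000          # hypothetical smoothing constant

def weighted_bounce(visits, bounce):
    # Low-traffic pages get pulled harder toward the site average
    weight = visits / (visits + k)
    return weight * bounce + (1 - weight) * site_avg

# Sort so that high-traffic, high-bounce pages float to the top
for path, visits, bounce in sorted(
        pages, key=lambda p: weighted_bounce(p[1], p[2]), reverse=True):
    print(path, round(weighted_bounce(visits, bounce), 3))
```

Sorted this way, the busy `/pricing` page outranks the tiny blog post despite its lower raw bounce rate, which is exactly the intersection you’re hunting for.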
From your survey tool, you need to know the tasks people are trying to perform, and how successful they are at those tasks. There are a few questions you need to include in your website survey, but the most critical ones are these:
- What are you looking for on the website?
- Did you find the information you were looking for?
Where your web analytics tool surfaces the high-traffic, low-interaction pages, your surveys need to surface the intersection of two things:
- tasks that people care about most
- tasks where they fail a lot
Once you find that intersection, pick a critical page in that area to test.
What elements should you test?
Once you have the “champion” page identified based on data from the tools or business priorities, you need to formalize a theory about how to improve the page. Here are a few things you can try:
- Headline. Try to make it clearer on the challenger page. Or have it match the source of the traffic if the source is under your control (email, AdWords, etc.).
- Images. Use authentic photos (as opposed to stock photos). And if an image shows a face, have it look toward the call-to-action to direct the user’s attention.
- Call-to-action (CTA). Try a different location. Or try to add contrast with the color palette used by the rest of the website, so that it stands out.
- Form fields. Try reducing the number of fields on the challenger page.
What outcomes should you look for?
You can establish a few different kinds of goals for the split tests, depending on what type of page it is.
Reduced bounce rate
Reducing the bounce rate can be a viable goal if you are testing an entry point or a page largely designed for navigation.
Clickthroughs to CTAs
Use increased clickthroughs from the CTA to conversion points as the goal if you have a dedicated landing page or a product detail page. More clickthroughs ultimately lead to better sales.
However, this comes with a small caveat: if you have multiple CTAs and they lead to product purchases with different profit margins, you should take that into account. It’s possible to increase CTA clickthroughs while reducing average order value.
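One way to account for that caveat is to compare revenue per visitor rather than raw clickthroughs. In this Python example with made-up numbers, the challenger wins on clicks but loses on revenue:

```python
# Illustrative numbers only: the challenger gets more CTA clicks,
# but those clicks go to a lower-margin product.
champion = {"visitors": 10000, "clicks": 400,
            "purchase_rate": 0.25, "avg_order_value": 120.0}
challenger = {"visitors": 10000, "clicks": 520,
              "purchase_rate": 0.25, "avg_order_value": 80.0}

def revenue_per_visitor(variant):
    return (variant["clicks"] * variant["purchase_rate"]
            * variant["avg_order_value"] / variant["visitors"])

print(revenue_per_visitor(champion))    # 1.2
print(revenue_per_visitor(challenger))  # 1.04
```

Judged on clickthroughs alone, the challenger looks like a clear winner; judged on revenue per visitor, it’s actually worse.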
How long should the tests run?
Generally, split tests should run as long as it takes to get a result with 90% or 95% confidence that the winner hasn’t been picked based on chance.
Getting to that level requires at least one of two things:
- A high volume of traffic, so that you can get results quickly
- Leaving the test on for a long time, so that you can get a reliable result even if the page being tested isn’t visited very much
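Under the hood, most tools compute something like a two-proportion z-test to decide whether the gap between champion and challenger is real. A simplified Python sketch with made-up conversion counts (real tools layer more on top of this, but the core idea is similar):

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: z statistic for challenger (B) vs champion (A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 4% vs 5% conversion on 5,000 visitors each (made-up numbers)
z = z_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
# Roughly: |z| >= 1.64 ~ 90% confidence, |z| >= 1.96 ~ 95% (two-sided)
print(round(z, 2))
```

Notice the sample sizes in the denominator: with less traffic, the standard error grows and the same 1-point lift may never clear the confidence bar, which is why low-traffic pages need to run so much longer.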
The longer you run your test, the higher the seasonal traffic risk. You’ll also be exposed to changes in the set of visitor types your site gets and other factors. This is one of the reasons it’s usually a good idea to test pages that tend to get a lot of traffic regardless of bounces or clickthroughs to CTAs.
One thing to remember – as soon as your tool produces results with the right level of statistical significance, end your test. You run additional risks by leaving tests that should have been concluded running, like search engine penalties.
Do split tests carry any risks for SEO?
Split testing carries some minor search engine optimization risks, but only if you get the technical aspects of split testing wrong.
- If you leave your tests running indefinitely even after your tool has produced a result, you risk running into search engine penalties. Search engine spiders generally like to “see” the same things that users see when they crawl a site, and split tests that run forever basically go against that.
- When you have a champion page URL and a challenger page URL, it’s generally good practice to point to the champion page as the “canonical” URL. If you’re not familiar with tools like “rel=canonical” in-house, make sure you work with a firm that can use it to make sure your split tests follow best practices.
What are the limitations of split tests?
While split testing is great, you’ll quickly hit the ceiling of its usefulness if you expect more from it than what it is suited to deliver. Split tests are not great at solving some issues:
Hitting the global maximum
Split tests will incrementally get you to the local maximum of what your current website design allows for. For anything more than that, you might need a website redesign – split tests will not get you there.
Fixing fundamental site issues
A/B tests tend to be good for individual page improvements. If you have large-scale problems like a confusing mega-menu or broken navigation paths, split tests will not help you much with the big picture.
Lack of conversion optimization plan
Split tests run once in a blue moon, whenever management has the time, are a hobby, not a program. You’ll get some gains, but you will not get the momentum you’d typically see from a proper conversion rate optimization plan. Split tests will not get very far as an isolated tool – you need a CRO program that includes split tests.
Avoid Split Testing Pitfalls
Split tests are a great way to improve critical pathways on your site – whether they are navigation pages where people tend to bounce or custom landing pages where the lack of clicks to your CTA hurts the business.
When you have your original page as the champion and an idea you’re testing as a challenger, you can settle a lot of debates with data. That saves the company time and energy compared to protracted interdepartmental arguments about what works on the website.
That said, you do need to avoid some pitfalls:
- Don’t use split tests when what you need is a usability test.
- Don’t use A/B tests when what you need is user acceptance testing.
- Don’t assume split tests are right for you automatically. Look at your data first.
- Don’t try to use split tests if you can’t add things like Google Analytics scripts to the site.
- Don’t use A/B tests without having a clear goal for the page you’re testing.
- Don’t leave split tests running longer than they need to run. There is a penalty risk.
- Don’t confuse running split tests with having a conversion rate optimization strategy.
If you run split tests regularly as part of a larger conversion program, you can get pretty significant gains for your site.