A/B Split Testing for Landing Page Optimization

A/B, or split, testing is the simplest form of conversion testing and can be performed on pages with as few as 10 conversions per day. You compare your original version against one or more alternatives. Once you have collected enough data to know which is better, you keep the best-performing one.

A/B split testing is named for the two website versions that you are considering. "A" refers to the original or baseline version of your site. "B" refers to the alternative or challenger version. Often A/B splits are done on an ongoing basis. This format is commonly referred to as "champion/challenger".

A/B splits can also be done with more than two versions. Depending on your traffic level, you can test several alternatives at once (e.g., the original plus three alternatives would become an "A/B/C/D" split test).
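Splitting traffic evenly among the versions can be sketched as follows. This is a minimal illustration, not a production assignment scheme: hashing the visitor ID gives each visitor a stable bucket, so a returning visitor always sees the same version. The visitor ID and variant names are made up for the example.

```python
import hashlib

def assign_variant(visitor_id: str, variants: list[str]) -> str:
    """Deterministically map a visitor to one of the test versions."""
    # Hash the ID so the assignment is stable across visits and
    # spreads visitors roughly evenly across the variants.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# An A/B/C/D split: the original "A" plus three challengers.
variants = ["A", "B", "C", "D"]
print(assign_variant("visitor-1042", variants))
```

Because the assignment is deterministic, no session storage is needed to keep a visitor on the same version for the length of the test.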

A/B split tests can be conducted on individual page elements such as headlines, calls-to-action, offers, or sales copy. They can also be used to compare completely unrelated versions of your whole page. There are pros and cons to both approaches. If you only test individual elements within their current context, you will not be able to capture the advantages of a comprehensive "clean sheet" page redesign. But you will be able to tell the exact impact of changing the element in question. If you choose to test completely redesigned pages, you will not know which elements contributed to the improved performance. You will also incur higher design costs without knowing if you will have a favorable outcome.

The math of A/B split testing is relatively straightforward. You simply pick a confidence level (how sure you want to be that one version is really better than another), and then wait to collect enough data. The length of the data collection depends on two factors: the traffic data rate to your pages, and the size of the conversion rate differences found in your test. Clearly superior versions of your site will "separate out" much more quickly than two versions with almost identical performance. Tracking your A/B test can be done with simple spreadsheet formulas or through reports in most A/B split testing tools.
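The "simple spreadsheet formulas" mentioned above usually amount to a two-proportion z-test. The sketch below (plain Python, no external libraries, with illustrative numbers) computes a challenger's relative lift and the one-sided confidence that it really beats the baseline.

```python
from math import erf, sqrt

def ab_significance(conv_a, visits_a, conv_b, visits_b):
    """Return (relative lift, one-sided confidence) for B vs. baseline A."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(p * (1 - p) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # One-sided confidence from the standard normal CDF.
    confidence = 0.5 * (1 + erf(z / sqrt(2)))
    return p_b / p_a - 1, confidence

# Baseline: 90 conversions on 3,000 visits; challenger: 120 on 3,000.
lift, conf = ab_significance(conv_a=90, visits_a=3000, conv_b=120, visits_b=3000)
print(f"lift: {lift:.1%}, confidence: {conf:.1%}")
```

With these numbers the challenger clears a 95% confidence threshold; had the two rates been nearly identical, the same traffic would have left the test inconclusive, which is exactly the "separating out" behavior described above.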

Advantages of A/B Split Testing

  • Ease of test design - Unlike more complicated multivariate tests, split tests do not have to be carefully designed or balanced. You simply decide how many versions you want to test, and then split the available traffic evenly among them. No follow-up tests are required to verify the results—the best performer in the test is declared the winner.
  • Simple to implement - Many software packages are available to support simple split tests. If you are testing granular test elements, you can design, set up your test, and be collecting data literally within a matter of minutes. This can be done in most cases without support from your I.T. department or others within the company. You may even be able to collect the data you need with your existing Web analytics tools.
  • Requires little knowledge of statistics - Only very simple statistical tests are needed to determine the winner. Basically, all you have to do is compare the baseline version to each challenger to see if you have reached your desired confidence level.
  • The only approach available to low data rate sites - If your landing page only has a few conversions per day, you simply cannot use more advanced tuning methods. But with the proper selection of the test variable and alternative values, you can still achieve significant results in a split test. Improvements in the double or even triple digits are not uncommon.
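For low data rate sites, a quick way to see whether a split test is even feasible is a back-of-the-envelope duration estimate. The sketch below uses the common rule of thumb of roughly 16 × p(1−p)/d² visitors per variant (for about 95% confidence and 80% power), where p is the baseline conversion rate and d is the absolute difference you hope to detect; all the input numbers are illustrative assumptions.

```python
def days_to_run(daily_visitors, baseline_rate, relative_lift, variants=2):
    """Rough number of days a split test needs at ~95% confidence, 80% power."""
    d = baseline_rate * relative_lift  # absolute rate difference to detect
    # Rule-of-thumb sample size per variant: 16 * p * (1 - p) / d^2.
    n_per_variant = 16 * baseline_rate * (1 - baseline_rate) / d ** 2
    return n_per_variant * variants / daily_visitors

# A low-traffic page (200 visitors/day, 2% conversion) chasing a 50% lift:
print(round(days_to_run(daily_visitors=200, baseline_rate=0.02, relative_lift=0.5)))
```

Note how strongly the result depends on the expected lift: because d is squared, a page chasing a double-digit improvement finishes in weeks, while detecting a small difference on the same traffic could take many months.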

Drawbacks of Split Testing

  • Can only test one page element or design at a time - While you have probably identified a number of potential issues with your landing page, A/B testing requires you to test only one new idea at a time. You will need to guess which ideas to test first (based on your intuition about which ones might make the most difference).
  • Inefficient data collection - In a split test, each visitor's data informs only the one version shown to them. Multivariate tests, because they measure more than one variable with the same traffic, make much more efficient use of the data that is collected.
  • No way to discover the importance of page elements - Some A/B tests involve multiple changes, or even a whole-page redesign. In the end, you'll know which version performed best, but you won't know what specific elements contributed to the increase in conversions. By putting all the elements you want to test into a single version, you've aggregated the data and lost the ability to look at them separately.
  • Does not take variable interactions into account - By definition, split tests consider only one variable at a time, so you cannot detect variable interactions. Furthermore, a series of split tests is not the same as a multivariate test with the same variables. Depending on the variable interactions, you may not be able to find the best-performing recipe at all.