On its own, web analytics software is just a window to the past. A testing tool, on the other hand, does not track downstream metrics like customer lifetime value (CLTV).
Quantitative tools will tell you what’s broken on your site. But you need qualitative tools to come up with ideas about why it’s broken.
In an episode of Landing Page Optimization, Effin Amazing CEO Dan McGaw and SiteTuners CEO Tim Ash discuss why different tools need to go hand in hand if you want good data you can base your decisions on.
Dan and Tim agree that most analytics software acts as a tally counter that tells you what happened in the past. Sure, there are predictive tools coming out, but they aren’t there yet.
You need to be wary of these limitations from the get-go. And to maximize the value, you need to marry the data you get from tools like Google Analytics and Mixpanel with survey software and testing tools.
Even with testing tools, though, there’s an inherent limitation.
There are some tests that are hugely successful in increasing particular actions, particular conversions, on a page. However, sometimes if you look two steps down the funnel, the test “winner” destroys everything else.
It’s important not to take a myopic view – in cases like that, web analytics catches what the tests miss. Combining test data with web analytics tends to produce a broader view.
You have a lot of potential goals for the site, but it’s important to try and streamline those, and get as close as possible to one primary metric per path.
The entry point matters – a visitor entering via the home page can have a different success measure compared to someone who’s entering through the newsletter sign-up page.
You need to control for where traffic enters your test, and you need to ensure that people who come in through a particular entry point are the only ones counted on a particular test. That ensures you’ll have cleaner data to look at.
It’s all about tracking the right people with the right metrics. If tests have too many goals or vague goals, people tend to end up with bad data.
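As a minimal sketch of what that entry-point control might look like, the snippet below only counts sessions that entered the test through a given page before computing per-variant conversion rates. All field names (`user_id`, `entry_page`, `variant`, `converted`) are hypothetical, not tied to any specific analytics tool.

```python
from collections import defaultdict

def conversion_rates(events, entry_point):
    """Count conversions per test variant, counting only sessions
    that entered through the given entry point."""
    seen = defaultdict(int)
    converted = defaultdict(int)
    for e in events:
        if e["entry_page"] != entry_point:
            continue  # exclude traffic from other entry points
        seen[e["variant"]] += 1
        converted[e["variant"]] += e["converted"]
    return {v: converted[v] / seen[v] for v in seen}

# Illustrative session records
events = [
    {"user_id": 1, "entry_page": "/home", "variant": "A", "converted": 1},
    {"user_id": 2, "entry_page": "/home", "variant": "B", "converted": 0},
    {"user_id": 3, "entry_page": "/newsletter", "variant": "A", "converted": 1},
]

print(conversion_rates(events, "/home"))  # only /home sessions are counted
```

Filtering before counting, rather than after, is the point: the newsletter visitor never contaminates the home-page test’s numbers.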
Not all of your conversions are going to happen there and then.
There are free trials, where the number you care about is not the sign ups, but the total number of those who don’t unsubscribe and proceed to pay you money. Or you could have a subscription model, and drop-offs per month would be the data you need to look at.
When tracking delayed metrics, it’s important to do two things:
- Tag the user IDs where viable, so that as the data tied to a user ages, you can still attribute later conversions back to it
- Understand what type of delay you need to look at, because there are cases where conversion is going to be 45, even 60 days away
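The two steps above can be sketched together: tag each sign-up with a user ID and a variant, then attribute payments that arrive weeks later back to the test, but only within a conversion window. The record shapes and the 60-day window are illustrative assumptions.

```python
from datetime import date, timedelta

# Sign-ups tagged at test time: user_id -> (variant, signup_date)
signups = {
    "u1": ("A", date(2024, 1, 1)),
    "u2": ("B", date(2024, 1, 1)),
}

# Payment events arriving weeks later: (user_id, payment_date)
payments = [
    ("u1", date(2024, 2, 10)),  # 40 days later: inside the window
    ("u2", date(2024, 4, 1)),   # 91 days later: outside the window
]

WINDOW = timedelta(days=60)  # conversion may be 45, even 60 days away

delayed_conversions = {"A": 0, "B": 0}
for user_id, paid_on in payments:
    variant, signed_up = signups[user_id]
    if paid_on - signed_up <= WINDOW:
        delayed_conversions[variant] += 1

print(delayed_conversions)  # {'A': 1, 'B': 0}
```

Without the user-ID tag, the February payment could never be traced back to the January test variant that produced it.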
You should also not be solely dependent on your A/B testing tool. You should supplement it with web analytics data. Make sure that as you increase conversions for a particular page via split testing, you’re not creating problems further down the line for your customer lifetime value (CLTV).
On top of traditional web analytics tools and split testing software, you need to maximize your view of the customer by also using qualitative tools. Those are tools like Qualaroo that ask visitors particular questions: did the visitors find what they were looking for, did the site carry the product they need, etc.
Sometimes, this is hard for certain marketers to do.
When you look at web analytics, you’re dealing with a base amount of certainty. But with qualitative – user emotions, judgments, opinions – there’s a certain amount of wiggle room, a certain amount of ambiguity.
That’s why it’s important to bring quantitative and qualitative data together. It’s about combining things like pages with high bounce rate, which is a quantitative stat, and then from there figuring out what is similar about all the people who did not bounce – their tasks, their success rates, and other forms of qualitative data.
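A simple way to picture that pairing: flag pages with a high bounce rate (quantitative), then pull the survey answers collected on those pages (qualitative). The threshold and field names below are illustrative assumptions, not output from any particular tool.

```python
# Quantitative side: bounce rate per page, e.g. from web analytics
bounce_rates = {"/pricing": 0.72, "/features": 0.31, "/signup": 0.65}

# Qualitative side: on-page survey responses, e.g. from a tool like Qualaroo
survey_responses = [
    {"page": "/pricing", "answer": "Couldn't find monthly plans"},
    {"page": "/signup", "answer": "Form asked for too much info"},
    {"page": "/features", "answer": "Found what I needed"},
]

THRESHOLD = 0.5  # assumed cutoff for a "high" bounce rate

high_bounce = {page for page, rate in bounce_rates.items() if rate > THRESHOLD}
why_broken = [r for r in survey_responses if r["page"] in high_bounce]

for r in why_broken:
    print(r["page"], "->", r["answer"])
```

The numbers tell you *where* to look; the joined survey answers suggest *why* those pages are losing people.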
Figuring Out What to Test
You can use the insights you have gathered from qualitative data to improve your tests.
Marketers can look at the biggest part of the funnel where conversion is low, where there’s a significant drop-off, and start asking a lot of questions at that point. That way, you form a hypothesis before testing begins.
You have quantitative data, which tells you which pages are broken; then you go to your qualitative data to come up with ideas about why those pages are broken. Then you use the testing tools to verify whether your ideas improved the situation.
After that’s done, you can check for seasonality, and you can verify with web analytics data again to see if anything else got broken when you were fixing the original issue, creating a closed loop.
Listen or download the “Winning Through Analytics and Audience Surveying with Dan McGaw” podcast from Cranberry Radio.