Ask almost any SEO what Google uses to rank web sites, and you’ll get roughly the same answer:
- Google has over 200 ranking factors, and their relative weights are unknown to the public.
- Google runs thousands of experiments, so those weights move around.
- Sometimes engineers at Google can confirm things, like page speed being a factor.
Google has worked roughly that way for a while now – putting different ingredients into the recipe via a test, then checking if people like the result. (Good results: users don’t pogo-stick back to the search results page after clicking on a result, high click-through rate without users changing the query, etc.)
This model still works, roughly, but Google is changing and many of the changes are driven by machine learning and deep learning.
Here’s what that means for marketers.
Machine Learning and Deep Learning Basics
We’ve said this before, but it’s very tempting to describe machine learning and deep learning as using artificial intelligence techniques that work “just like a brain.”
Tempting, because it simplifies things, right?
Most of us can understand “works like a brain” more easily than “a convolutional neural network that gets better at learning with more inputs.”
The problem with describing it as something that works “like a brain” is two-fold:
- That’s not entirely accurate – deep learning uses techniques that are very different from what a brain does.
- Describing something as a brain gives it hype, which can be dangerous for expectations.
Let’s describe what they actually allow for: machine learning and deep learning build pattern-recognition systems, like those used for recognizing speech and faces, via supervised and unsupervised training.
This is a big deal in search, where Google has started to use entities rather than words to serve up results tied to the concept behind your search rather than the actual words you used.
Old School SEO – Change the Ingredients
Let’s tie this to how SEOs used to work, and how they should work today and in the near future.
In the old school world of SEO, out of the 200 ranking factors that Google talks about, a few things stand out.
If you have a lot of root-level links, your H1 tag matches the search query, and your browser page title makes your page clickable, you’re likely to rank. Add in a meta keywords tag (unused by Google for a while now because it became spammy, but hey, we’re talking old school here) containing the exact keywords the user searched for, and you’re in even better shape.
It’s that way because Google’s search quality team tested the weights given to those factors, ran satisfaction tests on the result, and figured that was a pretty good result for users.
Google has moved away from things like meta keywords, but it still works this way, by and large. Engineers can say with conviction that page speed on mobile isn’t a ranking factor yet, but it will be soon.
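That old-school paradigm can be sketched as a hand-tuned weighted sum, where the weights are the knobs a search quality team would test and re-test. To be clear, every factor name and weight below is invented for illustration; Google’s real factors and weights are not public.

```python
# Toy sketch of the old-school paradigm: ranking as a hand-tuned
# weighted sum of on-page and link factors. All factor names and
# weights here are hypothetical.

FACTOR_WEIGHTS = {
    "root_domain_links": 0.40,   # links from unique root domains (normalized 0-1)
    "h1_matches_query": 0.25,    # does the H1 tag match the search query?
    "title_clickability": 0.20,  # how clickable is the browser page title?
    "page_speed": 0.15,          # load-time score (normalized 0-1)
}

def old_school_score(page_factors):
    """Score a page as a weighted sum of normalized factor values (0-1)."""
    return sum(FACTOR_WEIGHTS[f] * page_factors.get(f, 0.0)
               for f in FACTOR_WEIGHTS)

page = {"root_domain_links": 0.8, "h1_matches_query": 1.0,
        "title_clickability": 0.6, "page_speed": 0.5}
print(round(old_school_score(page), 3))  # prints 0.765
```

The point of the sketch: in this world, the weights are explicit numbers someone at Google chose, which is exactly why engineers could confirm or deny individual factors.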
Machine learning is about that paradigm changing.
New SEO – Evaluate the Recipe
There’s a clever comparison of what an artificial intelligence does and what a watermelon farmer does over at Moz.
The gist is, watermelon farmers can’t really tell you how they can tell whether a watermelon is ripe.
In much the same way, a deep learning system can’t really tell you how it determined that the photo you’re looking at is of your friend John. It JUST knows, based on layers of inputs and cycles of facial-recognition improvements.
So for search, years from now, Google’s search quality engineers may not be able to tell you whether page speed is a large ranking factor. They might not know if Twitter can make your site rank higher or get indexed faster.
The machine learns from the inputs; engineers can study that learning and improve it over time, but what they can tell you about is the set of goals for the machines:
- High click-through rate
- Engagement with the search engine results page (SERP)
- Low chance of the searcher pogo-sticking back to the SERP
- High user satisfaction
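The shift in the goal list above can be sketched as well: instead of hand-tuning weights, a model learns them from outcome signals like click-through rate and pogo-sticking. This is a minimal logistic-regression sketch with made-up data and feature names, not Google’s actual system.

```python
# Minimal sketch of the new paradigm: a model learns its own weights
# from outcome signals rather than having them hand-tuned. The data,
# features, and labels are all invented for illustration.
import math

# Each row: [ctr, dwell_time, pogo_stick_rate]; label 1 = user satisfied.
X = [
    [0.9, 0.8, 0.1],
    [0.7, 0.9, 0.2],
    [0.2, 0.1, 0.9],
    [0.3, 0.2, 0.8],
    [0.8, 0.6, 0.3],
    [0.1, 0.3, 0.7],
]
y = [1, 1, 0, 0, 1, 0]

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5

def predict(row):
    """Probability of user satisfaction, via the logistic function."""
    z = bias + sum(w * x for w, x in zip(weights, row))
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent on log loss, one example at a time.
for _ in range(2000):
    for row, label in zip(X, y):
        err = predict(row) - label
        for i in range(3):
            weights[i] -= lr * err * row[i]
        bias -= lr * err

# The model now classifies every example correctly, but explaining *why*
# a page ranks means reading learned parameters, not a published list.
print(all((predict(row) > 0.5) == bool(label) for row, label in zip(X, y)))
```

Notice that the learned weights fit the data without anyone deciding in advance how much pogo-sticking “counts” – which is exactly why the engineers in this paradigm can describe the goals but not the individual factor weights.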
So in this new-ish paradigm, the SEO will have to think about two sets of things:
- Old-school SEO considerations (for the machine): Inputs, like matching the H1 with a high-traffic search term and making sure the site can be crawled
- UX expert considerations (for people): Outputs, like user satisfaction, bounce rate, and visitor ability to find information
If that sounds a lot like what a conversion rate optimizer or a user experience analyst does, that’s not an accident: that’s the direction Google is heading.
A Lot like CRO
Over time, the jobs of SEOs and CROs will be more tightly integrated. As visibility into the inputs for ranking factors fades, observing what makes the outputs shine will become more critical.
If you’re an SEO, there’s never been more pressure to learn the UX side of the equation.
Deep Learning References – oldies but goodies: