This post is an expansion on something I discussed in my talk at MozCon this year: my view that a lot of time spent on keyword research is essentially wasted.
Don’t get me wrong — keyword research is, of course, important. SEOs and businesses use keyword research to decide which parts of their business to prioritize, to forecast the results of their activities, to appraise possible opportunities for expansion, and of course to write title tags, brief copywriters, or engage in other tactical activity. The point is, if you paid a non-SEO consultant — perhaps a management consultant — for this level of strategic insight, you’d pay a fortune, and you’d listen very carefully.
And yet, in SEO businesses, keyword research is the task most likely to be delegated to the most junior member of the team. It’s considered grunt work. It’s boring, tedious, repetitive, and easy — so we think. I know this, because I have made this (mistaken) assumption many times as a senior SEO, and was on the receiving end of that “grunt work” early in my career.
There are three main ways I think we’re turning what should be an involved piece of strategic thinking into tedium. I’ll cover them below, along with what to focus on instead.
Quantity vs. quality
If you hit up your favorite search engine and look for some guides on how to conduct keyword research, you’ll find that a common theme is to start by amassing the most exhaustive list of potential keywords possible. If you run out of rows in Excel, or cells in Google Sheets, that is seemingly a badge of honor.
Perhaps you’ll use tools like keyword multipliers, Google Search Console, and GA Site Search to add as many obscure variants of your target keywords as you can find.
This is a fool’s errand, though.
The very blog you’re reading right now gets 48% of its daily traffic from keywords that drive only a single click. And it’s not like we’re getting the same selection of low traffic keywords every day, either. Google themselves have said repeatedly that 15% of the keywords they see every day are totally new to them.
In this context, how can we hope to truly capture every possible keyword someone might use to reach our site? It seems entirely pointless.
Why not save ourselves an absolute shit ton of time, and greatly simplify our analysis, by just capturing the few main keywords for each unique intent we wish to target?
It’s easy to produce an enormous list of keywords that contains perhaps three or four intents, but it’s a grand waste of time, as you’ll be producing some small fraction of a vast unknowable sea of keywords, and you’re going to optimize for the main ones anyway. Not to mention, it makes the rest of your analysis a total pain, and extremely difficult to consume afterwards.
Instead, try to capture 90% of the intents for your potential new page, product, or site, rather than 90% of the potential keywords. It’s far more realistic, and you can spend the time you save making strategic choices rather than swearing at Excel. On which note…
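To make that concrete, here's a toy sketch of collapsing a raw keyword list into intent groups before any further analysis. The grouping rule (word overlap), the threshold, and the sample keywords are all my own illustrative assumptions — real intent grouping might use SERP similarity instead — but the principle is the same: keep one main keyword per intent, not every variant.

```python
# Collapse a raw keyword list into intent groups by simple word overlap.
# Toy heuristic for illustration only; threshold and data are invented.

def jaccard(a, b):
    """Word-level Jaccard similarity between two keyword strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def group_by_intent(keywords, threshold=0.4):
    """Greedily assign each keyword to the first group whose
    representative keyword it overlaps with; else start a new group."""
    groups = []  # each group is a list of (keyword, volume) tuples
    for kw, vol in sorted(keywords, key=lambda x: -x[1]):
        for g in groups:
            if jaccard(kw, g[0][0]) >= threshold:
                g.append((kw, vol))
                break
        else:
            groups.append([(kw, vol)])
    return groups

keywords = [
    ("mens running shoes", 5000),
    ("running shoes for men", 800),
    ("best mens running shoes", 400),
    ("trail running backpack", 900),
]

groups = group_by_intent(keywords)
# The first entry in each group is its highest-volume "main" keyword.
for g in groups:
    print(g[0][0], "->", len(g), "variant(s)")
```

Four keywords collapse into two intents here, and the rest of your analysis only needs to track the two representatives.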
Redundant SERP scraping
Another common piece of advice is to manually use the Google SERPs as a keyword research tool. This is fine in principle, and it’s advice I’ve given, particularly to editorial teams researching individual pieces of content, as it helps to make the research feel more grounded in what they’re actually trying to affect (Google SERPs).
However, for at-scale keyword research conducted by an SEO professional, this is an overly manual and redundant step. Why?
Because you’re probably already doing this, possibly twice, in other parts of your process. If you use a popular SEO suite — preferably Moz Pro, of course, but it’s not just us — this data is very likely already baked into any suggestions you’ve downloaded. Save yourself the manual data collection (or worse yet, the unreliable and finickety SERP scraping on your own personal computer) and just collect this valuable information once.
Similarly, if you’re mainly looking for keywords you ought to rank for rather than the wide open ocean of opportunity, you’ll get 90%+ of that by seeing who your competitors are, and what they rank for that you don’t.
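In code terms, that competitor gap is just a set difference. A minimal sketch, with made-up rankings data standing in for what you'd actually export from your SEO suite:

```python
# Keyword gap analysis: which keywords do competitors rank for
# that we don't? A plain set difference over exported ranking data.
# Domains and keywords below are invented for illustration.

our_keywords = {"keyword research", "seo audit", "link building"}

competitor_rankings = {
    "competitor-a.example": {"keyword research", "serp features", "seo audit"},
    "competitor-b.example": {"serp features", "content pruning", "link building"},
}

# Union of everything any competitor ranks for, minus our own set.
gap = set().union(*competitor_rankings.values()) - our_keywords
print(sorted(gap))  # ['content pruning', 'serp features']
```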
It really doesn’t have to be some massive ordeal. Again, this is about spending more time on the important bit, and less time on the grunt work.
The wrong metrics
“The important bit”, though, is probably prioritization, which means it’s probably about metrics.
Typically, the primary metric involved in keyword research is search volume, and that’s probably unavoidable (although, not all search volumes are created equal — watch out for a Whiteboard Friday on this in the Autumn), but even the most accurate search volumes can miss the full story.
The core issue here is that click-through rates for keywords vary massively. The below range is for a random sample from MozCast:
The chart shows that only around a third of the keywords in this random set had a CTR close to 100% for all organic results combined. It also shows the high variance in total CTRs across the keywords in this group.
This is not untypical, and well-discussed in the SEO space at this point. Many SERPs have organic results that start essentially below the fold. What it means for keyword research is that volume is not that great a metric. It’s an important component — you need both volume and CTR to work out how many clicks might be available — but on its own, it’s a little suspect.
Again, this doesn’t have to be a massive ordeal: many tools, including Moz Pro, will give you CTR estimates for your keywords. So in the same place you get your volumes, you can get a metric that will stop you prioritizing the wrong things — or, in other words, stop you further wasting your time.
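As a quick illustration of why volume alone misleads, here's a sketch of prioritizing by estimated available clicks (volume × total organic CTR) instead of raw volume. The numbers are invented, but the pattern — a high-volume keyword losing out to a smaller one with a cleaner SERP — is exactly the situation the chart above describes:

```python
# Prioritize keywords by estimated available clicks (volume * CTR)
# rather than raw search volume alone. All figures are hypothetical.

keywords = [
    # (keyword, monthly volume, estimated total organic CTR)
    ("keyword a", 10000, 0.35),  # big volume, but SERP features eat the clicks
    ("keyword b", 4000, 0.90),   # smaller volume, clean organic SERP
]

ranked = sorted(keywords, key=lambda k: k[1] * k[2], reverse=True)
for kw, vol, ctr in ranked:
    print(f"{kw}: ~{vol * ctr:.0f} clicks available")
```

Ranked by volume, "keyword a" wins; ranked by estimated clicks, "keyword b" does (~3,600 vs. ~3,500), which is the whole point of pulling CTR alongside volume.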
TL;DR: stop wasting your time
There’s a huge amount of skill, nuance, and experience that comes into keyword research that I’ve not covered here. But my hope is that we can get into the habit of focusing on those bits, and not just screaming into the void spreadsheet.