How to Benchmark Your SEO Performance
An increasingly common request from in-house SEO teams or digital marketing managers is for help figuring out how to benchmark their SEO operations.
This could be from international managers wanting to compare and contrast individual country-level operations, it could be from senior management looking to understand how their investment compares to the competition, or it could be from investors or potential acquirers looking to perform SEO due diligence.
Whatever the reason, the goal is usually to answer questions like:
- How does our effort and investment compare to the competition? (Or how does it compare across regions?)
- How do our capabilities stack up across different areas of focus and specialism?
- Are our activities effective?
- Are we getting the outcomes we might expect given all of the above?
- Where do our outcomes position us in the market?
Whatever the exact questions being asked, I think it’s important to separate out questions about investment, capabilities, activities, and outcomes. They are obviously all related, but if you look only at outcomes, for example, you can be led astray in either direction:
- A strong historic base could lead to complacency even as an under-invested program starts to fall behind the competition
- A lack of previous investment, or poor past decisions, could lead to relatively poor current performance even while the capability of the team and its current activities are much more promising
Asking the right questions
In order to make the output of an exercise like this useful, it has to be received as reasonably objective, and also distinguish with sufficient resolution between the cases that would lead to different actions.
Where possible, building on objective, quantitative data is helpful – whether it is competitor headcount or some measure of search share of voice, gathering data from an agreed-upon external source adds validity to your findings.
When you have the raw data, though, you are going to want to put it into “buckets” to assign some measure of red / amber / green (RAG) or similar. When you don’t have objective raw data, you are typically skipping straight to this qualitative classification step.
How to add a scoring system to SEO activities
I am a big fan of the exponential scale.
What I mean is, rather than trying to have buckets that are all the same size, I want to have clear granularity at the ends of the scale I really care about. Depending on the assessment, this could be the “failing” end or it could be the “outperforming” end.
I’ve advocated a similar approach with individual progress evaluations – rather than trying to have people grade their skills in particular areas out of 5 or 10 (a process fraught with challenges and ripe for disagreement) – I have had people bucket themselves into the following categories per task/area:
- No experience – does what it says on the tin – I don’t know about this area / haven’t worked on this / it isn’t relevant to my role
- Basic competence – I have done some work in this area. You can delegate tasks to me, but I may need support or have a few questions
- Core competence – I am rarely stumped in this area and can handle poorly-defined tasks with no worries. I can teach, train, and manage others’ work in this area
- Expert – I am one of a small number of people everyone in the company turns to with “1%” problems in this area
- Renowned expert – I am acknowledged outside the company for being right at the top in this area
Note that the top levels are supremely hard – deliberately so. “Renowned expertise” will be rare even on an amazing team. Not every subject area will have a renowned expert and it’s extremely rare that any one person will have more than one or two areas of renowned expertise.
The benefit of this, compared to a linear 5-point scale, is that the vast majority of a capable team will be rated “3” (core competence), newer team members will be rated “2” (basic competence), and there will be only a handful of higher ratings. There is almost no disagreement about where someone sits in the tiers, and even though “3” covers a very wide range of skills, that lack of granularity generally isn’t a problem, because it’s the outliers above and below that are the data points we most often really care about.
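To make the aggregation step concrete, here is a minimal sketch in Python of rolling bucketed self-assessments up per area so the outliers stand out. The names and ratings are invented for illustration, not real assessment data:

```python
from collections import Counter

# Ordered tiers from the self-assessment scale described above.
TIERS = ["No experience", "Basic competence", "Core competence", "Expert", "Renowned expert"]

# Hypothetical ratings: {team member: {area: tier}}.
ratings = {
    "Alice": {"Technical": "Expert", "Content": "Core competence"},
    "Bob": {"Technical": "Core competence", "Content": "Basic competence"},
    "Cara": {"Technical": "Core competence", "Content": "Renowned expert"},
}

def summarise(ratings):
    """Count how many people sit in each tier, per area."""
    per_area = {}
    for person_ratings in ratings.values():
        for area, tier in person_ratings.items():
            per_area.setdefault(area, Counter())[tier] += 1
    return per_area

for area, counts in summarise(ratings).items():
    # The outliers are what we care about: anyone above core competence, or with no experience.
    experts = counts["Expert"] + counts["Renowned expert"]
    gaps = counts["No experience"]
    breakdown = ", ".join(f"{tier}={counts[tier]}" for tier in TIERS)
    print(f"{area}: {breakdown} | expert-or-above: {experts}, no experience: {gaps}")
```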
A very similar approach can work for the team and organizational evaluations as part of a benchmarking exercise. The key is to make the individual grades as unarguable as possible, in such a way that the aggregated data across categories is interesting and shows the areas of outperformance and underperformance.
Here’s an example of some benchmark grading from a Brainlabs maturity audit:
The other way to make grading as unarguable as possible is to build the elements you can on objective quantitative data. For some sections of a benchmarking report, we will use custom tech to pull a range of data about the sites in question (and sometimes their competitors). By setting firm and fixed quantitative criteria in different areas, you can automate some of the data gathering and benefit from the objectivity that comes from quantitative data:
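As a rough illustration of what “firm and fixed quantitative criteria” can look like, here is a minimal sketch that turns crawl-style metrics into RAG grades. The metric names and cut-offs are assumptions for the example, not the actual criteria we use:

```python
# Illustrative thresholds for turning crawl metrics into RAG grades.
THRESHOLDS = {
    # metric: (green if >=, amber if >=) -- anything lower is red
    "pct_indexable_pages": (95.0, 85.0),
    "pct_pages_with_structured_data": (80.0, 50.0),
    "pct_urls_under_3_clicks_from_home": (90.0, 70.0),
}

def grade(metric: str, value: float) -> str:
    green, amber = THRESHOLDS[metric]
    if value >= green:
        return "green"
    if value >= amber:
        return "amber"
    return "red"

# Example values as they might come out of an automated crawl (made up).
site_metrics = {
    "pct_indexable_pages": 97.2,
    "pct_pages_with_structured_data": 42.0,
    "pct_urls_under_3_clicks_from_home": 88.5,
}

for metric, value in site_metrics.items():
    print(f"{metric}: {value} -> {grade(metric, value)}")
```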
One important detail to note when using objective data is that the weightings of the individual data points or the different sections may very well not be uniform. There is no reason to think that structured data is as important as indexability, for example. It’s crucial when gathering quantitative data to consider carefully how to summarise it, and how to present it to ensure that the focus remains on the area of priority rather than becoming a box-ticking exercise.
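One way to keep the summary from becoming a box-ticking exercise is to weight sections explicitly rather than averaging everything equally. A minimal sketch, with weights invented purely for illustration:

```python
# Section scores on a 0-100 scale (from the bucketing step above), and
# illustrative weights reflecting that not all sections matter equally.
section_scores = {"indexability": 90, "structured_data": 40, "site_speed": 70}
weights = {"indexability": 0.5, "structured_data": 0.2, "site_speed": 0.3}

weighted_score = sum(section_scores[s] * weights[s] for s in section_scores)
unweighted_mean = sum(section_scores.values()) / len(section_scores)

print(f"Weighted overall score: {weighted_score:.1f}")
print(f"Unweighted mean: {unweighted_mean:.1f}")
```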
Presentation tips
Some tips and tricks for presenting this kind of data:
- Highlighting is your friend – the key insight in benchmarking is that there is a lot of data, but the genuinely important pieces tend to be few and sparse. Pull them out visually and ensure that the narrative is crafted around them
- You can summarise good/bad data with tools like the following (a small code sketch of mapping scores to these symbols follows after this list):
  - Harvey balls (see below)
  - Traffic lights (sometimes called RAG for Red/Amber/Green)
  - Sparklines (for summarising multiple trends)
- This is an example of a Brainlabs presentation slide summarising performance with Harvey balls:
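If you are building the summary view in a spreadsheet or a plain deck rather than a design tool, a quick way to approximate Harvey balls and traffic lights is to map bucketed scores to Unicode symbols. A rough sketch, with score bands and areas made up for illustration:

```python
# Map a 0-4 bucket (worst to best) to a Harvey ball glyph and a RAG label.
HARVEY_BALLS = ["○", "◔", "◑", "◕", "●"]  # empty -> full
RAG = {0: "red", 1: "red", 2: "amber", 3: "green", 4: "green"}

def summarise_row(area: str, bucket: int) -> str:
    return f"{area:<12} {HARVEY_BALLS[bucket]}  ({RAG[bucket]})"

# Illustrative buckets for a handful of areas.
for area, bucket in [("Technical", 3), ("Content", 1), ("Analytics", 4)]:
    print(summarise_row(area, bucket))
```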
Getting to the right granularity
Most of this work is going to be delivered in some form of document, often alongside a presentation to stakeholders. As with all business writing, I believe the best structure is a hierarchical one that gives the overall “answer” upfront, supports that with evidence, and places the detail in appendices for the interested reader to consume at their leisure.
In the case of a benchmarking exercise, that means that the presentation should:
- Lead with the headline answer(s)
- Support that perspective with a story per high-level area
- Drill down into the high-level areas to the level of depth appropriate for the audience in the room
- Contain the raw row-level data in a separate appendix for review when you come to make a plan to address the shortcomings or develop the next phase of the plan
As an example, this kind of per-country drill-down might form part of the core deliverable for someone overseeing the set of country teams, but might only form part of the appendix for a senior management team wanting oversight of the SEO operation as a whole:
Specifics
There are a lot of similarities between benchmarking and due diligence, so I won’t repeat everything I wrote in performing an external (SEO) due diligence. Instead, I thought it would be useful to outline some of the headings and sections that I would consider including if I were to do a deep dive benchmark of an organic search operation:
The structure might look something like this:
- Team
  - Management / strategy / direction
  - Resourcing – including market comparison
  - Skillsets / specialisms / specialists
    - Technical
    - Content
    - Creative
    - Analytical
- Current performance – based on a balanced digital scorecard approach
- Strategy
  - Clarity – does the organization know what the strategy entails, is it clear, and can they articulate it?
  - Alignment – does the strategy map to organizational goals in a clear way, and are teams measured and incentivized appropriately?
  - Performance – how on-track is it, and what adjustments are already in motion, or might be needed, to bring it on track?
- Market – embedded above, but important enough to summarise in its own section
What have I missed? What else do you think is critical to include in SEO benchmarking? Drop me a line on Twitter to discuss.
What to do with the information
Actions will of course depend on the details of the discovery – you will need to do very different things if your benchmarking shows you are ahead and accelerating vs behind and falling further behind (or any of the other quadrants) – but the primary actions from a benchmarking perspective are:
- Gain buy-in – do key stakeholders agree with the underlying data? Are there areas of controversy or concern in the analysis of that data?
  - One example that we have used to good effect is to create a regular newsletter that summarises the findings and movements and is circulated to the appropriate group to ensure that there is a good cadence for reviewing and acting on the data.
- Ensure understanding – have you communicated it clearly, and to the right people? Is it clear to people at the appropriate levels of seniority what it says about their teams, departments, or operations?
- Build a process – whatever the actions to be taken are, they fall outside the scope of the benchmarking work itself. Depending on the relationship, the benchmarking deliverables could include an element of outlining a process for evaluating next steps, determining progress, and returning to re-benchmark specific crucial areas in the future. In many cases, we find ourselves re-evaluating benchmarks and tracking scores over time in order to be able to dig into changes (or lack of changes!) and understand what is going on in the SEO operation. A simple sketch of tracking scores between snapshots follows this list.
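For that re-benchmarking loop, it can help to keep scores in a simple time series so that movements between snapshots are easy to surface. A minimal sketch, with snapshot dates, areas, and scores invented for illustration:

```python
# Benchmark scores per area for successive snapshots (illustrative numbers).
snapshots = {
    "2023-Q1": {"Technical": 62, "Content": 70, "Team": 55},
    "2023-Q3": {"Technical": 71, "Content": 68, "Team": 60},
}

# Compare the two most recent snapshots and flag the direction of travel.
previous, latest = sorted(snapshots)[-2:]
for area in snapshots[latest]:
    delta = snapshots[latest][area] - snapshots[previous][area]
    direction = "up" if delta > 0 else ("down" if delta < 0 else "flat")
    print(f"{area}: {snapshots[previous][area]} -> {snapshots[latest][area]} ({direction} {abs(delta)})")
```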
Explaining changes and movements
A common request and challenge that I see from in-house teams is explaining changes in performance to the senior team and to the wider business. Benchmarking doesn’t directly solve that problem (for that, you could refer to how to diagnose SEO traffic drops, a flowchart for diagnosing ranking drops, and diagnosing traffic drops during a crisis).
What you should expect to do with benchmarking is to be able to explain changes in the benchmark over time. Hopefully, the combination of quantitative/objective data inputs, and the tips in “asking the right questions” makes this a fairly straightforward process.
If you would like help benchmarking your SEO operation, activities, or results, don’t hesitate to get in touch and be sure to join me at SearchLove London in October.