
Understanding click-through rate (CTR) in the context of search satisfaction

Click-through rate (CTR) has historically been an important factor in gauging the quality of results in information retrieval tasks.

In SEO, there has long been a notion that Google uses a metric called Time-To-Long-Click (TTLC), first noted in 2013 by AJ Kohn in this wonderful article.

Since then, Google has released several research papers that elaborate on the complexity of measuring search quality, given the evolving nature of search results pages.

Most notably:

  • Direct Answers
  • Positional bias
  • Expanding ad results
  • SERP features
  • SERP layout variations

All of these factors can have varying effects on how users interact with, and click (or don’t click) on, Google results for a query. Google no doubt has various click models that set out expectations for how users should click based on search type and position.

These expectations make it possible to identify outlier results, either above or below the expected curve, and help Google do a better job of satisfying users across all searches.

Search satisfaction

The reason this is important is that it can help us reframe our understanding of search result clicks away from CTR and TTLC and towards an understanding of search satisfaction.

Our web pages are just a potential part of the entire experience for users. Google released a publication in 2016 called Incorporating Clicks, Attention and Satisfaction into a Search Engine Result Page Evaluation Model.

This paper, along with accompanying code, uses clicks, user attention, and satisfaction to measure how well the results performed for the user and to predict user actions (a required feature of any click model).

The paper goes on to explain that this model is most useful for long-tail informational searches, because “while a small number of head queries represent a big part of a search engine’s traffic, all modern search engines can answer these queries quite well.” (Citation)

Generally, the model looks at:

  • Attention: A model that looks at rank, SERP item type, and the element’s location on the page, in conjunction with click, mouse movement, and satisfaction labels.
  • Clicks: A click probability model which takes into account SERP position and the knowledge that a result must have been seen to have been clicked.
  • Satisfaction: A model that uses search quality ratings along with user interaction with the various search elements to define the overall utility to the user of the page.
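The interplay between the attention and click components can be illustrated with a toy position-based click model: a result must be examined before it can be clicked, so the probability of a click is the probability of examination at that rank times the result’s attractiveness. This is a minimal sketch for intuition only; the `EXAM_PROB` values and the `click_probability` helper are invented, not Google’s numbers.

```python
# Toy position-based click model. Examination probabilities and
# attractiveness values here are invented for illustration.
EXAM_PROB = {1: 0.95, 2: 0.75, 3: 0.55, 4: 0.40, 5: 0.30}

def click_probability(rank: int, attractiveness: float) -> float:
    """P(click) = P(result examined at this rank) * P(click | examined)."""
    return EXAM_PROB.get(rank, 0.10) * attractiveness

# A very attractive result at rank 3 can out-click a mediocre rank 1:
print(click_probability(3, 0.9))  # ≈ 0.495
print(click_probability(1, 0.4))  # ≈ 0.38
```

Note how positional bias is baked in: identical snippets earn fewer clicks simply by sitting lower on the page, which is why raw CTR alone is a poor satisfaction signal.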

Are clicks really needed?

The most interesting aspect of this research is the concept that a search result does not actually need to receive a click to be useful.

Users may receive their answer from the search results and not require clicking through to a result, although the paper mentioned that, “while looking at the reasons specified by the raters we found out that 42% of the raters who said that they would click through on a SERP, indicated that their goal was ‘to confirm information already present in the summary.’” (Citation)

Another interesting (and obvious) takeaway across multiple research papers is the importance of quality raters’ data in training models to predict search satisfaction.

None of this should be taken to assume that there is a direct impact on how clicks, attention, or other user-generated metrics affect search results. There have been a number of SEO tests with mixed results that tried to prove click impact on ranking.

At most, there seems to be a temporary lift, if any at all. What this suggests is that, as an evaluation metric, this type of model could be used to train internal systems that predict the ideal position of search results.

Click models

Aleksandr Chuklin, a Software Engineer at Google Research Europe and expert in Information Retrieval, published a paper and accompanying website in 2015 that evaluates various click models for web search.

The paper is interesting because it surveys the various models and underlines their respective strengths and weaknesses. A few things of interest:

Models can:

  • Look at all results as equal.
  • Look at only results that would have been reviewed (top to bottom).
  • Look at multi-click single session instances.
  • Look at “perseverance” after a click (TTLC).
  • Look at the distance between current click and the last clicked document to predict user SERP browsing.
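One of the simpler behaviors above, examining results top-to-bottom, is captured by the cascade family of models that Chuklin’s work surveys: the user scans down the page and the probability of clicking result *i* is its relevance times the probability that no earlier result was clicked. A minimal sketch, with made-up relevance values:

```python
def cascade_click(relevances):
    """Cascade model: the user scans results top-to-bottom, and the
    probability of clicking result i is r_i times the probability
    that no earlier result was clicked."""
    probs = []
    not_clicked_yet = 1.0
    for r in relevances:
        probs.append(not_clicked_yet * r)
        not_clicked_yet *= (1.0 - r)
    return probs

# Hypothetical per-result relevances for a five-result SERP.
probs = cascade_click([0.6, 0.3, 0.5, 0.2, 0.1])
print(probs)  # click probability decays sharply below the first result
```

Even this toy version shows why position matters so much: a moderately relevant result buried at position four gets few clicks simply because most sessions end before the user reaches it.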

In addition, this offers some intuition that click models can help Google beyond measuring search satisfaction, by helping it understand the type of search.

Navigational queries are the most common queries in Google, and click models can be used to distinguish navigational queries from informational and transactional ones. The click-through rate for navigational queries is more predictable than for the latter two.

Wrapping up

Understanding click models and how Google uses them to evaluate the quality of search results can help us, as SEOs, understand variations in CTR when reviewing Google Search Console and Search Analytics data.

We often see that brand terms have a CTR of 60 to 70 percent (navigational), and that some results we may be ranking well for receive fewer clicks than expected. Paul Shapiro looked into this in 2017 in a post that used a metric (the modified z-score) to flag outliers in CTR as reported in Google Search Console.
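For readers who want to reproduce that kind of analysis, the modified z-score is based on the median absolute deviation (MAD), which is more robust to outliers than the usual mean/standard-deviation version. A short sketch; the CTR values are hypothetical:

```python
import statistics

def modified_z_scores(values):
    """Modified z-score: 0.6745 * (x - median) / MAD, where MAD is
    the median absolute deviation of the values."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return [0.0 for _ in values]
    return [0.6745 * (v - med) / mad for v in values]

# Hypothetical CTRs (%) for queries ranking in similar positions.
ctrs = [3.1, 2.8, 3.4, 2.9, 3.0, 12.5, 3.2]
scores = modified_z_scores(ctrs)

# A common convention treats |score| > 3.5 as an outlier.
outliers = [c for c, z in zip(ctrs, scores) if abs(z) > 3.5]
print(outliers)  # → [12.5]
```

Queries flagged this way, well above or below the expected curve for their position, are the ones worth investigating for SERP features, brand intent, or snippet problems.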

Along with tools like this, it is important to understand that Google has come a long way since ten blue links, and that many things beyond a compelling title tag have an impact on clicks.

Having established the importance of search satisfaction to Google, is there anything that SEOs can do to optimize for it?

  • Be aware that investigating whether CTR directly affects search is probably a rabbit hole: even if it did, the impact would more than likely be on longer tail non-transactional searches.
  • Google wants to give their users a great experience. Your listing is just a part of that – so make sure you add to the experience.
  • Make sure you understand the Search Quality Evaluator Guidelines. How your site is designed, written, and developed can strongly affect how Google judges your expertise, authority, and trust.

JR Oakes is the Director of Technical SEO at Adapt Partners.
