All posts by Rebecca Sentance

Why SEOs can’t afford to wait around for a mobile-first index

We’re often told that the web is increasingly mobile, and that it is imperative for businesses to adapt their marketing strategies to be ‘mobile-first’ in order to capitalize on this shift in internet behavior.

But just how mobile is the web in 2017, and what does this mean for search?

Leading SEO and content performance platform BrightEdge today released a new report which sheds light on this question, and on the steadily widening gap between mobile and desktop search.

I spoke to Erik Newton, VP of Customer Marketing and Head of SEO at BrightEdge, about the report’s findings, Google’s mobile-first index tests, and how SEOs can adapt their strategy to account for the increasing divergence between desktop and mobile.

Majority mobile: 57% of web traffic now comes from mobile and tablet devices

In one of the key findings of the research, BrightEdge reports that 57% of web traffic now originates from mobile and tablet devices – meaning that close to 6 out of every 10 consumers are using a mobile device. Businesses that still aren’t optimizing for mobile are therefore ignoring a decisive majority of potential customers.

Even more noteworthy is the finding that the same query on the same search engine generates a different rank on mobile and desktop 79% of the time.

Among the top 20 ranked results, the gap is less pronounced, with 47% of queries differing between devices – but this still means that close to half of rankings differ.

And 35% – more than a third – of the time, the first page that ranked for any given domain was different between mobile and desktop SERPs.

In a press release about the research, BrightEdge commented that these figures indicate a “significant shift to a new mobile-first index”. I asked Erik Newton whether this means that BrightEdge believes Google’s mobile-first index is already being rolled out. Most SEOs believe we are still awaiting the official launch of the new index, but is BrightEdge seeing otherwise?

“We are seeing a divergence of rank and content between the two devices, and we have seen the data move in both directions over the last few months,” says Newton. “We believe that Google is testing and calibrating, as they have with other major shifts, to prepare for the separate mobile index.”

This fits with Google’s usual M.O. around big algorithm updates, but it also means that whatever strategies SEOs are planning to deploy when the mobile-first index finally rolls around, now might be the time to start testing them.

And those who are still biding their time may already be losing out.

How are businesses really doing on mobile?

In the marketing industry, we’ve been talking for what feels like years, with increasing urgency, about the need for our campaigns and our web presences to be mobile-friendly. Or mobile-responsive. Or mobile-first.

But how are businesses really doing with this? Are marketers doing enough, even in 2017, to optimize for mobile?

“For most of the businesses that grew up on desktop, we see them using a desktop frame of reference,” observes Erik Newton. “We see evidence of this tendency in web design, page performance, analytics, and keyword tracking.

“We believe that Google gives the market signals to move forward and toward mobile faster. This is one of those times to push harder on mobile.

“Some of the newer companies, however, are mobile-first and even mobile-only. They are more likely to be app-based, and have always had majority mobile share.”

As we’ve seen from the figures cited in the previous section, using desktop as a frame of reference is increasingly short-sighted given the widening gap between desktop and mobile rankings. But how, then, should marketers plan their search strategy to cater to an increasing disparity between the two?

Should they go so far as to split their SEO efforts and cater to each separately? Or is there a way to kill two birds with one stone?

“The research report has some specific recommendations,” says Newton:

1. Identify and differentiate mobile versus desktop demand.
2. Design and optimize websites for speed and mobile-friendliness.
3. Use a responsive site unless your business is app-based and large enough to build traffic through app distribution.
4. Understand different online consumer intent signals across desktop and mobile devices.
5. Produce separate mobile and desktop content that resonates on multiple device types.
6. Focus on optimizing mobile content and mobile pages to improve conversions.
7. Track, compare, and report mobile and desktop share of traffic continuously.
8. Measure and optimize the page load speed of the mobile and desktop sites separately.
9. Track your organic search rank for mobile and desktop separately.

“The first challenge is to be equally attentive to both mobile and desktop. We find that many brands are not acutely aware of the basic stat of mobile share of traffic.

“Additionally, brands can analyze the mobile share among new visitors, or non-customers, to see what kind of a different role it can play for people at different stages of the customer journey. For example, my mobile traffic is 32% higher among new visitors than overall visitors, and my mobile-blog-non-customer is 58% higher. That’s a place I should be leaning in on mobile when communicating to non-customers.

“Brands do not need to split their SEO efforts, but they do need to decide that some content efforts be mobile-first to be competitive.”
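
Recommendation nine above lends itself to a quick illustration. Below is a minimal sketch – assuming a rank-tracking export keyed by keyword, with one tracked rank per device, and entirely hypothetical names and numbers – of how you might measure the kind of mobile/desktop divergence BrightEdge reports:

```typescript
// Hypothetical sketch: compare tracked organic ranks for the same keyword
// on desktop and mobile, and report the share of keywords that diverge.
interface RankRecord {
  keyword: string;
  desktopRank: number;
  mobileRank: number;
}

function shareOfDivergingRanks(records: RankRecord[]): number {
  if (records.length === 0) return 0;
  const diverging = records.filter((r) => r.desktopRank !== r.mobileRank);
  return diverging.length / records.length;
}

const sample: RankRecord[] = [
  { keyword: "red shoes", desktopRank: 3, mobileRank: 5 },
  { keyword: "scarlet sandals", desktopRank: 7, mobileRank: 7 },
];

console.log(`${(shareOfDivergingRanks(sample) * 100).toFixed(0)}% of ranks differ`);
// => "50% of ranks differ"
```

Tracked over time, a number like this acts as an early-warning signal: the closer it creeps toward BrightEdge’s 79%, the less safe it is to treat desktop rankings as a proxy for mobile.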

It can be difficult for brands who have traditionally catered to desktop users and who are still seeing success from a desktop-focused strategy to break away from this mindset and take a gamble on mobile. However, the figures are convincing.

What’s most evident is that it isn’t enough for SEOs and marketers to wait around for the launch of Google’s mobile-first index: it’s already being tested, and given the growing proportion of mobile web traffic, brands that wait to develop a mobile-first strategy are increasingly likely to miss out.

A world without “(not provided)”: How unlocking organic keyword data leads to a better web

Beginning in 2011, search marketers began to lose visibility over the organic keywords that consumers were using to find their websites, as Google gradually switched all of its searches over to secure search using HTTPS.

As it did so, the organic keyword data available to marketers in Google Analytics and other analytics platforms was slowly replaced by “(not provided)”. By 2014, the (not provided) issue was estimated to affect 80-90% of organic traffic, representing a massive loss of visibility for search marketers and website owners.

Marketers have gradually adjusted to the situation, and most have developed rough workarounds or ways of guessing what searches are bringing customers to their site. Even so, there’s no denying that having complete visibility over organic keyword data once more would have a massive impact on the search industry – as well as benefits for SEO.

One company believes that it has found the key to unlocking “(not provided)” keyword data. We spoke to Daniel Schmeh, MD and CTO at Keyword Hero, a start-up which has set out to solve the issue of “(not provided)”, and ‘Wizard of Moz’ Rand Fishkin, about how “(not provided)” is still impacting the search industry in 2017, and what a world without it might look like.

Content produced in association with Keyword Hero.

“(not provided)” in Google Analytics: How does it impact SEO?

“The “(not provided)” keyword data issue is caused by Google the search engine, so that no analytics program, Google Analytics included, can get the data directly,” explains Rand Fishkin, founder and former CEO of Moz.

“Google used to pass a referrer string when you performed a web search with them that would tell you – ‘This person searched for “red shoes” and then they clicked on your website’. Then you would know that when people searched for “red shoes”, here’s the behavior they showed on your website, and you could buy ads against that, or choose how to serve them better, maybe by highlighting the red shoes on the page better when they land there – all sorts of things.”

“You could also do analytics to understand whether visitors for that search were converting on your website, or whether they were having a good experience – those kinds of things.

“But Google began to take that away around 2011, and their reasoning behind it was to protect user privacy. That was quickly debunked, however, by folks in the industry, because Google provides that data with great accuracy if you choose to buy ads with them. So there’s obviously a huge conflict of interest there.

“I think the assumption at this point is that it’s just Google throwing their weight around and being the behemoth that they can be, and saying, ‘We don’t want to provide this data because it’s too valuable and useful to potential competitors, and people who have the potential to own a lot of the search ranking real estate and have too good of an idea of what patterns are going on.’

“I think Google is worried about the quality and quantity of data that could be received through organic search – they’d prefer that marketers spend money on advertising with Google if they want that information.”
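
The mechanism Fishkin describes was straightforward: before secure search, a visit from Google arrived with a referrer URL such as http://www.google.com/search?q=red+shoes, and analytics tools simply read the q parameter. A minimal sketch of that now-defunct parsing step (the example values are hypothetical):

```typescript
// Hypothetical sketch of pre-secure-search keyword extraction: pull the
// query out of the q parameter of a Google referrer URL. HTTPS search
// strips this, which is why analytics now reports "(not provided)".
function keywordFromReferrer(referrer: string): string | null {
  try {
    const url = new URL(referrer);
    if (!url.hostname.includes("google.")) return null;
    return url.searchParams.get("q"); // null once Google stopped sending it
  } catch {
    return null; // not a parseable URL
  }
}

console.log(keywordFromReferrer("http://www.google.com/search?q=red+shoes"));
// => "red shoes"
console.log(keywordFromReferrer("https://www.google.com/"));
// => null – the post-HTTPS referrer carries no query
```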

Where Google goes, its closest competitors are sure to follow, and Bing and Yandex soon followed suit. By 2013, the search industry was experiencing a near-total eclipse of visibility over organic keyword data, and found itself having to simply deal with the consequences.

“At this point, most SEOs use the data of which page received the visit from Google, and then try to reverse-engineer it: what keywords does that page rank for? Based on those two points, you can sort of triangulate the value you’re getting from visitors from those keywords to this page,” says Fishkin.
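
That triangulation can be sketched in a few lines. Assuming a landing-page sessions export from an analytics tool and per-page keyword estimates from a rank tracker – all of the names and numbers below are hypothetical – the join looks something like this:

```typescript
// Hypothetical sketch of the common workaround: join landing-page session
// counts with the keywords each page is known to rank for, and spread the
// sessions across those keywords in proportion to their estimated share.
const sessionsByPage: Record<string, number> = {
  "/red-shoes": 1200, // organic sessions, per an analytics export
};

const rankingsByPage: Record<string, { keyword: string; share: number }[]> = {
  "/red-shoes": [
    { keyword: "red shoes", share: 0.7 }, // share estimated from rank and volume
    { keyword: "buy red shoes", share: 0.3 },
  ],
};

for (const [page, sessions] of Object.entries(sessionsByPage)) {
  for (const { keyword, share } of rankingsByPage[page] ?? []) {
    console.log(`${page}: ~${Math.round(sessions * share)} visits for "${keyword}"`);
  }
}
// => "/red-shoes: ~840 visits for "red shoes"", and so on
```

It is guesswork, as Fishkin says, and its accuracy degrades when a page ranks for many long-tail queries that no rank tracker monitors.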

However, data analysis and processing have come a long way since 2011, or even 2013. One start-up believes that it has found the key to unlocking “(not provided)” keyword data and giving marketers back visibility over their organic keywords.

How to unlock “(not provided)” keywords in Google Analytics

“I started out as an SEO, first in a publishing company and later in ecommerce companies,” says Daniel Schmeh, MD and CTO of SEO and search marketing tool Keyword Hero, which aims to provide a solution to “(not provided)” in Google Analytics. “I then got into PPC marketing, building self-learning bid management tools, before finally moving into data science.

“So I have a pretty broad understanding of the industry and ecosystem, and was always aware of the “(not provided)” problem.

“When we then started buying billions of data points from browser extensions for another project that I was working on, I thought that this must be solvable – more as an interesting problem to work on than a product that we wanted to sell.”

Essentially, Schmeh explains, solving the problem of “(not provided)” is a matter of getting access to the data and engineering around it. Keyword Hero uses a wide range of data sources to deduce the organic keywords hidden behind the screen of “(not provided)”.

“In the first step, the Hero fetches all our users’ URLs,” says Schmeh. “We then use rank monitoring services – mainly other SEO tools and crawlers – as well as what we call “cognitive services” – among them Google Trends, Bing Cognitive Services, Wikipedia’s API – and Google’s Search Console, to compute a long list of possible keywords per URL, and a first estimate of their likelihood.

“All these results are then tested against real, hard data that we buy from browser extensions.

“This info will be looped back to the initial deep learning algorithm, using a variety of mathematical concepts.”

Ultimately, the process used by Keyword Hero to obtain organic keyword data is still guesswork, but very advanced guesswork.

“All in all, the results are pretty good: in 50 – 60% of all sessions, we attribute keywords with 100% certainty,” says Schmeh.

“For the remainder, at least 83% certainty is needed, otherwise they’ll stay (not provided). For most of our customers, 94% of all sessions are matched, though in some cases we need a few weeks to get to this matching rate.”
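
Keyword Hero hasn’t published its algorithm, but the thresholds Schmeh quotes imply matching logic along these lines – a minimal sketch, with hypothetical candidate keywords and scores:

```typescript
// Hypothetical sketch of threshold-based attribution: assign a session to
// its most likely candidate keyword only when the model's certainty clears
// the bar; otherwise leave the session as "(not provided)".
interface Candidate {
  keyword: string;
  probability: number; // model's estimated likelihood for this session
}

function attributeSession(candidates: Candidate[], threshold = 0.83): string {
  if (candidates.length === 0) return "(not provided)";
  const best = candidates.reduce((a, b) => (b.probability > a.probability ? b : a));
  return best.probability >= threshold ? best.keyword : "(not provided)";
}

console.log(
  attributeSession([
    { keyword: "red shoes", probability: 0.95 },
    { keyword: "buy shoes online", probability: 0.05 },
  ])
); // => "red shoes"
```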

If the issue of “(not provided)” organic keywords has been around since 2011, why has it taken us this long to find a solution that works? Schmeh believes that Keyword Hero has two key advantages: a scientific approach to search, and far greater data processing power than was available six years ago.

“We have a very scientific approach to SEO,” he says.

“We have a small team of world-class experts, mostly from Fraunhofer Institute of Technology, that know how to make sense of large amounts of data. Our background in SEO and the fact that we have access to vast amounts of data points from browser extensions allowed us to think about this as more of a data science problem, which it ultimately is.

“Processing the information – the algorithm and its functionalities – would have worked back in 2011, too, but the limiting factor is our capability to work with these extremely large amounts of data. Just uploading the information back into our customers’ accounts would take 13 hours on AWS’s [Amazon Web Services] largest instance, the X1 – something we could never afford.

“So we had to find other cloud solutions – ending up with things that didn’t exist even a year ago.”

A world without “(not provided)”: How could unlocking organic keyword data transform SEO?

If marketers and website owners could regain visibility over their organic keywords, this would obviously be a huge help to their efforts in optimizing for search and planning a commercial strategy.

But Rand Fishkin also believes it would have two much more wide-reaching benefits: it would help to prove the worth of organic SEO, and would ultimately lead to a better user experience and a better web.

“Because SEO has such a difficult time proving attribution, it doesn’t get counted and therefore businesses don’t invest in it the way they would if they could show that direct connection to revenue,” says Fishkin. “So it would help prove the value, which means that SEO could get budget.

“I think the thing Google is most afraid of is that some people would see that they rank organically well enough for some keywords they’re bidding on in AdWords, and ultimately decide not to bid anymore.

“This would cause Google to lose revenue – but of course, many of these websites would save a lot of money.”

And in this utopian world of keyword visibility, marketers could channel that revenue into better targeting the consumers whose behavior they would now have much higher-quality insights into.

“I think you would see more personalization and customization on websites – so for example, earlier I mentioned a search for ‘red shoes’ – if I’m an ecommerce website, and I see that someone has searched for ‘red shoes’, I might actually highlight that text on the page, or I might dynamically change the navigation so that I had shades of red inside my product range that I helped people discover.

“If businesses could personalize their content based on the search, it could create an improved user experience and user performance: longer time on site, lower bounce rate, higher engagement, higher conversion rate. It would absolutely be better for users.

“The other thing I think you’d see people doing is optimizing their content efforts around keywords that bring valuable visitors. As more and more websites optimized for their unique search audience, you would generally get a better web – some people are going to do a great job for ‘red shoes’, others for ‘scarlet sandals’, and others for ‘burgundy sneakers’. And as a result, we would have everyone building toward what their unique value proposition is.”

Daniel Schmeh adds that unlocking “(not provided)” keyword data has the ability to make SEO less about guesswork and more substantiated in numbers and hard facts.

“Just seeing simple things, like how users convert that use your brand name in their search phrase versus those who don’t, has huge impact on our customers,” he says. “We’ve had multiple people telling us that they have based important business decisions on the data.

“Seeing thousands of keywords again is very powerful for the more sophisticated, data-driven user, who is able to derive meaningful insights; but we’d really like the Keyword Hero to become a standard tool. So we’re working hard to make this keyword data accessible and actionable for all of our users, and will soon be offering features like keyword clustering – all through their Google Analytics interface.”

To find out more about how to unlock your “(not provided)” keywords in Google Analytics, visit the Keyword Hero website.

Five important updates to Google semantic search you might have missed

What is semantic search? Broadly speaking, it’s a term that refers to a move towards more accurate search results by using various methods to better understand the intent and context behind a search.

Or as Alexis Sanders very eloquently explained it on the Moz Blog,

“The word “semantic” refers to the meaning or essence of something. Applied to search, “semantics” essentially relates to the study of words and their logic. Semantic search seeks to improve search accuracy by understanding a searcher’s intent through contextual meaning. […] Semantic search brings about an enhanced understanding of searcher intent, the ability to extract answers, and delivers more personalized results.”

Google is constantly making tweaks and changes to its documentation and features linked to semantic search. Many of these involve things like structured data and Schema.org, rich results, Knowledge Graph and so on, and the vast majority go unannounced and unnoticed – even though they can make a significant difference to the way we interact with search.

But there are some eagle-eyed members of the search community who keep tabs on changes to semantic search, and let the rest of us know what’s up. To aid in those efforts, I’m rounding up five recent important changes to semantic search on Google that you might not have noticed.

100% of the credit for these observations goes to the Semantic Search Marketing Google+ group (and specifically its founder Aaron Bradley), which is my source for all the latest news and updates on semantic search. If you want to stay in the loop, I highly recommend joining.

Videos and recipes are now accessible via image search

Earlier this week, Google made a telling addition to its documentation for videos, specifying that video rich results will now display in image search on mobile devices, “providing users with useful information about your video.”

A mobile image search for a phrase like “Daily Show Youtube” (okay, that one’s probably not going to happen organically, but I wanted to make the feature work) will fetch video thumbnails in among the grid of regular image results, which when selected, unfold into something like this:

You then need to select “Watch” or the title of the video to be taken to the video itself. (Selecting the image will only bring up the image in fullscreen and won’t redirect you to the video). So far, video rich results from YouTube and Wistia have been spotted in image search.

Google’s documentation for recipes also now features a similar addition: “Rich results can also appear in image search on mobile devices, providing users with useful information about your recipe.” So now you can do more than just stare at a mouthwatering picture of a lasagna in image search – you might be able to find out how it’s made.

Google’s documentation gives instructions on how to mark up your videos and recipes correctly, so that you can make sure your content gets pulled through into image search.
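
For video, that markup typically takes the form of schema.org VideoObject properties embedded as JSON-LD. Here is a minimal, hedged sketch – the names and URLs are placeholders, and Google’s documentation remains the authority on which properties are required:

```typescript
// Hypothetical sketch: build a minimal schema.org VideoObject and inject it
// into the page as a JSON-LD script tag.
const videoMarkup = {
  "@context": "https://schema.org",
  "@type": "VideoObject",
  name: "How to make lasagna",
  description: "A step-by-step lasagna recipe video.",
  thumbnailUrl: "https://example.com/thumbs/lasagna.jpg",
  uploadDate: "2017-06-01",
  contentUrl: "https://example.com/videos/lasagna.mp4",
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(videoMarkup);
document.head.appendChild(script);
```

In practice most sites serve this block in the static HTML rather than injecting it client-side, but either way it is the structured data that lets Google assemble the video rich result.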

Rich cards are no more

RIP, rich cards. The term, introduced by Google in May 2016 to describe the, well, card-style rich results that appear for specific searches, has now been removed from Google Developers.

As identified by Aaron Bradley, Google has made changes to its ‘Mark Up Your Content Items’ page on Google Developers to remove references to “rich cards”. In most places, these have been changed to refer to “rich results”, the family of results which includes things like rich cards, rich snippets and featured snippets.

There’s no information as to why Google decided to retire the term; I think it’s usefully descriptive, but maybe Google decided there was no point making an arbitrary distinction between a “card” and a “non-card” rich result.

It may also have been aiming to slim down the number of similar-sounding terms it uses to describe search results with the addition of “enriched search results” to the mix – more on that later.

Google launches structured data-powered job postings in search results

Google has added another item to the list of things that will trigger a rich result in search: job postings.

This change was prefigured by the addition of a Jobs tab to Google’s ‘Early Access and partner-only features’ page, which is another good place to keep an eye out for upcoming developments in search.

Google also hinted at the addition during this year’s Google I/O, when it announced the launch of a new initiative called ‘Google for Jobs’. In a lengthy blog post published on the first day of the conference, Google CEO Sundar Pichai explained the advent of Google for Jobs as forming part of Google’s overall efforts towards “democratizing access to information and surfacing new opportunities”, tying it in with Google’s advances in AI and machine learning.

“For example, almost half of U.S. employers say they still have issues filling open positions. Meanwhile, job seekers often don’t know there’s a job opening just around the corner from them, because the nature of job posts—high turnover, low traffic, inconsistency in job titles—have made them hard for search engines to classify. Through a new initiative, Google for Jobs, we hope to connect companies with potential employees, and help job seekers find new opportunities.”

The new feature, which is U.S.-only for the time being, is being presented as an “enriched search experience”, which is another one of Google’s interesting new additions to semantic search that I’ve explored in full below.

And in a neat tie-in, reviews of employers are now due to be added in schema.org 3.3, including both individual text reviews and aggregate ratings of organizations in their role as employer.

Google introduces new “enriched search results”

Move over rich results – Google’s got an even better experience now. Introducing “enriched search results”, a “more interactive and enhanced class of rich results” being made available across Google.

How long have enriched search results been around? SEO By the Sea blogged about a Google patent for enriched search results as far back as 2014, and followed up with a post in 2015 exploring ‘enriched resources’ in more detail.

However, in the 2014 post Bill Slawski specifically identifies things like airline flights, weather inquiries and sports scores as triggering an enriched result, whereas in its Search Console Help topic on enriched search results, Google specifies that this experience is linked to job postings, recipes and events only.

According to Google:

“Enriched search results often include an immersive popup experience or other advanced interaction feature.”

Google also specifies that “Enriched search enables the user to search across the various properties of a structured data item; for instance, a user might search for chicken soup recipes under 200 calories, or recipes that take less than 1 hour of preparation time.”

Judging by this quote, enriched search results are a continuation of Google’s overall strategy to achieve two things: interpret and respond to more in-depth search queries, and make the SERP more of a one-stop-shop for anything that a searcher could need.

We’ve seen Google increasingly add interactive features to the SERP like new types of rich result, and Google Posts, while also improving its ability to interpret user intent and search context. (Which, as we established earlier, is the goal of semantic search). So in the recipe example given above, a user would be able to search for chicken soup recipes with under 200 calories, then view and follow the recipe in a pop-up, all without needing to click through to a recipe website.
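
The chicken soup example also shows why structured data is the enabler here: once calories and preparation time are exposed as typed properties rather than free text, a search layer can filter on them directly. A toy illustration, with entirely hypothetical data:

```typescript
// Hypothetical sketch: structured recipe data makes property-level queries
// (e.g. "chicken soup recipes under 200 calories") a simple filter.
interface Recipe {
  name: string;
  calories: number;
  prepMinutes: number;
}

const recipes: Recipe[] = [
  { name: "Classic chicken soup", calories: 180, prepMinutes: 45 },
  { name: "Creamy chicken soup", calories: 320, prepMinutes: 50 },
];

const matches = recipes.filter((r) => r.calories < 200 && r.prepMinutes < 60);
console.log(matches.map((r) => r.name)); // => ["Classic chicken soup"]
```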

Needless to say, this could be bad news for website traffic and click-throughs – even more than featured snippets, answer boxes, the knowledge graph, quick answers and other rich results already are.

Google makes a whole host of changes to its structured data developer guides

Finally, Google has made a wide-ranging set of changes to its structured data developer guides. I recommend reading Aaron Bradley’s post to Semantic Search Marketing for full details, but here are some highlights:

  • Guides are now classified as covering the following topics: structured data, AMP, and mobile-friendly design
  • Structured data has a new definition: it is now defined by Google as “a standardized format for providing information about a page and classifying the page content.” The old definition called it “a text-based organization of data that is included in a file and served from the web.” This one definitely seems a little clearer.
  • Twice as many items now listed under “Technical guidelines”, including an explanation of what to do about duplicate content
  • There is now less emphasis on the Structured Data Testing Tool, and more on post-publication analysis and testing – perhaps Google is trying to get users to do more of their own work on structured data markup, rather than relying on Google’s tool?
  • All content types are now eligible to appear in a carousel.

If you enjoyed this post, don’t miss Clark Boyd’s exploration of what semantic search means today in the wider context of the industry: ‘Semantic Search: What it means for SEO in 2017’.

What we learned from SEO: The Movie

Have you ever wished for a nostalgic retrospective on the heyday of SEO, featuring some of the biggest names in the world of search, all condensed into a 40-minute video with an admittedly cheesy title?

If so, you’re in luck, because there’s a documentary just for you: it’s called SEO: The Movie.

The trailer for SEO: The Movie

SEO: The Movie is a new documentary, created by digital marketing agency Ignite Visibility, which explores the origin story of search and SEO, as told by several of its pioneers. It’s a 40-minute snapshot of the search industry that is and was, focusing predominantly on its rock-and-roll heyday, with a glimpse into the future and what might become of SEO in the years to come.

The movie is a fun insight into where SEO came from and who we have to thank for it, but some of its most interesting revelations are contained within stories of the at times fraught relationship between Google and SEO consultants, as well as between Google and business owners who depended on it for their traffic. For all that search has evolved since Google was founded nearly two decades ago, this tension hasn’t gone away.

It was also interesting to hear some thoughts about what might become of search and SEO several years down the line from those who’d been around since the beginning – giving them a unique insight into the bigger picture of how search has changed, and is still changing.

So what were the highlights of SEO: The Movie, and what did we learn from watching it?

The stars of SEO

The story of SEO: The Movie is told jointly by an all-star cast of industry veterans from the early days of search and SEO (the mid-90s through to the early 2000s), with overarching narration by John Lincoln, the CEO of Ignite Visibility.

There’s Danny Sullivan, the founder of Search Engine Watch (this very website!) and co-founder of Search Engine Land; Rand Fishkin, the ‘Wizard of Moz’; Rae Hoffman, a.k.a. ‘Sugarrae’, CEO of PushFire and one of the original affiliate marketers; Brett Tabke, founder of Pubcon and Webmaster World; Jill Whalen, the former CEO of High Rankings and co-founder of Search Engine Marketing New England; and Barry Schwartz, CEO of RustyBrick and founder of Search Engine Roundtable.

The documentary also features a section on former Google frontman Matt Cutts, although Cutts himself doesn’t appear in the movie in person.

Each of them tells the tale of how they came to the search industry, which is an intriguing insight into how people became involved in such an unknown, emerging field. While search and SEO turned over huge amounts of revenue in the early days – Lincoln talks about “affiliates who were making millions of dollars a year” by figuring out how to boost search rankings – there was still relatively little known about the industry and how it worked.

Danny Sullivan, for instance, was a newspaper journalist who made the leap to web development in 1995, and began writing about search “just because [he] really wanted to get some decent answers to questions about how search engines work”.

Jill Whalen came to SEO through a parenting website she set up, after she set out to bring more traffic to her website through search engines and figured out how to use keywords to make her site rank higher.

Rae Hoffman started out in the ‘long-distance space’, making modest amounts from ranking for long-distance terms, before she struck gold by creating a website for a friend selling diet pills which ranked in the top 3 search results for several relevant search terms.

“That was probably my biggest ‘holy shit’ moment,” she recalls. “My first commission check for the first month of those rankings was more than my then-husband made in a year.”

Rand Fishkin, the ‘Wizard of Moz’, relates the heart-rending story of how he and his mother initially struggled with debt in the early 2000s when Moz was still just a blog, before getting his big break at the Search Engine Strategies conference and signing his first major client.

The stories of these industry pioneers give an insight into the huge, growing, world-changing phenomenon that was SEO in the early days, back when Google, Lycos, Yahoo and others were scrambling to gain the biggest index, and Google would “do the dance” every five to eight weeks and update its algorithms, giving those clever or lucky enough to rank high a steady stream of income until the next update.

Google’s algorithm updates have always been important, but as later sections of the documentary show, certain updates had a disproportionate impact on businesses – an impact which Google perhaps should have done more to mitigate.

Google and webmasters: It’s complicated

“Larry [Page] and Sergey [Brin] were fairly antagonistic to SEOs,” Brett Tabke recalls. “The way I understood it, Matt [Cutts] went to Larry and said… ‘We need to have an outreach program for webmasters.’ He really reached out to us and laid out the welcome mat.”

Almost everyone in the search industry knows the name of Matt Cutts, the former head of Google’s webspam team who was, for many years, the public face of Google. Cutts became the go-to source of information on Google updates and algorithm changes, and could generally be relied upon to give an authoritative explanation of what was affecting websites’ ranking changes and why.

Matt Cutts in an explanatory video for Google Webmasters

However, even between Matt Cutts and the SEO world, things weren’t all sunshine and roses. Rand Fishkin reveals in SEO: The Movie how Cutts would occasionally contact him and request that he remove certain pieces of information, or parts of tools, that he deemed too revealing.

“We at first had a very friendly professional relationship, for several years,” he recollects. “Then I think Matt took the view that some of the transparency that I espoused, and that we were putting out there on Moz, really bothered him, and bothered Google. Occasionally I’d get an email from him saying, ‘I wish you wouldn’t write about this… I wish you wouldn’t invite this person to your conference…’ And sometimes stronger than that, like – ‘You need to remove this thing from your tool, or we will ban you.’”

We’ve written previously about the impact of the lack of transparency surrounding Google’s algorithm updates and speculated whether Google owes it to SEOs to be more honest and accountable. The information surrounding Google’s updates has become a lot murkier since Matt Cutts left the company in 2014 (while Cutts didn’t formally resign until December 2016, he was on leave for more than two years prior to that), leaving Google without a clear spokesperson.

But evidently, even during Cutts’ tenure with Google, Google had a transparency problem.

In the documentary, Fishkin recalls the general air of mystery that surrounded the workings of search engines in the early days, with each company highly protective of its secrets.

“The search engines themselves – Google, Microsoft, Yahoo – were all incredibly secretive about how their algorithms worked, how their engines worked… I think that they felt it was sort of a proprietary trade secret that helped them maintain a competitive advantage against one another. As a result, as a practitioner, trying to keep up with the search engines … was incredibly challenging.”

This opaqueness surrounding Google’s algorithms persisted, even as Google grew far more dominant in the space and arguably had much less to fear from being overtaken by competitors. And as Google’s dominance grew, the impact of major algorithm changes became more severe.

SEO: The Movie looks back on some of Google’s most significant updates, such as Panda and Penguin, and details how they impacted the industry at the time. One early update, the so-called ‘Florida update’, specifically took aim at tactics that SEOs were using to manipulate search rankings, sending many high-ranking websites “into free-fall”.

Barry Schwartz describes how “many, many retailers” at the time of the Florida update suddenly found themselves with “zero sales” and facing bankruptcy. And to add insult to injury, the update was never officially confirmed by Google.

Fast-forward to 2012, when Google deployed the initial Penguin update that targeted link spam. Once again, this was an update that hit hard at SEOs who had been employing these tactics in order to rank – and moreover, at their client businesses. But because of the huge delay between one Penguin update and the next, businesses which changed their ways and went on the metaphorical straight and narrow still weren’t able to recover.

“As a consultant, I had companies calling me that were hit by Penguin, and had since cleaned up all of their backlinks,” says Rae Hoffman.

“They would contact me and say, ‘We’re still not un-penalized, so we need you to look at it to see what we missed.’ And I would tell them, ‘You didn’t miss anything. You have to wait for Google to push the button again.’

“I would get calls from companies that told me that they had two months before they were going to have to close the doors and start firing employees; and they were waiting on a Penguin update. Google launched something that was extremely punitive; that was extremely devastating; that threw a lot of baby out with the bathwater… and then chose not to update it again for almost two years.”

These recollections from veteran SEOs show that Google’s relationship with webmasters has always been fraught with difficulties. Whatever you think about Google’s right to protect its trade secrets and take actions against those manipulating its algorithms, SEOs were the ones who drove the discussion around what Google was doing in its early days, analyzing it and spreading the word, reporting news stories, featuring Google and other search companies at their conferences.

To my mind at least, it seems that it would have been fairer for Google to develop a more open and reciprocal relationship with webmasters and SEOs, which would have prevented situations like the ones above from occurring.

Where are search and SEO headed in the future?

It’s obviously difficult to predict what might be ahead with absolute certainty. But as I mentioned in the introduction, what I like about the ‘future of search’ predictions in SEO: The Movie is that they come from veterans who have been around since the early days, meaning that they know exactly where search has come from, and have a unique perspective on the overarching trends that have been present over the past two decades.

As Rae Hoffman puts it,

“If you had asked me ten years ago, ‘Where are we going to be in ten years?’ Never would I have been able to remotely fathom the development of Twitter, or the development of Facebook, or that YouTube would become one of the largest search engines on the internet.”

I think it’s also important to distinguish between the future of search and the future of SEO, which are two different but complementary things. One deals with how we will go about finding information in future, and relates to phenomena like voice search, visual search, and the move to mobile. The other relates to how website owners can make sure that their content is found by users within those environments.

Rand Fishkin believes that the future of SEO is secure for at least a few years down the line.

“SEO has a very bright future for at least the next three or four years. I think the future after that is more uncertain, and the biggest risk that I see to this field is that search volume, and the possibility of being in front of searchers, diminishes dramatically because of smart assistants and voice search.”

Brett Tabke adds:

“The future of SEO, to me, is this entire holistic approach: SEO, mobile, the web, social… Every place you can put marketing is going to count. We can’t just do on-the-page stuff anymore; we can’t worry about links 24/7.”

As for the future of search, CEO of Ignite Visibility John Lincoln sums it up well at the very end of the movie when he links search to the general act of researching. Ultimately, people are always going to have a need to research and discover information, and this means that ‘search’ in some form will always be around.

“I will say the future of search is super bright,” he says. “And people are going to evolve with it.

“Searching is always going to be tied to research, and whenever anybody needs a service or a product, they’re going to do research. It might be through Facebook, it might be through Twitter, it might be through LinkedIn, it might be through YouTube. There’s a lot of different search engines out there, and platforms, that are always expanding and contracting based off of the features that they’re putting out there.

“Creating awesome content that’s easy to find, that’s technically set up correctly and that reverberates through the internet… That’s the core of what search is about.”

SEO: The Movie is definitely an enjoyable watch and at 40 minutes in length, it won’t take up too much of your day. If you’re someone who’s been around in search since the beginning, you’ll enjoy the trip down Memory Lane. If, like me, you’re newer to the industry, you’ll enjoy the look back at where it came from – and particularly the realization that there are some things which haven’t changed at all.

Progressive Web Apps versus Android Instant Apps: Which is better for marketers?

Much has been made of the fight between mobile apps and the mobile web, but the line between the two is no longer as clear-cut as it used to be.

Broadly speaking, a mobile-friendly or mobile-responsive website is less costly and time-consuming to develop than a native mobile app, and tends to attract a wider audience – it’s quick to access, with no downloading or storage required.

Native mobile apps, meanwhile, tend to offer a better user experience and see more engagement from a dedicated core of users who are loyal enough to download a company’s app and come back to it time and time again.

But in the last couple of years, two hot new contenders have joined the mix, aiming to combine some of the best features of the mobile web and the app world for a better all-round mobile experience: Progressive Web Apps (PWAs) and Android Instant Apps.

Image via Google Developers

Both Progressive Web Apps and Android Instant Apps are Google initiatives that put a new spin on the traditional mobile app. Both aim to provide a faster-loading, slimmed-down mobile experience; so you can be forgiven for wondering what exactly the difference is between the two.

In this article I’ll sum up the key features of Progressive Web Apps and Instant Apps, look at the differences between the two, and examine which offers a better proposition for businesses who are considering investing in one or the other.

What are Progressive Web Apps?

Andy Favell recently wrote a great piece for Search Engine Watch about the latest developments with Progressive Web Apps in the wake of Google I/O. In it, he explained:

“Progressive Web Apps are a Google innovation designed to combine the best features of mobile apps and the mobile web: speed, app-like interaction, offline usage, and no need to download anything.”

Google’s Developer page about Progressive Web Apps describes PWAs as “user experiences that have the reach of the web and are reliable, fast and engaging”. While at base PWAs are mobile webpages, they are designed to act and feel like apps, with fast loading and offline usage.

This immediately eliminates one of the biggest drawbacks of the mobile web: that mobile web pages depend on an often-shaky data connection that can lead to a poor experience and long, frustrating load times.

Image via Google Developers

Progressive Web Apps can also be saved to a user’s home screen, so that they can be launched with the tap of an icon just like a regular app can.

Google encourages developers to build Progressive Web Apps to an established standard, which, when met, will cause Chrome to prompt the user to add the PWA to their home screen.
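
One baseline requirement of that standard, alongside a web app manifest served over HTTPS, is a registered service worker. As a minimal sketch – the /sw.js path is a placeholder for the site’s own service worker file – registration looks like this:

```typescript
// Hypothetical sketch: register a service worker, one of the requirements
// (with a web app manifest and HTTPS) that Chrome checks before offering
// to add a PWA to the user's home screen.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js") // placeholder path
    .then((reg) => console.log("Service worker registered, scope:", reg.scope))
    .catch((err) => console.error("Service worker registration failed:", err));
}
```

The service worker is also what caches assets for the offline usage and push notifications mentioned below.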

Brands who have already jumped on the PWA bandwagon include Twitter (whose PWA, Twitter Lite, sees 1 million daily visits from users’ homepage icons), Forbes, Expedia, Alibaba, the Washington Post, and even former native app-only companies like Lyft.

PWAs already offer many traits that we associate with native apps, including push notifications, geolocation, access to device features like the camera and microphone, and as mentioned above, offline working and icons on the home screen.

At the same time, they give organizations access to the benefits of the mobile web including easy discoverability and shareability (just send a link), universal access regardless of device (no need to release a separate iOS or Android app – although PWAs don’t quite have full functionality on iOS yet; more on that later), and the ability to bookmark individual links.

This sounds like a very compelling proposition for companies who aren’t sure whether to invest in a mobile site or a mobile app, or who want to significantly improve the experience of their mobile site for users.

So why did Google, after already having developed Progressive Web Apps, go on to launch Android Instant Apps in 2016? What is the difference between the two?

What are Android Instant Apps?

Android Instant Apps are fully-fledged native Android apps that are designed to work in a very specific way. Like Progressive Web Apps (or any mobile site, for that matter) they can be shared via a link, which when opened will give the recipient access to a stripped-down version of the app.

So, in the example that Google used at I/O in 2016, one user could send another a link to the recipe section of the Buzzfeed Video app; the recipient would then be able to open it and access the part of the app that was linked to – in this case, recipe videos – without downloading it.

Screencap via Android Developers on YouTube

If they wanted to access the rest of the app, they would need to then download the full version, but this could be done easily without performing an additional search in the Play store.

Android Instant Apps are designed to be effectively the same as using a regular Android app, to the point where users may not even notice that they are using the feature. The only indicator that they are accessing an Instant App is a simplified app interface.

Apart from Buzzfeed, brands known to be using Instant Apps include The New York Times Crossword, Periscope, Viki (a video streaming service for Asian TV and film), football app Onefootball and video hosting service Vimeo.

Some of the brands currently using Android Instant Apps, including Onefootball, Vimeo and The New York Times. Image via Android Developers Blog

Android Instant Apps set out to tackle many of the same problems as Progressive Web Apps: they are designed to launch quickly, provide a user-friendly interface, and avoid cumbersome and data-costly downloads.

The feature is designed as an upgrade to existing Android apps, rather than an additional app that companies need to develop. For organizations that already have an Android app, upgrading probably seems like a no-brainer.

But for those who might not have an app yet, do Instant Apps make a persuasive enough case by themselves for developing an Android app? Or might they be better off putting their time into developing a Progressive Web App?

Progressive Web Apps versus Android Instant Apps

On an individual feature basis, here is how Progressive Web Apps and Android Instant Apps compare to one another:

| Progressive Web Apps | Android Instant Apps |
| --- | --- |
| ✔ App-like interface | ✔ App-like interface |
| ✔ Offline usage | ✔ Offline usage |
| ✔ Fast loading | ✔ Fast loading |
| ✔ No need to download an app/visit the app store | ✔ No need to download an app/visit the app store (✘ unless you want to access the full version of the app) |
| ✔ Shareable via a link | ✔ Shareable via a link |
| ✔ Icon on the home screen | ✔ Icon on the home screen |
| ✘ Lacks integration with some smartphone features (e.g. flashlight, contacts, Bluetooth, NFC) | ✔ All the features of a native app |
| ✘ Not yet supported by every OS (PWAs can be used on iOS/Safari and Windows/Microsoft Edge but have no offline functionality or push notifications) | ✘ Android only |
| ✔ Can be crawled by search engines | ✘ Not discoverable by search engines |
| ✔ No need to develop a fully-fledged app (✘ but you do still need to develop a web app that meets Google’s standards) | ✘ Need to develop a fully-fledged Android app (✔ unless you already have one, in which case you can just upgrade) |

In that list, you may have seen some features which especially appeal to you, some which might be deal-breakers and have put you off one option or the other, or some “cons” which aren’t enough of a deal-breaker to put you off.

Point-for-point, however, the two look about equal. So in the interests of settling the debate: which one is the better option for marketers?

Which is better for marketers: Progressive Web Apps or Android Instant Apps?

Well… Sorry to let you down after you’ve made it this far, but the issue isn’t quite as clear-cut as I’ve framed it to be.

As with the “mobile app versus mobile web” debate, no one option is inherently better than the other (although one can be cheaper or quicker to develop than the other), because it all depends on the needs of your brand and what you want your mobile experience to deliver.

What PWAs and AIAs have done is mitigate some of the biggest drawbacks of the mobile web and mobile apps, respectively, so that it’s possible to almost have the best of both worlds no matter what you decide.

If you’re trying to decide between building a regular mobile site (whether mobile-optimized, mobile-friendly or mobile-first) or a PWA, a Progressive Web App is a no-brainer. And if you already have an Android app (or were going to build one), upgrading to an Instant App would bring a lot of additional benefits.

Image via Android Developers

The lack of iOS support for both is an obvious drawback, although in this respect PWAs just edge ahead, as Safari is reported to be considering support for Service Workers, the feature that enables PWAs’ offline usage and push notifications. (Chrome, Firefox and Opera all currently support Service Workers, and Microsoft Edge is in the process of developing support.)

Ultimately, the best solution might be a combination of several. Google Developer Advocate Dan Dascalescu points out in his article ‘Why Progressive Web Apps vs. native is the wrong question to ask’ that “if you already have a product, you already have an app, a web presence, or both, and you should improve both. If you don’t have a product, then if you have the resources to build native Android + native iOS + web apps, and keep them in sync, go for it.”

If you don’t need Android-specific native features, he reasons, then you can cover your bases with the combination of a PWA and a native iOS app. Though in some cases, building a PWA can lead to increased adoption even on iOS; AliExpress, Alibaba’s answer to eBay, saw an 82% increase in conversion rate on iOS after launching a Progressive Web App.

Progressive Web Apps have been around and available to organizations a little longer than Android Instant Apps, so there are a few more use cases and examples of why they work than there are for Instant Apps. Over the next year or so, I predict that we’ll see wider adoption of Instant Apps, but only from those brands who had already developed Android native apps anyway.

Ultimately, for those companies for whom developing a native Android app makes sense, nothing has really changed. Companies who were undecided between investing in mobile web versus a native app may have more reasons to plump for mobile web now that Progressive Web Apps have come along – especially once PWAs have full support in Safari and Microsoft Edge.

I can see PWAs becoming the more widespread choice for organizations once they work across all devices, as they truly do combine the best features of mobile web and apps, while also being universally accessible. But they’re not going to eliminate the need for apps entirely.

The upshot of it all is that whether organizations adopt Progressive Web Apps or Android Instant Apps, users will get a better experience – and that benefits everyone.


This article was originally published on our sister site, ClickZ, and has been reproduced here for the enjoyment of our audience on Search Engine Watch.

Should Google be more transparent with its updates?

It might seem hard to recall now, but there was a time when Google would regularly announce updates to its ranking algorithms, confirming what they were and how they would affect websites.

During these halcyon days, information about Google ranking updates was generally delivered via Google engineer and head of Google’s Webspam Team Matt Cutts, who was to many marketers the public face of Google.

As someone who was involved in helping to write the search algorithms himself, Matt Cutts was an authoritative voice about Google updates, and could be depended on to provide announcements about major algorithm changes.

Since Cutts’ departure from Google, however, things have become a lot more murky. Other Google spokespeople such as Gary Illyes and John Mueller have been less forthcoming in confirming the details of algorithm updates, and the way that Google makes updates has become less clearly defined, with regular tweaks being made to the core algorithm instead of being deployed as one big update.

Occasionally Google will go on record about an upcoming major change like penalties for intrusive interstitials or a mobile-first search index, but this has become the exception rather than the rule. A glance down Moz’s Google Algorithm Change History shows this trend in action, with most recent updates referred to as “Unnamed major update” or “Unconfirmed”.

The world of SEO has adapted to the new status quo, with industry blogs fervently hunting for scraps of information divulged at conferences or on social media, and speculating what they might mean for webmasters and marketers.

But does it have to be this way? Should we be taking Google’s obscurity surrounding its updates for granted – or, given the massive influence that Google holds over so many businesses and websites, are we owed a better level of transparency from Google?

A “post-update” world

At last month’s SMX West search marketing conference, the topic of ‘Solving SEO Issues in Google’s Post-Update World’ was a key focus.

But even before SMX West took place, the issue of Google’s lack of transparency around updates had been brought front and center with Fred, an unnamed and all but unconfirmed ranking update from Google which shook the SEO world in early March.

Fred had an impact on hundreds of websites which saw a sudden, massive drop in their organic search rankings, leaving website owners and SEOs scrambling to identify the cause of the change.

But Google consistently refused to go on record about the algorithm update and what was causing it. It only gained the name ‘Fred’ thanks to a flippant comment made by Google’s Gary Illyes that “From now on every update, unless otherwise stated, shall be called Fred”.

When pressed about Fred during a Google AMA session at SMX West, Illyes replied that the details about what Fred targeted could be found “in the webmaster guidelines”, but declined to give more specifics.

After the Fred update hit, reports surfaced that the algorithm change seemed to be targeting websites with poor link profiles, or those that were ad-heavy with low-value content.

Evidently, the websites affected were engaging in poor SEO practices, and it can be argued that sites that do this shouldn’t be surprised when they are hit with a ranking penalty by Google.

However, if Google wants to clean up the web by rewarding good practices and punishing bad ones – as its actions would suggest – then wouldn’t it be more beneficial to confirm why websites are being penalised, so that their owners can take steps to improve? After all, what’s the point of a punishment if you don’t know what you’re being punished for?

On the other hand, you could argue that if Google specified which practices webmasters were being punished for, this would only help bad actors to avoid getting caught, not provide an incentive to improve.

The pros and cons of Google transparency

In the wake of Google Fred, I asked the Search Engine Watch audience on Twitter whether they thought that Google owed it to its users to be more transparent.

Several people weighed in with strong arguments on both sides. Those who agreed that Google should be more transparent thought that Google owed it to SEOs to let them know how to improve websites.

Additionally, if Google expects website owners to make their sites more user-friendly, then maybe Google should be informing them what it thinks the user wants.

We’ve already seen how this can work in practice, with Google’s mobile-friendly ranking signal giving webmasters an incentive to improve their mobile experience for users.

Others argued that with so many bad actors and black hat SEOs already trying to abuse the system, complete Google transparency would lead to chaos, with people gaming the system left, right and center.

One Twitter user made an interesting point that Google might not necessarily want to help SEOs. At the end of the day, all SEOs are trying to game the system to some extent. Search engine optimization is a game of finding the right combination of factors that will allow a website to rank highly.

Some play by the rules and others cheat, but at the end of the day, there is an element of manipulation to it.

We have a tendency to assume that Google and SEOs – at least of the white hat variety – are on the same side, working to achieve the same goal of surfacing the most relevant, high quality content for users. By that logic, Google should help good SEOs to do their job well by disclosing details of algorithm updates.

But if Google and search specialists aren’t really on the same side, then what obligation does Google have to them?

Is obsessing about updates missing the point?

Maybe all of this debate about algorithm transparency is missing the point. If we agree that website owners should be giving users the best experience possible, then perhaps they should be concentrating on that rather than on the “game” of trying to rank highly in Google.

Michael Bertini, Online Marketing Consultant and Search Strategist at iQuanti, who has worked on all things search for many years, believes that website owners should do exactly that.

“In all my years doing this with both black hat and white hat methods, the best thing anyone could ever do is to do things for the end-user, and not for Google.

“Have you ever Google searched something in the morning and then by noon, it’s dropped a position? This happens all the time. Granted it mostly happens on page three and above, but every once in a while we do see it on page one.

“What I tell my team and clients is this: if Google makes a change in the algorithm or you notice a drop in your rankings or even an increase in your rankings – don’t take this as permanent.”

Bertini also believes that anyone who is not actively engaging in bad SEO practices should have nothing to fear from a Google algorithm update.

“So long as you’re not keyword stuffing, buying links, building links from private networks, purchasing social followers or shares, running traffic bots, or any other tactics that could come off as trying to trick Google… you should be fine.

“Those who have to worry about algorithmic updates are usually those who are always looking for a way to manipulate Google and the rankings.”

How well do you know Search Engine Watch? The SEW Friday quiz


Following the success of our previous Easter trivia quiz, we decided to mix it up again this Friday with another quiz – this time testing how well you’ve been paying attention to the content we’ve been publishing on Search Engine Watch this week.

All of our questions (bar one, for fun!) are drawn from the past week’s worth of content, including last week’s search news roundup. So brush up and give it your best shot!


Four most interesting search marketing news stories of the week

We’re back with our weekly round-up of the most interesting search marketing news stories from around the web.

I hope you all enjoyed last Friday’s Easter search trivia quiz, and if you haven’t had a chance to test your knowledge yet, be sure to have a go and share your score with us on social media!

This week: a look at the newly relaunched Google Earth and what it could mean for marketers, and a study showing that 45% of marketers say their biggest difficulty with Schema.org markup is proving its value.

Plus, Google’s new “suggested clip” feature in search results shows how far its ability to search within videos has improved, and a new menu of Partner-only Features in Google’s developer documentation hints at some exciting things to come.

Relaunched Google Earth introduces 3D local maps, visual storytelling opportunities

Google has just unveiled a stunning relaunch of Google Earth, with a wealth of new features and information to explore. On Search Engine Watch this week, Clark Boyd gave us a tour of the new Earth, including a look at how marketers can take advantage of the visual storytelling opportunities it presents. He also considered what the relaunch means for local search, where “near me” searches will activate a 3D local map featuring business names, photographs and contact details.

45% of marketers have difficulty showing the value of Schema markup

A recent survey carried out by Schema App, a maker of tools to help marketers use Schema markup, has shed some light on the difficulties that marketers encounter when working with it.

Schema markup is often touted as a killer search tactic which is nevertheless seeing very little uptake among website owners. It can vastly improve the look of websites on the SERP with the addition of rich data, and it is integral to a number of Google features like featured snippets.

But according to Schema App’s survey, 45% of marketers say they have difficulty in “showing the value of doing Schema markup – reporting the impact and results”. Forty-two percent say they struggle to maintain the ‘health’ of their markup when Google makes changes, while 40% cite difficulties in developing a strategy around what to mark up with Schema.

Meanwhile, nearly a quarter of respondents (24%) said they had difficulty understanding Schema markup vocabulary at all.


Google shows “suggested clip” feature in search results

Google is continually improving its ability to search within video, and to surface a particular search result from inside a video’s content. In a previous search news roundup we reported that Google’s machine learning technology can now recognize objects within videos, as demonstrated at Google’s Cloud Next conference in early March.

Then this week, Ryan Rodden of Witblade reported that Google is now showing suggested video clips in search results for particular queries:

Image: Witblade

The suggested clip appeared in a query for “blur out text in imovie”, highlighting a 25-second clip in the middle of a how-to video. While it’s unknown how accurate this result was for the query, it shows that Google is making bold inroads into searching within video, and is treating video like any other kind of content to be crawled, indexed and presented as a featured snippet.

Given the huge rise in popularity of video of all forms across marketing, social media and publishing, it’s a smart move and something we can probably expect to see more of in future.

Google adds extensive new menu of Partner-only Features

Google’s Partner-only Features section is where it debuts certain search features to a select group of approved and certified providers, before they are rolled out on a wider scale. Aaron Bradley noted in the Semantic Search Marketing Google+ group this week that Google has just added a huge new menu to the Partner-only Features section of its documentation.

The new menu features eight sub-sections including “Carousels”, “Indexing API”, “Jobs” and “Live coverage”.


All of the links currently lead to a 404 error, but the menu could offer an interesting insight into what’s to come from Google.


The State of Schema.org: What are the biggest challenges surrounding Schema markup?

Using Schema.org markup, a form of structured data which helps search engines to interpret your webpages, is widely agreed to be beneficial from an SEO standpoint.

While it may not correlate directly with an increase in rankings, using Schema.org markup allows search engines to pull through rich snippets and rich data like images, reviews and opening hours, making your site appear more attractive on the SERP and thereby increasing click-through.

Schema.org markup is also becoming increasingly important in the age of voice search, acting as a signpost that points digital assistants towards the information that will correctly answer a user’s voice query. Voice queries depend heavily on implied context, and Schema markup can help give that context to an otherwise ambiguous page of text.
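To make this concrete, below is a minimal, hypothetical example of Schema.org markup in its JSON-LD form: a small block of structured data placed in a page’s HTML that spells out exactly the kind of name, opening hours and review information mentioned above. All of the business details are invented for illustration.

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Bookstore",
      "image": "http://www.example.com/storefront.jpg",
      "telephone": "+1-555-0100",
      "openingHours": "Mo-Sa 09:00-18:00",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.5",
        "reviewCount": "87"
      }
    }
    </script>

A search engine that parses this block can display the star rating and opening hours directly on the SERP, and a digital assistant answering “is the bookstore open now?” has an unambiguous source for the answer.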

But while the advantages of using Schema.org seem obvious enough on paper, actually implementing it can be much more challenging. As a result, a startlingly small minority of website owners make use of Schema.org.

The figures vary as to exactly how many: Schema.org’s own website claims that “over 10 million websites” use its markup, which translates to less than one percent of all websites; an investigation by ACM Queue put the figure at 31.3%; and a study by Bing and Catalyst found that just 17% of marketers use Schema.org markup.

Either way, even the highest estimate of Schema.org adoption still comes in at less than a third of websites.

If Schema.org is a well-known advanced search technique with well-established benefits, what is holding SEOs and website owners back from implementing it?

The state of Schema markup

Schema App – a provider of tools to help digital marketers use Schema markup – recently ran a survey which sheds some light on this question. The study, ‘The State of Schema Markup’, surveyed users of Schema.org markup on the size and type of their business, how frequently they maintained their markup, the challenges they experienced in using Schema.org, and any tools they used to tackle these problems.

It’s worth noting that the survey results were drawn from a fairly small sample of only 75 respondents, which limits how widely they can be generalized, but they nevertheless give some interesting insights into the use of Schema markup among marketers.

Perhaps surprisingly, respondents from the smallest companies – those with five or fewer employees – made up the largest share of Schema.org users: two-fifths of respondents reported that they carry out Schema markup for companies of this size.


It’s hard to say exactly why this is – maybe smaller, more agile companies are better at keeping up to date with advanced search tactics; or maybe they will do whatever it takes to stand out on the SERP and compete with larger organizations.

The second-largest group, by contrast, was made up of companies with more than 1,000 employees, although this group still only amounted to 13% of respondents.

A third of respondents to the survey came from digital marketing agencies, while 28% said they came from small or medium businesses. Sixteen percent of respondents were from enterprise organizations, while a fraction under ten percent were from start-up companies.

The job titles of respondents to the State of Schema Markup survey revealed that it’s not just SEOs who are doing Schema markup. While more than half of respondents to the survey were search specialists (either SEO specialists – 45% – or Heads of Search – 8%), digital marketers, business owners, CTOs and even CEOs were among the remaining 47%.


Another interesting finding was the frequency at which respondents update their Schema markup. Judging by the frequency of posts to the official Schema.org blog, updates to Schema.org are fairly sporadic, sometimes coming two or three months apart, other times going six or seven months without an update.

Google updates, like the recent introduction of rich results for podcasts to the SERP, can also give marketers an incentive to add new markup, as can regular site maintenance. However, I was surprised that close to a fifth of respondents (19%) said that they update their Schema markup every day.

A further 31% of respondents update their markup weekly, while the largest proportion (39%) update their markup once a month. An unstated percentage – visually around 8% on the survey’s chart – say they work on their markup only once.

The biggest challenges surrounding Schema markup

Anyone who has tried to tackle Schema.org markup (or write a blog post about it), particularly without much of an understanding of code, knows that implementing it can be easier said than done. Even tools like Google’s Structured Data Markup Helper have their limitations, making it necessary to understand markup if you want to fill in the gaps.
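Part of the difficulty is that Schema.org vocabulary can be expressed in more than one syntax. The same simple fact – a business’s opening hours, say – can be marked up inline as microdata attributes woven into the page’s existing HTML, or as a standalone JSON-LD block, and the two look like entirely different languages. A rough, hypothetical illustration of both, with the details invented:

    <!-- Microdata: attributes added to the page's existing HTML -->
    <div itemscope itemtype="http://schema.org/LocalBusiness">
      <span itemprop="name">Example Bookstore</span>
      <time itemprop="openingHours" datetime="Mo-Sa 09:00-18:00">Mon-Sat, 9am-6pm</time>
    </div>

    <!-- JSON-LD: the same information as a separate script block -->
    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Bookstore",
      "openingHours": "Mo-Sa 09:00-18:00"
    }
    </script>

Neither form behaves like the HTML or JavaScript a developer typically writes, which goes some way towards explaining the reactions below.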

This reality was reflected in the comments from marketers who took the Schema App survey. One respondent wrote,

“When I first learned about the existence of schema, I was so confused on how to implement it. I am not a developer. After trying many online generator tools and finding them unsatisfactory, I turned to my programmer hoping he could take over this task for me. He explained it was a different code altogether than what he writes. I felt overwhelmed when he confided he had no idea at all how to do it, even after spending a little time looking at it.”

Another respondent observed that “The examples given on schema.org were not clear and sometimes it seemed they did not follow even their own rules.” A third described Schema.org markup as feeling “a bit like witchcraft”.

Although a number of search blogs – Moz, WordStream, Yoast and indeed yours truly among them – have set out to write guides on how to use Schema.org markup, the resources available to help with this process remain limited; and comments on the State of Schema Markup survey reveal that many of those that do exist are flawed.

“Worse is that some of the schema is supported … but not in the Structured Data Testing Tool,” one respondent wrote.

Another wrote that, “It’s still very much a trial and error process for me as I find that some of the guides out there, when put through Google’s tool, don’t actually parse correctly. Very frustrating…”

Overall, the most widely agreed-upon problem experienced by survey respondents was “Showing the value of doing schema markup – reporting the impact and results” (reported by 45%). Close behind this was “Maintaining ‘health’ of Schema markup when Google makes changes” (reported by 42%).

Two-fifths of respondents cited difficulties in developing a strategy around what to mark up with Schema, while 37% struggled with how to implement Schema markup at scale. Few solutions exist for the bulk markup of webpages, which can create huge challenges for companies with large websites, on top of the difficulties we’ve covered already.
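One common workaround for markup at scale is to generate the JSON-LD from the same templates that build the pages themselves. As a rough sketch – assuming a server-side templating engine, with the {{ ... }} placeholders and field names invented for illustration – a product page template might emit something like:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Product",
      "name": "{{ product.name }}",
      "image": "{{ product.image_url }}",
      "sku": "{{ product.sku }}",
      "offers": {
        "@type": "Offer",
        "price": "{{ product.price }}",
        "priceCurrency": "USD",
        "availability": "http://schema.org/InStock"
      }
    }
    </script>

Written once, a template like this marks up every product page on the site – though it still has to be re-validated in Google’s Structured Data Testing Tool whenever the vocabulary or Google’s requirements change, which is exactly the markup ‘health’ problem respondents describe.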


Although it ranked near the bottom of the list of concerns, close to a quarter of respondents (24%) still named “Understanding Schema markup vocabulary” as one of their biggest obstacles to carrying out Schema markup.

And as we’ve seen, this comes from a group of marketers, most of whom use Schema markup habitually – no wonder the wider marketing community is having trouble getting on board with Schema.org.

Tools for tackling Schema markup

Finally, respondents were asked what tools they use to solve the problems they experience with Schema markup, from a range of options including WordPress plugins, Wordlift, web JSON-LD generators and Schema App’s own tool – or no tools at all.

The last of these options was the most common by far, with 40% of respondents saying that they do all of their Schema markup manually. I can’t help but notice that this corresponds exactly to the percentage of respondents from small companies with five or fewer employees – I wonder if there could be some correlation there.

Fifteen percent of respondents said they make use of Schema App’s own tool, while 13% use WordPress plugins. Another 8% use web JSON-LD generators, while 24% use tools other than those listed in the survey.


One business owner wrote that they tend to solicit help on Schema markup from online communities: “I ask for help in online communities and usually get answers. The definitions and examples have become better over time in both schema.org and Google.”

A Head of Search at an enterprise company wrote that they use “Internally developed tools and markup checkers that were developed for our specific needs.”

For those two-fifths of respondents who opt to do their Schema markup without the aid of automated tools, this could be due to a lack of technical resources, a lack of confidence in automated solutions, or perhaps because they simply don’t know that these tools exist.

But we can clearly see that there is a demand in the marketing and search community for more accurate and helpful resources surrounding Schema.org, whether these be in the form of web generators, apps, or how-to guides and tutorials.

Perhaps Schema.org needs to take the initiative to make its markup language more accessible by creating these, or perhaps they will be created by an interested third party. Either way, without them, we are unlikely to see the dial shift much on the uptake of Schema markup among marketers and SEOs, no matter how useful it is.

Test your knowledge! The Search Engine Watch Easter trivia quiz

Happy Easter, Search Engine Watch readers!

In honour of the season – and a bit of spring-like weather finally starting to creep into the air (well, at least here in the UK…) – we have put together a light-hearted search trivia quiz to test your knowledge.

What was the name of the project launched by Sergey Brin and Larry Page at Stanford, which eventually became Google? What does the name of the Chinese search engine “Baidu” translate to in English? What year did mobile web traffic finally overtake desktop traffic globally – or hasn’t it yet?

Test your knowledge of all these questions and more in the Search Engine Watch Easter quiz. And don’t forget to share your score with us in the comments or on social media!