All posts by Clark Boyd


What do we know so far about Google’s new homepage?

Google has released a new, feed-based mobile homepage in the US, with an international launch due in the next two weeks.

This is perhaps the most drastic and significant update of the homepage (the most visited URL globally) since Google’s launch in 1998.

The upgraded, dynamic entry point to the world’s biggest search engine will be available initially on mobile devices via both the Google website and its mobile apps, but will also be rolled out to desktop.

Let’s take a look at what’s changing and how, as well as what it might mean for marketers.

What’s different about the new homepage?

Google’s new homepage allows users to customize a news feed that updates based on their interests, location, and past search behaviors.

On the website (via a mobile device), there are now four icon-based options: Weather, Sports, Entertainment, and Food & Drink.

The ‘Weather’ and ‘Food & Drink’ options can be used straight away, as they take the user’s location data to provide targeted results. The ‘Sports’ and ‘Entertainment’ options require a little more customization before users can benefit from them fully. Without this, Google will just serve up popular and trending stories within each category.

In the example below, I tapped on the ‘Sports’ icon, then selected to follow a baseball team, the Boston Red Sox. Based on this preference, Google then knows to show me updates on this team on my homepage. The results varied in their media format, with everything from Tweets to GIFs and videos shown in my feed.

This means that rather than encountering the iconic search bar, Google logo, and the unadorned white interface we have all become accustomed to, each user’s feed will be unique. As I start to layer on more of the topics I am interested in, Google gains more information with which to tailor my feed.

On the Google mobile app, based on my selection above, my homepage looks as follows:

This is quite a big departure, and an experience we should expect the website to mirror soon. For now, the website retains enough of the old aesthetic to be recognizable, but the app-based version is more overt in its positioning of suggested content.

The trusty search bar is still there, but users are encouraged to interact with their interests too. The interface is designed for tapping as well as typing.

Shashi Thakur, a Google engineer, has said of the launch,

“We want people to understand they’re consuming information from Google. It will just be without a query.”

It is essentially an extension of the functionality that has been available in Google’s Android app since December. Google will also continue to use push notifications to send updates on traffic, weather, and sports, based on the user’s set preferences.

Why is Google launching this product now?

Google has struggled to find a significant commercial hit to rival its hugely lucrative search advertising business. That business relies on search queries and user data, so anything that leads users to spend more time on Google will be of significant value.

The same motive has led to the increased presence of Google reservations, which now allow users to make appointments for a range of services from the search results page.

As Google stated in their official announcement, “The more you use Google, the better your feed will be.”

Users type a query when they have an idea of what they want to find; Google is pre-empting this by serving us content before we are even aware of what exactly we would like to know. By offering a service that will increase in accuracy in line with increased usage, Google hopes users will get hooked on a new mode of discovering information.

This also allows Google to incorporate a number of other initiatives it has been working on, such as fact-checking and Google Posts.

You’d be forgiven for wondering whether Google is trying to find its way into social media again. After its Google+ platform failed to gain traction, Google has seen Facebook grow into a credible threat in the battle for digital advertising dollars.

Facebook’s algorithmic news feed has been a significant factor in its rise in popularity, and with Google Posts incorporated into this news feed, there are certainly elements reminiscent of a certain social network in Google’s new homepage initiative. Readers may also recall the launch of iGoogle in 2005, a similar attempt to add some personalization to the homepage.

That said, it seems more likely that these changes have been rolled out in response to recent launches from Amazon than as a direct challenge to Facebook.

Amazon has made an almost dizzying number of product announcements and acquisitions of late. The pure-play ecommerce company’s rapid growth will have been cause for consternation at Google, and there is a need to respond.

Of particular interest in relation to the new Google feed is the very recent launch of Amazon Spark, a shoppable feed of curated content for Amazon Prime members. It is only available via the iOS app for now, but it will be launched on Android soon too.

Spark is a rival to Instagram in some ways, with its very visual feed and some early partnerships with social media influencers. It is also similar to Pinterest, as it encourages users to save their favorite images for later and clearly tries to tap into the ‘Discovery’ phase that Pinterest has made a play for recently.

Amazon has also launched its ‘Interesting Finds’ stream, which works in a noticeably Pinterest-esque fashion:

Google has taken aim at Pinterest with its ‘Similar items’ feature and its revamped visual search technology, which feeds the new Google Lens.

In Google’s announcement of the new homepage, they make use of the verbs “discover” and “explore”. Both Amazon and Pinterest have tried to shape and monetize these phases of the search-based purchase journey; Google evidently thinks its homepage needs to take on a new life if it is to compete.

Will it open new opportunities for marketers?

Almost certainly. We should view this as a welcome addition to current search strategies, bringing a host of new opportunities to get in front of target audiences.

Google is not launching this product because of any existential threat to its core search product, which still dominates Western markets:

Source: Moz/Jumpshot

The update should encourage a shift in user behavior. As people get used to the new experience, they will interact with Google in new ways and marketers need to be prepared for this.

From a paid perspective, we can expect to see new options open to advertisers, but not in the immediate future.

Amazon has two innate monetization mechanisms within Spark: users have to sign up to Prime (for an annual fee) to get access and, when they do, they are served a shoppable list of results. It comes as no surprise when we are on Amazon that we will be asked if we want to buy products.

That is not always the case on Google, where the initial purpose of the news feed is to gain traction with users and encourage them to spend more time within the site.

Options for sponsored content and (almost inevitably) paid ecommerce ads will come later, once a large and engaged user base has been established.


Semantic Search: What it Means for SEO in 2017

The combination of semantics (the science of meaning in language) with search engines that process billions of queries seems a very natural one.

Semantic search has been effective, too; by understanding the intent of a query and the context of the user, the accuracy of results on search engines like Google and Bing has increased significantly.

Search engine results pages today look markedly different to their earlier iterations and, with improvements in local search, voice recognition, and machine learning, they will continue to change over the next few years too.

There is a lot of fascinating theory behind all of this, but we can sometimes focus on it to the detriment of our day-to-day work.

Significant algorithm updates like Hummingbird, or the more recent launch of RankBrain, have a big impact on users. As marketers, we need to know exactly what this means for our strategy, our expectations, and our campaign measurement.

As such, this article will focus on some real-world examples of semantic search and provide a practical framework to help marketers avail of the opportunities it brings.

Semantic search in action

Let’s start with a simple example to shed light on how semantic search works. We’ll use a common, everyday search query like [will smith]. This screenshot is what I see above the fold on desktop:

When Google processes this query, it recognizes instantaneously that I am searching for the actor and all-round entertainer Will Smith, but also that the intent of my search is unclear. Therefore, it serves a varied array of options for me to click on. I may want to read news about the Fresh Prince, I may want to see his filmography, I may want to see if he has any new albums in the pipeline. Perhaps I want to see all three.

As is highlighted on the right-hand side in the knowledge panel, Google can retrieve all of this information from its index of 808,000,000 Will Smith-related results, but also from its own vast database of information about noteworthy people and institutions.

I can help Google out here by refining my search. Next, I ask [who is he married to]:

As we can see, cards are pulled to the top of the results page to highlight his current and former spouses.

This is a demonstration of conversational search in action.

Just like a person would in a conversation, Google knows the ‘he’ in my question refers to Will Smith. I don’t need to state this again. Google also needs to know what the connection is between ‘he’ and both Jada Pinkett Smith and Sheree Zampino.

These may seem like minor changes, but they hint at a fundamental shift in how Google works. Factor in voice search and it is easy to see how important this conversational element is.

If we extend this out to ask about Will Smith’s music, we can start to conceptualize just how complex Google’s network of interconnected entities is:

Asking what an artist’s best song is strays into the realm of subjectivity, so Google pulls the track listing from Will Smith’s greatest hits. Or at least, I hope that’s what’s happening here. If Google genuinely thinks ‘Girls Ain’t Nothing But Trouble’ is Will Smith’s best song, I’ll lose faith in them.

In terms of natural language processing, however, this search query is now quite convoluted. In this last instance, Google has had to keep track of whom we’re asking about, having deviated once already to ask who his spouse is, and then pull an indirect, best-fit answer to my question about Will Smith’s best song.

Let’s try one more, then we’ll give Google a break:

You get the idea.

We’ve come an awfully long way from the exact keyword matching of just a few years ago.

Furthermore, all of this serves an important illustrative purpose, and one that matters for anyone who wants to rank via SEO in 2017.

Why does it matter for brands?

The technology that underpins the above answers is utilized for all queries, so it is very significant for brands. Just launching a page on a website and ‘optimizing it for SEO’ clearly isn’t going to cut it any more.

Let’s say, for argument’s sake, that I run a peanut butter e-commerce site. Logic dictates that I will want to rank first in organic search for [peanut butter]. The results from my location look like this:

We can see the same principle applied to the earlier Will Smith query, but with very different results – both in their format and their content.

I may want to rank for [peanut butter] with my e-commerce site, but unless I have a physical store I can use to rank via local listings, the chances look slim. There are a few organic results above the fold (an anomaly these days), but only one brand that actually produces the product. There is a recipe with an accompanying image, however, and a link to more images, so perhaps these formats would be a more appropriate, achievable way to get onto page one.

At the bottom of this search results page, Google actually provides some strong clues about what people are really looking for when they search for peanut butter:

These related searches are more specific and give us a good idea of which topics we should cover on our site. There is a nice variety of different topics here, all of which are worthy of more investigation.

To pick one, we’ll go with the ‘peanut butter ingredients’ route. If I search for [what is in peanut butter?], Google serves the following results:

We can already sense some opportunities for an e-commerce site either to branch out its content strategy to answer questions, or potentially to partner with a site that already ranks well for these queries.

The ‘People also ask’ list is a fantastic resource for users and SEO marketers, but we should be aware of just how dynamic that list is. It takes on a concertina effect and expands based on our interactions with it.

Once more, the need to approach SEO in 2017 with an open mind is evident. We can’t control how this list will function at scale; all we can do is put ourselves in the best possible position to answer common questions.

In the screenshot below, there are two examples of how the list changes based on the questions a user clicks on. On the left-hand side, I have clicked on a protein-related question and, therefore, Google provides more protein-based questions below the original list of four. On the right-hand side, I have initially clicked on ‘Is eating peanut butter good for you?’

The ‘People also ask’ box ends up looking completely different in these two instances, which both began with the exact same query.

Note that a lot of similar questions are phrased slightly differently, but Google knows that the underlying meaning is essentially the same. As such, we don’t need to slavishly devote ourselves to answering the exact questions that receive the most searches in order to rank.

This brings with it opportunities and challenges, outlined in the four-step process below.

Four steps to rank via semantic search

We can’t control exactly which queries we will rank for, but we can certainly increase the probability that we will improve our organic visibility if we work through these four stages.


Google provides a lot of useful information via suggested search, ‘People Also Ask’, and related searches. You could use these to collect a list of direct questions that you can be certain people are asking, as a starting point.

Although keyword-level search volumes are impossible to obtain with any serious degree of accuracy now, there are still some useful tools that provide insight into search trends. Google’s own keyword planner is quite limited for SEO nowadays, but you can use PPC-based insights to help shape your content strategy.

There are also tools like Moz’s Keyword Explorer, which are very helpful for shaping broader SEO strategies while still keeping an eye on where the search volume is.

Personally, I find Answer the Public to be a useful guide when trying to figure out all the interrelated questions and pain points consumers have when thinking about a product or service.

Collate a list of all the navigational, informational, and commercial queries related to your site, then sub-categorize them by their semantic links to each other.

From here, you can start thinking about how to structure this to ensure maximum SEO visibility.
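The collation step described above can be sketched in a few lines of code. This is a minimal illustration, assuming a simple keyword-cue heuristic for query intent; the cue lists and example queries are invented for the sketch, not a production taxonomy:

```python
# Minimal sketch: bucket queries by likely intent before mapping them to pages.
# The cue lists below are illustrative assumptions, not an exhaustive taxonomy.
NAVIGATIONAL_CUES = {"login", "website", "homepage", "contact"}
COMMERCIAL_CUES = {"buy", "price", "cheap", "best", "deal"}

def classify(query: str) -> str:
    """Assign a query to a navigational, commercial, or informational bucket."""
    words = set(query.lower().split())
    if words & NAVIGATIONAL_CUES:
        return "navigational"
    if words & COMMERCIAL_CUES:
        return "commercial"
    return "informational"  # default bucket for questions and research queries

def collate(queries):
    """Group a flat query list into intent buckets."""
    buckets = {"navigational": [], "informational": [], "commercial": []}
    for query in queries:
        buckets[classify(query)].append(query)
    return buckets

queries = [
    "peanut butter brand website",
    "what is in peanut butter",
    "buy organic peanut butter",
    "is peanut butter good for you",
]
print(collate(queries))
```

In practice the sub-categorization would be driven by the question and topic data gathered from suggested search, ‘People Also Ask’, and your keyword tools, rather than a hand-written cue list.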


Site structure is a fundamental aspect of semantic search performance.

You should think of your products or services as entities that each contain a multitude of connotations and associations. Build those connotations in vertically to cover a range of user needs, and link them to other entities horizontally in the site taxonomy. By mapping keyword groups or common questions to landing pages, you can ensure that each URL on your domain has a defined purpose.
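As a rough sketch of that mapping exercise, each landing page can be paired with the query group it targets, and a quick check can then flag any query claimed by more than one URL (a common cause of keyword cannibalization). The URLs and queries here are hypothetical examples:

```python
from collections import defaultdict

# Hypothetical mapping of landing pages to the query groups they target.
page_targets = {
    "/peanut-butter/": ["peanut butter", "buy peanut butter"],
    "/peanut-butter/ingredients/": ["what is in peanut butter",
                                    "peanut butter ingredients"],
    "/peanut-butter/recipes/": ["peanut butter recipes",
                                "peanut butter cookie recipe"],
}

# Invert the mapping: which URLs claim each query?
claimed = defaultdict(list)
for url, queries in page_targets.items():
    for query in queries:
        claimed[query].append(url)

# Any query claimed by two or more URLs undermines the "one defined
# purpose per URL" principle and should be reassigned.
overlaps = {query: urls for query, urls in claimed.items() if len(urls) > 1}
print(overlaps if overlaps else "every query maps to exactly one page")
```

A check like this is easy to rerun whenever new landing pages are added to the proposed taxonomy.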

Changes to site structure normally require buy-in from multiple stakeholders, so I would advise visualizing your proposed site taxonomy as early as possible.

How you present this will depend on your intended audience and how they think. For more logical thinkers, Writemaps is a great way to produce simple but effective site structure visualizations.

If you require a more conceptual approach to emphasize semantic relationships, or even the amount of internal link value you want to send to each area of the site, you can use word cluster software like Smartdraw to get your point across.


The next step is to populate your site structure with content that meets user needs. This framing is effective because consumer needs and desires remain relatively constant, and the ideal functioning of a search engine will always seek to satisfy those underlying motivations. So if you can create content to cover every aspect of the typical consumer journey, you will be rewarded.

Bear in mind what we have seen from the examples above, too. Multimedia results are hugely significant, so try to include a range of assets that fit users’ (and Google’s) expectations. Most rank tracking software providers now offer products that show which result formats are most prevalent for different types of queries, so use these to guide your efforts.


Measurement has become a significant challenge when viewed through the lens of old performance indicators such as ranking positions. It is very difficult to track individual ranking positions, as they are never static. Search results pages act like living organisms now, so we need to take a broader perspective on measurement.

Track the metrics that matter most to your business, rather than just looking at rankings. The aim should always be to use SEO to affect those metrics anyway, so incorporate them within your campaign tracking.

Moreover, the bigger ranking software companies have created their own metrics to measure SEO visibility which, when combined with what you see in your analytics dashboard, will provide a lot of insight into whether your strategy is working.

We can’t approach measurement like we used to, but we can still tell when SEO is making a positive contribution.


For more on semantic search and its ever-changing impact on the SERP, check out our round-up of five important updates to Google semantic search you might have missed.


The Ultimate Guide for an SEO-Friendly URL Structure by @clarkboyd

Follow these URL structure guidelines to set your websites up for future SEO success.

The post The Ultimate Guide for an SEO-Friendly URL Structure by @clarkboyd appeared first on Search Engine Journal.


AdWords Editor 12: Everything you need to know

Google has launched AdWords Editor 12, the latest upgrade to its essential software for sophisticated PPC practitioners.

Complete with a new look and a raft of useful features, it is a welcome upgrade and marks the biggest improvement to the platform since version 11.0 launched in 2014.

Below, we have summarized everything you need to know about AdWords Editor 12 and also delved into what this update tells us about Google’s current and future strategy.

What is AdWords Editor?

AdWords Editor is a free, downloadable application that allows users to edit campaigns in minute detail outside of the AdWords platform. This provides more control over edits, along with the very significant ability to work on campaigns even when a user is offline.

Although it was originally released in 2006, the pace of improvement has relented a little of late. AdWords Editor 11.0 was released way back in 2014, bringing with it a raft of much-requested changes, like the ability to make bulk updates to multiple ad groups or campaigns at once.

We have seen helpful improvements since then all the way up to version 11.8, particularly the ability to connect up to five AdWords accounts to one email address, added late last year.

Nonetheless, we have been kept waiting until now for an update worthy of the version 12 moniker.

So, what’s new in AdWords Editor 12?

First impressions are, as is so often the case, guided by aesthetics. Editor has a new look that aligns it with the rest of Google’s product suite, which is a surprisingly late alteration for a company so committed to consistent cosmetics.

The importance of this contemporary mien is confirmed by Google’s own announcement, which led with: “AdWords Editor 12 offers a fresh look and new features.”

But let’s dig deeper and get to those “new features”, as there is a lot below the surface that is worthy of examination too.

Maximize conversions bidding support:

The ‘maximize conversions’ bidding option was released last month on the web version of AdWords, so this is hardly a surprising launch in version 12. Even so, it is still very welcome and provides the option for users to allow Google’s advanced machine learning technology to set bids automatically within real-time bidding auctions. This means advertisers can get as many of their defined ‘conversions’ as possible for their daily budget.

Available at the campaign level within Editor, maximize conversions is found within the ‘Bid strategy’ drop-down list:

Custom rules:

AdWords Editor now includes a host of custom rules, designed to ensure advertisers follow Google’s lengthy list of best practices. Editor will now let users know if their campaigns fall below Google’s standard before they are uploaded to AdWords. This is a pretty handy insight into what Google expects and wants to see from ad campaigns.

The rules included are listed in the screenshot below and, as the name suggests, there is plenty of room for customization.

New fields for responsive ads:

A slew of new, editable fields have been added for responsive ads, including logos, promotion text, price prefixes, and CTA text.

Increased multimedia UAC support:

Universal App Campaigns make great use of Google’s machine learning technology. Advertisers can upload their creative assets and Google automatically generates the most appropriate video or image to promote the app to users across its range of products, including the Google Display Network, Search, and YouTube. Support is now provided for up to 20 videos or images within AdWords Editor 12, a significant upgrade.

We can expect to see version 12.1 sometime very soon, so we should really view this as the beginning of a process rather than a finished product.

Evolution, not revolution

That said, there is still a sense that, for all its launch has been heralded, version 12.0 hasn’t delivered the newsworthy, paradigm-shifting features of its predecessor.

There are commonalities across the updates in Editor 12, nonetheless, and they are representative of Google’s wider business strategy.

The phrase “machine learning” invariably crops up in any Google update now and it appears in abundance in relation to the newest AdWords Editor. The application provides more control to advanced users, no doubt, with its customizable fields and filters.

This sense of control for account managers becomes ever more illusory, however, as the essential workings of the machine reside on Google’s side of the fence.

Universal App Campaigns and Maximize Conversions serve as ideal harbingers of a new, AI-led approach to bidding, targeting, and personalization. Google provides access to these features, for a price, which levels the playing field for a wider group of advertisers. The differentiating factor between these campaigns will likely come down to the human element, often led by the meticulous work done in AdWords Editor.

In that sense, this update is a very significant marker of where the industry stands in 2017. The opportunities to gain a competitive advantage through old-fashioned PPC expertise are more valuable than ever, as machine learning tightens its grip over all aspects of paid search, from account structure to creative delivery.

AdWords Editor 12 may not have introduced these notions, but it certainly serves to solidify them.


The 10 best Google Doodles of all time

Since 1998, Google has used its homepage to host an invariably inventive ‘doodle’.

The Google Doodle actually began its life as a humorous out-of-office message for the company’s co-founders, Sergey Brin and Larry Page. To let everyone know they had gone to the Burning Man festival, they placed the festival’s icon behind the second ‘o’ on their own company’s logo.

It is fitting that what has become a forum for sophisticated artistic and technical expression began life as a stick figure. We can trace the Doodle’s development over time from a simple stick man to an interactive multimedia hub that educates and entertains on a variety of subjects.

Google began experimenting with Doodles to mark historical events soon after the original Burning Man example and, such was its popularity, the Doodle became a daily fixture on the Google homepage.

Undoubtedly, Google has taken a few knocks recently. The record fine levied against it by the E.U. made global headlines, the Canadian government ruled that Google must de-index specific domains entirely, and its AI company DeepMind’s deal with the National Health Service in the UK has been ruled “illegal.”

That’s not the kind of damage a doodle can undo. These are important cases that raise probing questions for all of us.

Nonetheless, it is still worth reflecting on the positive side of Google’s contributions to society. That’s where the humble, charming Doodle comes in.

These sketches showcase Google at its best. They are a microcosm of the search giant’s philanthropic side, an insight into a company that (until recently) proudly held the mantra “Don’t be evil” at the core of its code of conduct.

A company with so much power over the public consciousness uses its homepage to highlight overlooked historical figures, educate the populace about important scientific theories, or just give us some really fun games to play.

For that, we should be grateful.

You can take a look through the expansive repository of over 2,000 Doodles here.

Within this article, we have selected just 10 of Google’s most amiable animations from through the years.

1. Claude Monet (Nov 14, 2001)

For the first few years of the Doodle’s existence, it tended to appear sporadically – often to mark national holidays. That all changed in 2001 with the depiction of the Google logo in an Impressionistic style to celebrate 161 years since the French painter Claude Monet’s birth.

The shimmering effect of light in the letters and the presence of waterlilies underneath serve as elegant echoes of Monet’s trademark style. Importantly, this marked a shift in direction – both thematically and aesthetically – for the Doodle.

Other noteworthy homages to artists include Wassily Kandinsky, Carlos Mérida, Gustav Klimt, and Frida Kahlo.

2. Harriet Tubman (Feb 1, 2014)

Harriet Tubman’s extraordinary life was celebrated by Google in February 2014. The Doodle features her image and a lamp, to highlight both her escape from slavery and her daring missions to rescue others from the same fate.

This feature is notable for a few reasons. In 2014, a study revealed the lack of diversity in Google’s Doodles. Although just a simple design on a search engine landing page, this was a clear reflection of the social impact Google can have. In fact, over half of all Doodles to this point were of white men.

Google took this seriously and did strike a 50/50 gender balance in 2014, giving increasing prominence to non-white historical figures too. There is a notable effort to provide a broader spectrum of historical events and figures within Google’s Doodles, beginning with Harriet Tubman.

3. Alexander Calder (July 22, 2011)

The sculptor Alexander Calder is known best as the inventor of the nursery mobile. These structures sway in the wind, changing form depending on the forces that come into contact with them.

This made Calder the perfect subject for the first Doodle to be constructed entirely using the HTML5 standard. Internet browsers had been incapable of rendering such a complex media format until this point, and this design required the work of a team of engineers, artists, and illustrators.

The Doodle, to mark what would have been Calder’s 113th birthday, lulls satisfyingly when a user clicks or hovers over its component parts.

This is therefore a particularly important piece of Doodle history, ushering in a new age of innovation and experimentation.

4. Charlie Chaplin (Apr 16, 2011)

To celebrate the 122nd anniversary of Charlie Chaplin’s birth, one of Google’s resident doodlers donned a moustache and hat to pay tribute to the great comic genius of the silent movie era.

This was the first live action Doodle and it really comes across as a labor of love from the Google team. Replete with heel clicking, cane waving, and bottom kicking, this two-minute black-and-white film is the perfect tribute to Chaplin.

It also marks the beginning of an era of ambitious Doodles that aren’t afraid to request the audience’s attention for longer than just a few seconds. As such, the Chaplin Doodle is an essential link between the stylized Google logos that were prevalent up to 2011 and the sprawling experiences that would come thereafter.

5. My Afrocentric Life (Mar 21, 2016)

Since 2009, Google has been running its Doodle 4 Google competition. The competition encourages elementary school kids (initially in the US, but this has now expanded internationally) to design a Doodle based on the people and issues that matter most to them.

Akilah Johnson was the US winner in 2016 with her entry, ‘My Afrocentric Life’, inspired by the Black Lives Matter movement. Chosen from over 100,000 student submissions, Johnson created the Doodle over the course of two weeks using pencils, crayons and markers.

This initiative is a great way for Google to communicate with a younger generation, and it also shows the company’s willingness to give voice to political messages.

6. Ludwig van Beethoven (Dec 17, 2015)

The greatest composer of all time was given the fitting honor of Google’s most engrossing and intricate classical music Doodle.

Created to celebrate the 245th anniversary of Beethoven’s baptism (his exact birthdate is unknown), this interactive game showcases events in the great artist’s life (both highs and lows), and invites us to piece together movements from his most famous works.

This Doodle makes the list for various reasons. It develops a sustained narrative and invites the viewer to interact. It also features some of the greatest art in European history.

But primarily, it takes what is sometimes seen as a difficult or impenetrable form of art and makes it accessible. This is an example of Google at its enlightening, playful best.

An honorable mention should also go to the Debussy Doodle in this category.

7. St Patrick’s Day (Mar 17, 2015)

Google has an illustrious history of producing Doodles to coincide with national holidays. Everywhere from America to Algeria to Australia has been given the Doodle treatment.

However, for sheer fun, the St Patrick’s Day iterations are hard to beat. 2015 was a vintage year, featuring a family of fiddle-playing clovers designed by Irish artist Eamon O’Neill.

What makes these Doodles special is Google’s commitment to celebrating such a wide range of holidays worldwide every year. For their brave use of color, the Holi festival animations are particularly worth a look.

8. International Women’s Day (Mar 8, 2017)

Google has been honoring International Women’s Day on its homepage for many years, but in 2017 it went the extra mile to provide a comprehensive look at 13 pioneers who have shaped our everyday lives.

What makes this most interesting is Google’s desire to go beyond the names we all already know, to give light to some unseen or hidden stories.

The slideshow gives prominence to Egypt’s first female pilot and Korea’s first female lawyer, for example. Moreover, it encourages us to do our own research to learn more about each person, instead of simply spoon-feeding us a few quick facts before we move on.

9. PAC-MAN (May 21, 2010)

The Pac-Man Doodle was a phenomenal success. It deserves an article of its own, really.

Said to have cost the economy $120 million in lost labor time, it tapped into our nostalgia for one of the most popular video games of all time.

Created for PAC-MAN’s 30th anniversary, the first-ever playable Doodle replicates the experience of the old arcade game.

It was initially launched for a two-day period, as Google expected it to surpass the popularity of your everyday Doodle. The fervent response was a little more than they had anticipated, however.

Luckily, you can still play the game here.

Also worthy of mention are the immensely popular Les Paul Doodle, which now has its own standalone page, and the Doodle Fruit Games, created for the 2016 Olympics.

10. Oskar Fischinger (Jun 22, 2017)

The most recent entry on our list – and perhaps the most expansive in its ambitions – was created to mark the birthday of filmmaker and visual artist Oskar Fischinger. He was fascinated by the links between music and vision, which he saw as inextricable.

Google’s interactive take on this is an immersive experience, opening with a quote from the artist before offering us the opportunity to create our own ‘visual music’ using a range of instruments.

The Fischinger Doodle is arresting, both visually and sonically. The perfect celebration of Fischinger’s work, in other words.

It is an enticing glimpse of the pleasant surprises we can all expect as we log onto Google every morning, as its Doodles grow ever more sophisticated, charming, and instructive.


5 Reasons Clients Fire Their SEO Agency (And How to Easily Avoid Them) by @clarkboyd

The relationship between SEO agencies and clients can be fragile. Here's why things break down and how to avoid them.

The post 5 Reasons Clients Fire Their SEO Agency (And How to Easily Avoid Them) by @clarkboyd appeared first on Search Engine Journal.


Everything you need to know about visual search (so far)

Visual search is one of the most complex and fiercely contested sectors of our industry. Earlier this month, Bing announced its new visual search mode, hot on the heels of similar developments from Pinterest and Google.

Ours is a culture mediated by images, so it stands to reason that visual search has assumed such importance for the world’s largest technology companies. The pace of progress is certainly quickening, but there is no clear visual search ‘winner’, nor will there be one soon.

The search industry has developed significantly over the past decade, through advances in personalization, natural language processing, and multimedia results. And yet, one could argue that the power of the image remains untapped.

This is not due to a lack of attention or investment. Quite the contrary, in fact. Cracking visual search will require a combination of technological nous, psychological insight, and neuroscientific know-how. This makes it a fascinating area of development, but also one that will not be mastered easily.

Therefore, in this article, we will begin with an outline of the visual search industry and the challenges it poses, before analyzing the recent progress made by Google, Microsoft and Pinterest.

What is visual search?

We all partake in visual search every day. Every time we need to locate our keys among a range of other items, for example, our brains are engaged in a visual search.

We learn to recognize certain targets and we can locate them within a busy landscape with increasing ease over time.

This is a trickier task for a computer, however.

Image search, in which a search engine takes a text-based query and tries to find the best visual match, is subtly distinct from modern visual search. Visual search can take an image as its ‘query’, rather than text. In order to perform an accurate visual search, search engines require much more sophisticated processes than they do for traditional image search.

Typically, as part of this process, deep neural networks are put through their paces in tests like the one below, with the hope that they will mimic the functioning of the human brain in identifying targets:

The decisions (or inherent ‘biases’, as they are known) that allow us to make sense of these patterns are more difficult to integrate into a machine. When processing an image, should a machine prioritize shape, color, or size? How does a person do this? Do we even know for sure, or do we only know the output?

As such, search engines still struggle to process images in the way we expect them to. We simply don’t understand our own biases well enough to be able to reproduce them in another system.

There has been a lot of progress in this field, nonetheless. Google image search has improved drastically in response to text queries, and other tools, such as TinEye, allow us to run reverse image searches. This is a useful feature, but its limits are self-evident.
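To make the idea concrete, a reverse image search boils down to computing a compact fingerprint for each image and comparing fingerprints. Below is a toy ‘average hash’ sketch in Python; the tiny pixel grids and thresholding scheme are purely illustrative, and real services such as TinEye use far more robust descriptors:

```python
# Toy 'average hash' fingerprint: each bit records whether a pixel is
# brighter than the image's mean brightness. Near-duplicate images
# produce near-identical hashes, so a small Hamming distance signals a match.

def average_hash(pixels):
    """pixels: a 2D list of grayscale values (an already-downscaled image)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Invented 2x2 'images' for illustration only.
original = [[10, 200], [220, 30]]
slightly_edited = [[12, 198], [215, 35]]  # e.g. a recompressed copy
unrelated = [[200, 10], [30, 220]]

print(hamming_distance(average_hash(original), average_hash(slightly_edited)))  # 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))        # 4
```

The limits the article mentions follow directly from this design: a hash like this finds copies and near-copies of the same picture, but it has no notion of what the picture depicts.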

For years, Facebook has been able to identify individuals in photos, in the same way a person would immediately recognize a friend’s face. This example is a closer approximation of the holy grail for visual search; however, it still falls short. In this instance, Facebook has set up its networks to search for faces, giving them a clear target.

At its zenith, online visual search allows us to use an image as an input and receive another, related image as an output. This would mean we could take a smartphone picture of a chair, for example, and have the technology return pictures of rugs that suit the style of the chair.

The typically ‘human’ process in the middle, where we would decipher the component parts of an image and decide what it is about, then conceptualize and categorize related items, is undertaken by deep neural networks. These networks are ‘unsupervised’, meaning that there is no human intervention as they alter their functioning based on feedback signals and work to deliver the desired output.
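The retrieval step at the end of that pipeline can be sketched in a few lines. This is a minimal illustration, assuming a (hypothetical) network has already reduced each image to an embedding vector; the vectors, catalog items, and names below are invented for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: close to 1.0
    means the underlying images were judged similar by the network."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query, catalog):
    """Return the catalog item whose embedding is closest to the query's."""
    return max(catalog, key=lambda item: cosine_similarity(query, item["embedding"]))

# Hypothetical embeddings for a retailer's rug catalog.
catalog = [
    {"name": "striped rug", "embedding": [0.9, 0.1, 0.2]},
    {"name": "mid-century rug", "embedding": [0.1, 0.9, 0.3]},
]

# Embedding a (hypothetical) network produced for the user's chair photo.
query_embedding = [0.15, 0.85, 0.25]
print(most_similar(query_embedding, catalog)["name"])  # mid-century rug
```

All of the hard work in a real system lives in producing those embeddings; once images share a common vector space, matching a chair photo to complementary rugs is a nearest-neighbor lookup like this one.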

The result can be mesmerising, as in the below interpretations of an image of Georges Seurat’s ‘A Sunday Afternoon on the Island of La Grande Jatte’ by Google’s neural networks:

This is just one approach to answering a delicate question, however.

There are no right or wrong answers in this field as it stands; simply more or less effective ones in a given context.

We should therefore assess the progress of a few technology giants to observe the significant strides they have made thus far, but also the obstacles left to overcome before visual search is truly mastered.

Bing visual search

In early June at TechCrunch 50, Microsoft announced that it would now allow users to “search by picture.”

This is notable for a number of reasons. First of all, although Bing image search has been present for quite some time, Microsoft actually removed its original visual search product in 2012. People simply hadn’t used it much since its 2009 launch, as it wasn’t accurate enough.

Furthermore, it would be fair to say that Microsoft is running a little behind in this race. Rival search engines and social media platforms have provided visual search functions for some time now.

As a result, it seems reasonable to surmise that Microsoft must have something compelling if they have chosen to re-enter the fray with such a public announcement. While it is not quite revolutionary, the new Bing visual search is still a useful tool that builds significantly on their image search product.

A Bing search for “kitchen decor ideas” which showcases Bing’s new visual search capabilities

What sets Bing visual search apart is the ability to search within images and then expand this out to related objects that might complement the user’s selection.

A user can select specific objects, home in on them, and purchase similar items if they desire. The opportunities for retailers are both obvious and plentiful.

It’s worth mentioning that Pinterest’s visual search has been able to do this for some time. But the important difference between Pinterest’s capability and Bing’s in this regard is that Pinterest can only redirect users to Pins that businesses have made available on Pinterest – and not all of them might be shoppable. Bing, on the other hand, can index a retailer’s website and use visual search to direct the user to it, with no extra effort required on the part of either party.

Powered by Silverlight technology, this should lead to a much more refined approach to searching through images. Microsoft provided the following visualisation of how their query processing system works for this product:

Microsoft combines this system with the structured data it owns to provide a much richer, more informative search experience. Although restricted to a few search categories, such as homeware, travel, and sports, we should expect to see this rolled out to more areas throughout the year.

The next step will be to automate parts of this process, so that the user no longer needs to draw a box to select objects. It is still some distance from delivering on the promise of perfect visual search, but these updates should at least see Microsoft eke out a few more sellable searches via Bing.

Google Lens

Google announced its Lens product at its I/O conference in May 2017. The aim of Lens is to turn your smartphone into a visual search engine.

Take a picture of almost anything and Google will tell you what the object is, along with any related entities. Point your smartphone at a restaurant, for example, and Google will tell you its name, whether your friends have visited it before, and highlight reviews for it too.

This is supplemented by Google’s enviable inventory of data, both from its own knowledge graph and from the consumer data it holds.

All of this data can fuel and refine Google’s deep neural networks, which are central to the effective functioning of its Lens product.

Google-owned company DeepMind is at the forefront of visual search innovation. As such, DeepMind is also particularly familiar with just how challenging this technology is to master.

The challenge is no longer just creating neural networks that can understand an image as effectively as a human. The bigger challenge (known in this field as the ‘black box problem’) is that the processes involved in arriving at conclusions are so complex, obscured, and multi-faceted that even Google’s engineers struggle to keep track of them.

This points to a rather poignant paradox at the heart of visual search and, more broadly, the use of deep neural networks. The aim is to mimic the functioning of the human brain; however, we still don’t really understand how the human brain works.

As a result, DeepMind has started to explore new methods. In a fascinating blog post, it summarized the findings of a recent paper in which it applied the inductive reasoning evident in human perception of images.

Drawing on the rich history of cognitive psychology (rich, at least, in comparison with the nascent field of neural networks), scientists were able to apply within their technology the same biases we apply as people when we classify items.

DeepMind uses the following prompt to illuminate its thinking:

“A field linguist has gone to visit a culture whose language is entirely different from our own. The linguist is trying to learn some words from a helpful native speaker, when a rabbit scurries by. The native speaker declares “gavagai”, and the linguist is left to infer the meaning of this new word. The linguist is faced with an abundance of possible inferences, including that “gavagai” refers to rabbits, animals, white things, that specific rabbit, or “undetached parts of rabbits”. There is an infinity of possible inferences to be made. How are people able to choose the correct one?”

Experiments in cognitive psychology have shown that we have a ‘shape bias’; that is to say, we give prominence to the fact that this is a rabbit, rather than focusing on its color or its broader classification as an animal. We are aware of all of these factors, but we choose shape as the most important criterion.

“Gavagai” Credit: Misha Shiyanov/Shutterstock

DeepMind is one of the most essential components of Google’s development into an ‘AI-first’ company, so we can expect findings like the above to be incorporated into visual search in the near future. When they do, we shouldn’t rule out the launch of Google Glass 2.0 or something similar.

Pinterest Lens

Pinterest aims to establish itself as the go-to search engine when you don’t have the words to describe what you are looking for.

The launch of its Lens product in March this year was a real statement of intent and Pinterest has made a number of senior hires from Google’s image search teams to fuel development.

In combination with its establishment of a paid search product and features like ‘Shop the Look’, there is a growing consensus that Pinterest could become a real marketing contender. Along with Amazon, it should benefit from advertisers’ thirst for more options beyond Google and Facebook.

Pinterest president Tim Kendall noted recently at TechCrunch Disrupt: “We’re starting to be able to segue into differentiation and build things that other people can’t. Or they could build it, but because of the nature of the products, this would make less sense.”

This drives at the heart of the matter. Pinterest users come to the site for something different, which allows Pinterest to build different products for them. While Google fights a war on numerous fronts, Pinterest can focus on improving its visual search offering.

Admittedly, it remains a work in progress, but Pinterest Lens is the most advanced visual search tool available at the moment. Using a smartphone, a Pinner (as the site’s users are known) can take a picture within the app and have it processed with a high degree of accuracy by Pinterest’s technology.

The results are quite effective for items of clothing and homeware, although there is still a long way to go before we use Pinterest as our personal stylist. As a tantalising glimpse of the future, however, Pinterest Lens is a welcome and impressive development.

The next step is to monetize this, which is exactly what Pinterest plans to do. Visual search will become part of its paid advertising package, a fact that will no doubt appeal to retailers keen to move beyond keyword targeting and social media prospecting.

We may still be years from declaring a winner in the battle for visual search supremacy, but it is clear to see that the victor will claim significant spoils.


Google fined $2.7 billion by E.U. in anti-trust ruling

Google has been fined a record $2.7 billion for a breach of E.U. anti-trust rules.

The search giant was charged with giving “illegal advantages” to another Google product within search results in a case that started more than seven years ago. The case relates specifically to Google Shopping, Google’s increasingly profitable shopping comparison engine.

This fine dwarfs the previous record fine for the abuse of a monopoly, doled out to Intel in 2009.

The E.U. commission arrived at the figure by taking a percentage of Google’s revenue from its Shopping product across the 13 European countries in question since 2008.

Should Google fail to comply with the terms set by the E.U. within 90 days, it faces penalty payments of up to 5 percent of the average daily turnover of its parent company, Alphabet.

“What Google has done is illegal under EU antitrust rules. It denied other companies the chance to compete on their merits and to innovate. And most importantly, it denied European consumers a genuine choice of services and the full benefits of innovation,” stated Margrethe Vestager, the E.U. competition commissioner.

The wider implications of this ruling

The bigger questions now surround the precedent that this sets. There is a general consensus that the industry requires independent regulation, but that will be a lot trickier than it seems. Google would be loath to reveal its closely guarded algorithms.

Moreover, we are moving into an era in which even Google may start to lose full visibility into the inner workings of its own products.

With Google – and all of its main competitors – moving their focus towards unsupervised machine learning algorithms, how exactly will they comply with these regulations? It may become impossible to prove the non-existence of bias in such a complex system in constant flux.

The likes of Facebook and Amazon will surely see this as the E.U. making an example of Google. However, they may have cause for concern too.

Google’s position as a search engine sets it apart, as consumers trust that its results are ranked on quality. A 2014 study in India demonstrated the persuasive power that Google holds, a power it is adjudged to have abused to the detriment of European consumers.

Facebook and, in particular, Amazon, strive to dominate the e-commerce advertising market. Any potential abuses of their increasingly strong positions will be watched very closely, by both the E.U. and Google.

Although companies like Amazon operate on different business models from Google’s, they are still moving towards a ‘machine learning first’ approach and will want to solidify their dominant position as the number one online shopping destination.

With the E.U. taking such a firm stance now, it seems unlikely that regulators will simply accept companies’ assurances that their algorithms make unbiased decisions.

What happens next?

Google has the right to appeal, which could extend the case by another 5 to 10 years. Intel, for example, is still fighting its fine from 2009 in European courts. However, even if Google should choose to appeal, it will still need to provide proof that it has changed its business practices in line with the court’s ruling within 90 days.

Google remains under investigation by the E.U. for giving similar advantages to two other Alphabet products, Android and AdSense.


For more on Google vs. the EU, check out our previous news story: When is a search engine not a search engine? When it’s Google, says the EU


Why (And How) to Buy Twitter Followers by @clarkboyd

This guide will explore the pros and cons of why and how to buy both fake and real Twitter followers.

The post Why (And How) to Buy Twitter Followers by @clarkboyd appeared first on Search Engine Journal.


10 Pinterest SEO Tips That Will Set You up for Success by @clarkboyd

These 10 Pinterest SEO tips will help you increase engagement, traffic, and search visibility.

The post 10 Pinterest SEO Tips That Will Set You up for Success by @clarkboyd appeared first on Search Engine Journal.