Thursday, March 14, 2019

google search

Where Google got more inventory to show Responsive Search Ads ads may surprise you - Search Engine Land

Posted: 14 Mar 2019 09:14 AM PDT

Advertisers know that to be effective with Google search ads they must stand out in a competitive environment, and that often means having ads that qualify for the coveted positions above the search results. And since PPC is hardly a secret anymore, there are always more advertisers willing to compete for those few highly desirable ad slots. So when we hear that a new Google Ads feature, like Responsive Search Ads (RSAs), lets us show our ads on new inventory, that gets our attention. So where exactly is Google getting this new inventory that it says RSAs may qualify for? And what is the right way to evaluate the performance of this new inventory? Let's take a look.

Ad tests need to limit the variables

When we do A/B ad tests through our tools in Optmyzr (my company), we firmly believe that we should compare apples to apples. In a perfect world, we would be able to replicate the exact same conditions for a search multiple times and show a different ad variant every time to get data about which ad drives the best results, whether that result is the best CTR, conversion rate, or conversions per impression (a combination of the previous two metrics).
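
The combined metric mentioned above, conversions per impression, is simply CTR multiplied by conversion rate. A minimal illustration (the numbers are hypothetical, not from any real account):

```python
def conversions_per_impression(ctr, conversion_rate):
    # CTR = clicks / impressions; conversion rate = conversions / clicks.
    # Their product is conversions / impressions.
    return ctr * conversion_rate

# A 5% CTR with a 10% conversion rate yields about 0.005 conversions
# per impression, i.e., roughly 5 conversions per 1,000 impressions.
rate = conversions_per_impression(0.05, 0.10)
```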

But we don't live in a perfect world, so we have to sacrifice a bit of precision and try to limit the variables of our tests as much as possible. In Google Ads, that is really, really difficult because there are many factors that change and that we can't always control for. In some cases, the best we can do is to compare similar ad formats (e.g., expanded text ads) within one ad group targeting the same audience. While that may sound like an apples-to-apples comparison, it often isn't, because the ad group contains different keywords that match an even wider range of queries, and the ads are shown to entirely different people.

In some cases, we will compare different ad formats against one another but that only makes sense if those ads were competing for the same ad slot. For example, it's a fair test when an ad slot on the display network could have accepted either a display ad or a text ad. But it's not a fair test when some of that inventory could only have shown text ads and others only display ads. That's an extra variable that muddles the results.

RSAs should be evaluated on incrementality

With RSAs, a simple comparison of the performance of the RSA to the ETAs in the same ad group is a flawed test because Google says the RSAs have access to inventory where the ETA could not have appeared. Google's Matt Lawson says "There are all sorts of instances where you might end up serving impressions in a low CTR placement that you would never have qualified for before." He's not talking purely about Google showing ads in places that didn't show ads before the arrival of RSAs. He's also talking about showing ads on searches where your static expanded text ads (ETAs) may previously have had too low an Ad Rank to garner an impression. I've spoken with other Googlers to confirm that at least some of this new inventory is not new at all. It's inventory that's always been there, but where we may not have had good enough ads to make it available to us.

So given that the inventory is sometimes different, what we need to measure is whether the incremental volume driven by the RSA justifies keeping those ads active. Andy Taylor makes the point that only Google can measure the incrementality of new ad formats like RSAs in his post describing why click-through and conversion rates matter less than before. He says this is the case "since there's no way for advertisers to quantify ad placements that are off limits to ETAs for whatever reason." But if we can define some of these new placements, I would argue that we can measure the performance.

How to find the new inventory RSAs gave access to

The new placements are search terms for which you previously didn't qualify. Which ones exactly? Run a search terms report for the periods right before and right after you enabled RSAs in an ad group. Finding the date when you first started RSAs is easy enough with the Change History.

Use the Change History in Google Ads to find the date when you first added responsive search ads to an ad group. Screenshot from Google.

Then see if there are entirely new queries for which there were no impressions before and some impressions after. While there certainly can be other reasons for these queries to have started showing your ads, like bid changes, algorithm changes by Google and changes in the competitive landscape, the addition of an RSA ad unit is also a plausible reason for the new inventory opening up to you.
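
Conceptually, this comparison is a set difference between the two reports. A minimal sketch in Python, using made-up rows in place of an actual report export (the column names are illustrative; match them to your export):

```python
def queries_with_impressions(rows):
    """Return the set of queries that received at least one impression."""
    return {r["Query"] for r in rows if int(r["Impressions"]) > 0}

# Hypothetical search terms data for the periods before and after RSAs were added.
before_rsa = [
    {"Query": "buy widgets", "Impressions": "120"},
    {"Query": "widget store", "Impressions": "45"},
]
after_rsa = [
    {"Query": "buy widgets", "Impressions": "130"},
    {"Query": "widget store", "Impressions": "50"},
    {"Query": "cheap widget deals", "Impressions": "90"},  # only appears after RSAs
]

# Queries with impressions only after the RSA went live: candidate new inventory.
new_inventory = queries_with_impressions(after_rsa) - queries_with_impressions(before_rsa)
```

Remember the caveat above: a query appearing in this set is only a candidate; bid changes or competitive shifts could also explain it.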

Use the date range comparison on the Search Terms tab to find queries that only started showing impressions after responsive search ads were added to an ad group. Screenshot from Google.

Finding new inventory long after RSAs started

RSAs use machine learning to match the best ad variation with each search. As the system learns, it may turn off inventory that performs poorly and turn on inventory that appears promising. So it's worthwhile to keep an eye on this new inventory on an ongoing basis. You can get this data by looking at ad performance segmented by query. Surprised at this recommendation because you've never seen a search term segment on the ads performance page in the Google Ads front-end (AWFE)? You're right, that data is not there, but if you look beyond what's in the Google Ads interface and delve into scripts and Ads API reports, it's available.

As I've covered in previous posts, there are over 40 Google Ads reports available through the API, and they're chock full of details you simply won't see in AWFE.

First, download an Ad Performance Report and include the Id and AdType fields.

The Ad Performance Report can be downloaded using scripts or the API. Screenshot from Google.

Then download the Search Query Performance Report and include Query, CreativeId, and any metrics you want to check, such as Impressions, Clicks and Conversions.
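
For reference, the two reports can be requested with AWQL queries along these lines. This is only a sketch: the actual download call depends on your client library or script environment, so just the query strings are shown, and you should adjust fields and date ranges to your needs.

```python
# AWQL query strings for the two AdWords API reports described above.
AD_PERFORMANCE_QUERY = """
    SELECT Id, AdType
    FROM AD_PERFORMANCE_REPORT
    DURING LAST_30_DAYS
"""

SEARCH_QUERY_PERFORMANCE_QUERY = """
    SELECT Query, CreativeId, Impressions, Clicks, Conversions
    FROM SEARCH_QUERY_PERFORMANCE_REPORT
    DURING LAST_30_DAYS
"""
```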

The Search Query Performance report contains a reference to the CreativeId which can be used to connect search term performance with the ad that was shown for a particular search term. Screenshot from Google.

Finally, use spreadsheets to do a vlookup between the two sets of data so that next to each unique query-and-ad combination, you can see whether the ad was an ETA or an RSA.

Use the vlookup function in spreadsheets to connect the ad data with the search term data. Screenshot from Google.

Then sort the resulting sheet by query and you'll start to see when a particular query has shown with both RSA and ETA ads. You'll also see queries that have shown ONLY for RSA ads.
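
The vlookup-and-sort step can also be done in a few lines of Python. This sketch uses invented sample rows in place of the downloaded reports to show the join and the RSA-only filter:

```python
from collections import defaultdict

# Hypothetical rows standing in for the two downloaded reports.
ad_report = [
    {"Id": "111", "AdType": "Expanded text ad"},
    {"Id": "222", "AdType": "Responsive search ad"},
]
query_report = [
    {"Query": "buy widgets", "CreativeId": "111", "Impressions": 120},
    {"Query": "buy widgets", "CreativeId": "222", "Impressions": 340},
    {"Query": "cheap widget deals", "CreativeId": "222", "Impressions": 90},
]

# The "vlookup": map each CreativeId to its ad type, then annotate every query row.
ad_type_by_id = {ad["Id"]: ad["AdType"] for ad in ad_report}
for row in query_report:
    row["AdType"] = ad_type_by_id.get(row["CreativeId"], "Unknown")

# Group ad types by query; queries served only by RSAs are candidate new inventory.
types_by_query = defaultdict(set)
for row in query_report:
    types_by_query[row["Query"]].add(row["AdType"])

rsa_only_queries = sorted(
    q for q, types in types_by_query.items() if types == {"Responsive search ad"}
)
```

Here "buy widgets" showed with both formats (a traditional A/B comparison is fair), while "cheap widget deals" showed only with the RSA and should be judged on incrementality.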

Rows highlighted in blue show search terms that only showed with RSAs, new inventory that should be judged on its incremental nature. Rows in green showed ads with multiple ad formats and can be compared in a more traditional A/B test. Screenshot from Google.

With these two techniques, you'll start to get a sense of the incrementality Google is referring to.

Why didn't my old ads qualify for these search terms?

So why do RSAs open up new search term inventory in the first place? Remember that Google Ads is an auction where the order of the ads is determined by Ad Rank, which is a function of the CPC bid, Quality Score, and other factors that impact expected CTR, such as the use of ad extensions.

Every time a search happens, a new auction is run. While an advertiser's CPC bid may not change from one auction to the next, the QS can change dramatically based on Google's machine learning Quality Score system, whose job it is to predict the likelihood of an ad getting a click (driving CTR). That likelihood is significantly impacted by how the query relates to the ad. And when the QS system is limited to a handful of static ETA ads, it may not be able to pick one that is good enough to have the QS necessary to get the ad to show on the page. But when the ad group contains an RSA, the system can try to find a combination of components that will have a high QS for that particular search. And when it succeeds at that, the ad is all of a sudden eligible to participate in more auctions, hence getting access to new inventory. So it's not so much that Google has unlocked some new inventory that previously didn't exist. Instead, machine learning has helped figure out a way to create an ad that is relevant enough to access inventory that's always been there.
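
As a toy model (deliberately simplified; the real Ad Rank formula involves more factors, and the eligibility threshold here is an invented number), the effect looks like this:

```python
# Toy model: Ad Rank as bid times quality, with a hypothetical eligibility threshold.
def ad_rank(cpc_bid, quality_score):
    return cpc_bid * quality_score

AD_RANK_THRESHOLD = 10.0  # invented for illustration; Google doesn't publish this

# Same bid, same auction; the RSA assembles a more relevant ad and earns a higher QS.
eta_eligible = ad_rank(2.0, 4) >= AD_RANK_THRESHOLD  # 8.0  -> ad does not show
rsa_eligible = ad_rank(2.0, 7) >= AD_RANK_THRESHOLD  # 14.0 -> ad shows: "new" inventory
```

The inventory was always there; the RSA simply cleared the bar the static ad could not.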

Machine learning needs a teacher; it needs you!

Now some advertisers say that RSAs don't perform as well as ETAs. As Andy, Matt, and I have already pointed out, that may be a finding based on incomplete information because it may ignore the fact that the different formats trigger ads for entirely different queries. But what if you've accounted for that and they do perform worse? That sounds like an optimization opportunity rather than a reason to pause RSAs.

Help the machine learn to do better rather than just turning off the feature. That may not happen, though, because humans are funny about how they treat technology. Automotive researchers found that people tend to be quite forgiving of mistakes made by other humans who are learning something. If your 16-year-old drives poorly, you give advice and trust that they will learn from it and get better with experience. When a self-driving car, on the other hand, makes the same mistake as your 16-year-old, humans tend to chalk it up to bad software, turn off the self-driving feature and continue to drive manually.

Many of the exciting automation features we're seeing in PPC these days are driven by machine learning, and as the name implies, some learning needs to happen before results get good enough. How quickly a learning system becomes good depends on having a good teacher. And even with the best teacher, an algorithm needs time to first train itself and later update what it has learned as it starts to see the real-life results of its decisions.

RSAs can only be as good as the ad components we've provided. Google has guidelines for what constitutes good components, and I've provided a few scripts to mine your historical ads for good ad components and n-grams. Once the ad unit has great text to experiment with, give it some time to do the experiments.

Just like broad match will eventually stop showing ads for related queries that fail to turn impressions into clicks, so will RSAs. Google makes no money from impressions; they make money from clicks. And Google is pretty strict about not showing things that seem irrelevant, i.e., RSA variations that never get clicked because these are a waste of space and a detriment to Google's position as the provider of useful answers.


It's easy to get carried away with anecdotes heard from other advertisers and decide that maybe RSAs don't work all that well. I hope that the more you understand how to properly measure them, and the technology that will improve their performance, the better positioned you will be to make an informed decision about where RSAs fit into your strategy for managing successful PPC accounts in a world that is ever more driven by automation. And if you'd like to learn more about the role of humans in digital marketing in an AI world, check out my first book.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

About The Author

Frederick ("Fred") Vallaeys was one of the first 500 employees at Google where he spent 10 years building AdWords and teaching advertisers how to get the most out of it as the Google AdWords Evangelist. Today he is the Cofounder of Optmyzr, an AdWords tool company focused on unique data insights, One-Click Optimizations™, advanced reporting to make account management more efficient, and Enhanced Scripts™ for AdWords. He stays up-to-speed with best practices through his work with SalesX, a search marketing agency focused on turning clicks into revenue. He is a frequent guest speaker at events where he inspires organizations to be more innovative and become better online marketers.

Google Florida Update 2: What’s Changed? Early Insights & Reaction - Search Engine Journal

Posted: 14 Mar 2019 11:48 AM PDT

Reaction and feedback to Google's Update Florida 2 is generally more upbeat than negative. A common theme among many in the SEO community is that Update Florida 2 is behaving like a rollback of previous updates.

Interestingly, some publishers noted connections between links and their rankings. In the week leading up to this update I had noticed numerous reports of aggressive spidering by GoogleBot. I don't know if there's a connection between aggressive spidering, links and the update, but it was interesting to see in the lead-up to the update.

Brett Tabke On Update Florida 2

Brett Tabke is the founder of WebmasterWorld marketing forum and of PubCon Search, Social Media, Marketing Conferences. Brett is someone I regard as one of the founders of modern SEO. Many SEO practices we take for granted originated through ideas Brett suggested on WebmasterWorld. The acronym SERPs was invented on WebmasterWorld.

It was Brett who had received advance notice from Google about this update during the recent Pubcon Florida conference. I asked Brett what he thought about this update so far:

"Well, as you can tell from the Google forum, we are a little too early to tell.

Originally guidance from Google was this was going to be one of the largest updates we've seen in "a very long time" (their words). So far, it doesn't appear that is the case.

In fact, I think we may be seeing a rollback of a few of the last updates."

Brett's insight that the update seems like a rollback is a well-stated observation. A common theme among many publishers is how their sites are bouncing back from previous algorithm updates.

SEMrush on Florida Update 2

I reached out to SEMrush to see if they could share any data or early findings about Florida 2. Here is their response:

Right away, I can tell you the basic things that we see in Sensor about Florida 2:

1. Overall volatility levels are not significantly higher than run-of-the-mill unannounced Google Updates
2. Update patterns are almost the same on Desktop vs Mobile
3. All countries that we track are affected, but Germany and especially France and Italy seem to lag a day behind
4. Most affected categories so far are Autos&Vehicles, Health and Pets&Animals

WebmasterWorld Discusses Update Florida 2

In a discussion on WebmasterWorld, members were reporting more gains than losses. Brett Tabke, founder of WebmasterWorld and the PubCon SEO conference, posted:

"Traffic on several sites that were abused by Penguin have rebounded significantly. …I'd given them up for dead. Then noticed affiliate earnings on them this morning for the first time in a year. Traffic was way up. (400 uniques a day to 2500)"

Another WebmasterWorld member immediately responded with a similar anecdote of a formerly penalized site that has also returned to ranking again.

Screenshot of a WebmasterWorld post: a forum member commented on how their formerly penalized website has returned to the search results.

Then further down in the discussion, another member stated:

"…there is definitely an anchor text link element to the changes today. I have a huge amount of exact match links thanks to the copy & paste box I've had on my site since about 2002 that used to contain an exact match link. All my well established pages that have exact match links are getting trashed, but the new pages I put up a few months ago that have hardly any links yet are doing fine and even gaining."

Black Hat Reaction

Comments from a private black hat Facebook group note that sites powered by automated link building were trending downward. Several comments made directly to me and in the private Facebook groups noted that sites with heavy anchor text optimization are losing in the SERPs.

Anchor text is the clickable words used to link to another site; typical anchor text is "click here." Anchor text optimization means building links with specific words as the anchor, with the goal of influencing search engines.

I would caution that correlations between anchor text optimization and ranking changes are unreliable. While it may seem like the anchor text optimization was the cause, the more likely issue is the quality of the site where the link was obtained.

The low quality of a link can have a delayed reaction from Google. What typically happens is that the link works for a time, then stops working as Google devalues its power.

An over-optimization manual action results in a warning in Google Search Console. Everything else is usually a devaluation of the link.

In the black hat forums, the reaction was more upbeat than down. More black hat forum members were posting gains than losses.

Florida Update 2 Summary

As Brett Tabke observed, it's still early to reach conclusions. But it's helpful to get a snapshot of where the digital publishing community stands today in reaction to Google's update.

Google has in the past revealed details of major changes to their algorithm. If Update Florida 2 is indeed a major change, as was telegraphed to Brett Tabke, then I expect Google at some point in the near future to reveal some general details of what changed.

Images by Shutterstock, Modified by Author
Screenshots by Author, Modified by Author

Google released a broad core search algorithm on March 12 - Search Engine Land

Posted: 13 Mar 2019 07:59 AM PDT

Google on Wednesday confirmed that it released a broad core search ranking algorithm update this week.

"Our guidance about such updates remains as we've covered before," the company said.

Google Search Liaison Danny Sullivan confirmed this update started March 12.

Why it matters. Google does several core ranking updates per year but confirms very few of them. Specific to broad core updates, Google has said numerous times that you cannot do anything specific to fix your rankings. Google's previous advice is, "there's no 'fix' for pages that may perform less well other than to remain focused on building great content. Over time, it may be that your content may rise relative to other pages."

If your rankings did change recently, it may have been related to this broad core ranking update and not necessarily related to a technical change you made on your website.

What changed? It is still very early, and it is hard to guess what has changed. Based on the SEO chatter around this update, prior to Google confirming it, some are saying it was again targeting the health/medical space. But Google has said there was no specific targeting of medical or health sites with the August 1 update either.

It is hard to know which types of sites were impacted the most right now. We will continue to monitor the situation and keep you updated on any insights we see related to this update.

Google's previous advice. Google has previously shared this advice around broad core algorithm updates:

"Each day, Google usually releases one or more changes designed to improve our results. Some are focused around specific improvements. Some are broad changes. Last week, we released a broad core algorithm update. We do these routinely several times per year.

As with any update, some sites may note drops or gains. There's nothing wrong with pages that may now perform less well. Instead, it's that changes to our systems are benefiting pages that were previously under-rewarded.

There's no "fix" for pages that may perform less well other than to remain focused on building great content. Over time, it may be that your content may rise relative to other pages."

To see more advice from Google around Google updates, see this Twitter thread.

About The Author

Barry Schwartz is Search Engine Land's News Editor and owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.

Goodbye green? Google testing black “Ad” label in Search - Search Engine Land

Posted: 13 Mar 2019 10:48 AM PDT

Google found testing a black "Ad" label. Screen shot: @TheBigMarketer

It's been a little over two years since Google last played with the "Ad" label it shows next to text ads in the search results. Since February 2017, Google has used green text and a green border to delineate text ads from organic listings. Now, it looks to be testing a simpler, more subtle ad label treatment.

The latest label test. On Wednesday, UK-based marketing consultant Darren Taylor, who runs The Big Marketer, spotted "Ad" labels with bolded black text and no border. The label appears at the top of the ad, with the display URL appearing next to it.

Others in the EU region have also spotted it.

We reached out for comment. "We're always testing new ways to improve our experience for our users and advertisers, but don't have anything specific to announce right now," a Google spokesperson said.

Why you should care. Presumably, greater differentiation between ads and organic results would have a negative impact on ad click-through rates. Google doesn't share this data, of course, but it has a long history of modifying the way it distinguishes ads from organic content.

In 2013, the Federal Trade Commission (FTC) called on search engines to more clearly label ads. It suggested prominent shading, a prominent border or both. It was that year that Google moved away from background shading and introduced an "Ad" label with a yellow background.

This latest test does away with the green, and, perhaps more significantly, moves the label up above the body of the ad. Google has placed the label below the ad headline since 2013. We aren't ready to update our visual history of Google's ad labeling just yet, but will be keeping an eye on this.

About The Author

Ginny Marvin is Third Door Media's Editor-in-Chief, managing day-to-day editorial operations across all of our publications. Ginny writes about paid online marketing topics including paid search, paid social, display and retargeting for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, she has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.

What’s going on with In-depth Articles on Google? - Search Engine Land

Posted: 13 Mar 2019 05:42 AM PDT

Last week there were numerous reports that Google has done away with highlighting In-depth Articles in its search results. We asked Google about it.

What Google said. "When relevant, we do and will continue to surface high-quality evergreen content as part of the overall search results," a Google spokesperson told us.

Google said that the coding used in association with that kind of content changed, and that is likely the reason why the tracking tools show a drop in In-depth Articles.

What are In-depth Articles? In-depth Articles launched in 2013 as a way to highlight longer-form content from what appear to be recognized, higher-quality sources. At launch, Google showed this content in a section labeled "In-depth articles" in the search results. In 2015, Google removed the label and accompanying thumbnail images from the interface.

What the tracking tools show. The folks at Moz reported that In-depth Articles stopped showing up in the search results completely:

FiveBlocks, another company that tracks these, also confirmed they went away:

Ari Roth from FiveBlocks shared data from its IMPACT tool indicating the drop-off started March 6.

This wouldn't be the first time Google had an issue showing in-depth articles. A couple of years ago they went missing for 17 days.

Did they go away? We don't track in-depth articles carefully, but when I search for topics like [Mercury] I do see some examples of detailed, long-form, evergreen content. Some even show up from our example screenshot back in 2015.

But Roth sent a screenshot of a search for [Amazon] from a week ago. The organic results had included in-depth articles that are no longer showing in Google search results.

Here is that screenshot of the original results (click to enlarge) showing articles from The Verge, Wired and Gizmodo.

The current results instead include links to Amazon's social media pages and its own sites.

Why you should care. For publishers, this could of course mean a drop in organic traffic from Google if it is no longer showing your in-depth or evergreen content as often or as prominently as it once did for certain queries.

For others, such as big brands like Amazon that attract press coverage, it could mean more traffic to their own properties and their social networks.

And, depending on the coverage that ranked, it could help some in the brand reputation department.

Take those Amazon headlines that had been ranking, for example: "Dirty dealing in the $175 billion Amazon marketplace," "Why it's hard to escape Amazon's long reach," and "I Tried to Block Amazon From My Life. It Was Impossible."

Or the Rolling Stone article that had been ranking for [Bank of America] headlined "Bank of America: Too Crooked to Fail."

Have you seen any impact or changes in In-depth Articles? Let us know on Twitter.

About The Author

Barry Schwartz is Search Engine Land's News Editor and owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.
