
Introducing Scrape Rate – A New Link Metric

by Jon Cooper

Drop what you’re doing and go to Boing Boing. Go to the archives page and select three different posts that were published at least a month ago. Go to Google and type in intitle:"post title" (obviously, replace post title with the actual post title, and keep the quotes). Do this for each post, add the number of results together, divide by 3, and you have now calculated the scrape rate for Boing Boing.

After doing a quick check, I calculated Boing Boing’s scrape rate as 40.33. Your number will vary based on which 3 posts you check, but it should give you an overall feel for how often Boing Boing’s content is scraped.
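If you’d rather script the check than do it by hand, here’s a minimal sketch of the arithmetic in Python. The build_query helper and the result counts below are placeholders for illustration: Google doesn’t offer a free official endpoint for result counts, so in practice you’d paste the queries in manually or wire this up to whatever search API you have access to.

```python
def build_query(post_title):
    """Exact-title query as described above: intitle:"post title" (keep the quotes)."""
    return 'intitle:"{}"'.format(post_title)


def scrape_rate(result_counts):
    """Average the number of results across the sampled posts."""
    return sum(result_counts) / float(len(result_counts))


# Three month-old post titles (placeholders) and the result counts you'd
# record after running each query in Google. The counts are made up purely
# to show the arithmetic.
titles = ["Post title one", "Post title two", "Post title three"]
queries = [build_query(t) for t in titles]  # paste each of these into Google
counts = [52, 33, 36]

print(scrape_rate(counts))  # ~40.33, roughly the Boing Boing figure above
```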

 

Scrape rate is a guest blogger’s best friend. For those who guest blog, the links you get in the post itself only go so far. Having the content scraped, even though the copies are no longer original content, gives you more link equity.

Imagine if you wrote a guest post on an average blog that was scraped 100 different times. Now compare that link power to a guest post written on a more authoritative blog that only gets scraped once or twice. While the original content in the second option yields more quality & trust, you can’t beat the quantity of links the first option provides. Argue all you want, but in terms of link building, having your content scraped like that (as long as the links are intact) trumps the quality of the original source in most cases.

Here’s a good real-life example. Go to Open Site Explorer (OSE) and paste in the URL of a recent blog post from the SEOmoz blog. A lot of the posts don’t get an overwhelming number of high-quality links, save a few successful ones, so the majority of the link power is coming from the content being scraped. The result? Most of the posts have a page authority of 60 or greater.

Note: I know a lot of that authority comes from the site it’s hosted on & the internal linking, but the point I’m making is that, all else being equal, scraped links can provide authority when found in great numbers.

When I guest posted on the SEOmoz blog in October, I had a targeted anchor text link back to a page I was trying to rank for a certain keyword. After the post was live for a few days, I saw no change in the SERPs, but after a week or two, my content was scraped by roughly 30 sites, and I saw an immediate jump in the SERPs. I went from not even being in the top 50 for that keyword to the second page.

Why is scrape rate important?

If you’re guest blogging on a regular basis, you need to make sure you do your research. Guest blogging is more than taking an hour of your time to write up a post and throwing it at any blogger that’s willing to publish it; it’s about finding what resonates with the audience, interacting with the readers (e.g., via comments), and getting the most bang for your buck in terms of links. That’s where scrape rate comes in.

I’m not sold on sorting guest blogging prospects solely by domain authority and PageRank. Take it a step further. Go to Google and calculate the scrape rate (if someone creates a tool that does this automatically, let me know; I’ll happily send a few links your way). The best part about the metric is that some previously overlooked blogs that don’t get pitched as much for guest posts might actually be the ones that provide the most link power.

While everyone else in your niche is struggling to put together a post that gets published on blog X, you’re putting together a post for blog Y that you know has its content scraped and, in the end, passes more link juice.
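To make that concrete, here’s a rough sketch of how scrape rate could sit alongside a traditional metric like domain authority when ranking prospects. The blogs, numbers, and weighting are all hypothetical; the point is only that a heavily scraped blog Y can outscore a higher-authority blog X.

```python
# Hypothetical prospects: (blog, domain authority, scrape rate).
prospects = [
    ("blog-x.com", 75, 1.5),   # higher authority, rarely scraped
    ("blog-y.com", 48, 38.0),  # middling authority, heavily scraped
    ("blog-z.com", 60, 6.0),
]

def score(domain_authority, scrape_rate, weight=2.0):
    """Made-up blend: authority plus a weighted bonus for getting scraped."""
    return domain_authority + weight * scrape_rate

for blog, da, sr in sorted(prospects, key=lambda p: score(p[1], p[2]), reverse=True):
    print("{}: DA {}, scrape rate {}, score {:.1f}".format(blog, da, sr, score(da, sr)))
```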

The idea isn’t perfect, because a lot of blogs don’t get scraped at all, but this metric can help you identify, as I said, a few overlooked blogs that others have missed.

I’m not wedded to the idea of finding 3 average month-old posts, counting the number of times each was scraped, and dividing by three, but I think it’s fairly accurate. Here’s why:

  • If you just calculated it by looking at one post, it could skew the results. That’s just basic statistics.
  • Using posts that are at least a month old means they’ve been given enough time to be scraped. That’s the problem with just checking the most recent post on the blog: some sites might syndicate it, but it can take a week or so after it’s published for that to happen.
  • Using the intitle search yields pinpoint accuracy, but only if the title is unique. One problem I’ve run into when doing this is that some results are from Friendfeed, Tweetmeme, and other similar social sites; I count them because it’s not worth the hassle of individually filtering them out (though there’s a quick sketch of how you could automate that right after this list).
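If filtering those social sites out ever does become worth the hassle, one way to automate it (not something I bother with myself) is to bake -site: exclusions into the query. The domains below are just examples of the kind of aggregators that show up.

```python
# Optional refinement: exclude a few known aggregators with -site: operators.
NOISE_DOMAINS = ["friendfeed.com", "tweetmeme.com"]

def build_query(post_title, exclude=NOISE_DOMAINS):
    query = 'intitle:"{}"'.format(post_title)
    for domain in exclude:
        query += " -site:{}".format(domain)
    return query

print(build_query("Introducing Scrape Rate - A New Link Metric"))
# intitle:"Introducing Scrape Rate - A New Link Metric" -site:friendfeed.com -site:tweetmeme.com
```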

Granted, this is a brand-new idea, so I want to hear your thoughts on this metric. I think it’s got potential to catch on in the SEO community, but I’m biased, because I’m the one who came up with it. Please leave me a comment; if you think it’s a bad idea and you see flaws, feel free to trash me. I can take it. At the same time, if you like the idea, I’d love the words of encouragement.

 

Thanks for reading! Make sure you follow me on Twitter and grab my RSS feed.

This post was written by...

Jon Cooper – who has written 119 posts on Point Blank SEO.

Jon Cooper is an SEO consultant based out of Gainesville, FL who specializes in link building. For more information on him and Point Blank SEO, visit the about page. Follow him on Twitter.

27 Comments
  1. So how does this explain, for example, article marketing? By the same means, that should also boost your rankings, but it doesn’t. Or am I missing something?

  2. Ian says:

    This works. Period.

    Though – it’s not really any different than just loading up something like Article Marketing Robot and blasting out 200 article copies with your links in them (save the authority from the original post). The value of those sites and the value of scraper sites are essentially the same.

    Also – to the people saying links from scraped sites and spun content links are now dead and “we aren’t in 2007” – you’re hilarious. Please keep telling people this doesn’t work anymore.

  3. kevin gallagher says:

    Article marketing and spun articles aren’t the best way to do SEO though, are they? So you can keep that tactic, thanks.

    • Jon Cooper says:

      I think we can agree to disagree, but I’ve personally seen results from this. What makes this different from article marketing is that the sites scraping it are usually both relevant and contain less duplicate content. For example, an article directory is usually marked as total crap by Google, because all of its content is usually duplicate or spun. But a standalone site that publishes its own unique content and an every-once-in-a-while scraped piece of content usually holds more value (sorry for the unforgivable amount of dashes).

  4. Hi Jon,

    Yeah, that would make sense to me. I wasn’t dissing your theory (although it does warrant more research); I was just trying to understand the relationship between that and article marketing, as you don’t need to be Aaron Wall to work out that article marketing is as dead as disco.

    Thanks for sharing your thoughts on this topic.

  5. Ana Hoffman says:

    Definitely brilliant, Jon.

    Now my brain is churning trying to figure out how to improve the scrape rate for my blog! lol

  6. Lovett says:

    Hi Jon,

    Are you suggesting that links from duplicate content pass value and help rankings? That sounds like SEO back in 2008, when article marketing was thriving. The 1-2 week delay in the rankings may not relate to the links you got from the scrapers at all. This is just speculation, but if that really works, Google sucks!

    • Jon Cooper says:

      Thanks for the comment, Lovett! I’m saying that they pass minimal value individually, but when found in great numbers, they have the ability to pass some value. Why? Because if your content is being scraped by 80 different sites just days after it was published, it’s telling Google that it’s content worth scraping.

  7. Ted says:

    Hey Jon,

    Here is how I look at it. I suppose that anytime you are given the opportunity to publish somewhere that you know your post is going to earn a couple dozen backlinks instantaneously (whether from scraped content or not) then you should definitely take advantage of it. I just question how many opportunities (on different domains) you are going to get to do that.

    Your idea of inventing a tool to measure the metric is creative… I tip my hat to you. I just think that if someone is actually going to take the time to analyze whether or not they are going to submit a guest blog based on that particular metric, then that person is guilty of analysis paralysis.

    Why bother measuring it at all? If you have the opportunity to publish a guest blog post at a decent site that shares your same target audience, why in the hell would you pass it up? You would be crazy to pass it up even if the scrape rate was zero.

    And, if you are guest blogging primarily for links, there are easy ways you can juice up a less juicy post on someone else’s website without crossing any black hat lines.
