
Introducing Scrape Rate – A New Link Metric

by Jon Cooper

Drop what you’re doing and go to Boing Boing. Go to the archives page and select three different posts that were published at least a month ago. For each post, go to Google and type in intitle:"post title" (obviously, replace "post title" with the actual post title, and keep the quotes). Add the number of results for the three searches together, divide by 3, and you have now calculated the scrape rate for Boing Boing.

After doing a quick check, I calculated Boing Boing’s scrape rate as 40.33. Your number will vary based on which three posts you check, but it should give you an overall feel for how often Boing Boing’s content is scraped.
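The calculation itself is just an average. As a minimal sketch (the three result counts below are hypothetical, since Google's result totals change constantly), it looks like this:

```python
def scrape_rate(result_counts):
    """Average the Google result counts for a sample of posts."""
    return sum(result_counts) / len(result_counts)

# Hypothetical intitle:"..." result counts for three month-old posts
counts = [52, 31, 38]
print(round(scrape_rate(counts), 2))  # -> 40.33
```

Nothing fancy; the value of the metric is in the sampling procedure, not the arithmetic.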


Scrape rate is a guest blogger’s best friend. For those who guest blog, the links you get in the post only go so far. Having that content scraped, even though the copies are no longer original content, gives you more link equity.

Imagine if you wrote a guest post on an average blog that was scraped 100 different times. Now compare that link power to a guest post written on a more authoritative blog that only gets scraped once or twice. While the original content in the second option carries more quality & trust, you can’t beat the quantity of links the first option provides. Argue all you want, but in terms of link building, having your content scraped at that scale (as long as the links are intact) trumps the quality of the original source in most cases.

Here’s a good real-life example. Go to OSE and paste in the URL of a recent blog post from the SEOmoz blog. A lot of the posts don’t get an overwhelming number of high-quality links, save a few successful posts, so the majority of the link power is coming from the content being scraped. The result? Most of the posts have a page authority of 60 or greater.

Note: I know a lot of that authority comes from the site it’s hosted on & the internal linking, but the point I’m making is that, all else being equal, scraped links can provide authority when found in great numbers.

When I guest posted on the SEOmoz blog in October, I had a targeted anchor text link back to a page I was trying to rank for a certain keyword. After the post had been live for a few days, I saw no change in the SERPs, but after a week or two, my content was scraped by roughly 30 sites, and I saw an immediate jump in the SERPs. I went from not even in the top 50 for that keyword to the second page.

Why is scrape rate important?

If you’re guest blogging on a regular basis, you need to make sure you do your research. Guest blogging is more than taking an hour of your time to write up a post and throwing it at any blogger that’s willing to publish it; it’s about finding what resonates with the audience, interacting with the readers (i.e. via comments), and getting the most bang for your buck in terms of links. That’s where scrape rate comes in.

I’m not sold on sorting guest blogging prospects solely on domain authority and PageRank. Take it a step further. Go to Google and calculate the scrape rate (if someone creates a tool that does this automatically, let me know; I’ll happily send a few links your way). The best part about the metric is that some previously overlooked blogs that don’t get pitched as much for guest posts might actually be the ones that provide the most link power.

While everyone else in your niche is struggling to put together a post that gets published on blog X, you’re putting together a post for blog Y that you know has its content scraped and, in the end, passes more link juice.

The idea isn’t perfect, because a lot of blogs don’t get scraped at all, but this metric can help you identify, as I said, a few overlooked blogs that others have missed.

I’m not dedicated to the idea of finding 3 average month-old posts, counting the number of times scraped, and dividing by three, but I think it’s fairly accurate. Here’s why:

  • If you calculated it from just one post, an outlier could skew the results. That’s just basic statistics.
  • Using month-old posts means they’ve had enough time to be scraped. That’s the problem with just checking the most recent post on the blog: some sites might syndicate it, but it can take a week or so after it’s published for that to happen.
  • Using the intitle search yields pinpoint accuracy, but only if the title is unique. One problem I’ve run into when doing this is that some results are from Friendfeed, Tweetmeme, and other similar social sites; I count them anyway, because it’s not worth the hassle of filtering them out individually.
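If you want to script the sampling step, building the search query is the only part with a gotcha: the quotes have to survive URL encoding. A small sketch (the post title is a made-up placeholder):

```python
from urllib.parse import quote_plus

def intitle_query(post_title):
    # Keep the quotes so Google matches the exact title
    return f'intitle:"{post_title}"'

def search_url(post_title):
    # quote_plus encodes the quotes and spaces for use in a URL
    return "https://www.google.com/search?q=" + quote_plus(intitle_query(post_title))

print(intitle_query("My Hypothetical Post Title"))
print(search_url("My Hypothetical Post Title"))
```

Reading the result count back out would still be manual (or require a search API), but this at least automates getting the right query in front of you.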

Granted, this is a brand-new idea, so I want to hear your thoughts on this metric. I think it’s got potential to catch on in the SEO community, but I’m biased, because I’m the one who came up with it. Please leave me a comment; if you think it’s a bad idea and you see flaws, feel free to trash me. I can take it. At the same time, if you like the idea, I’d love the words of encouragement.

 

Thanks for reading! Make sure you follow me on Twitter and grab my RSS.

This post was written by...

Jon Cooper – who has written 121 posts on Point Blank SEO.

Jon Cooper+ is an SEO consultant based out of Gainesville, FL who specializes in link building. For more information on him and Point Blank SEO, visit the about page. Follow him on Twitter.
