Google, through Matt Cutts, has said they do not recommend that site owners or webmasters block the duplicate content that exists on their sites. He was referring to blocking that duplicate content via the Disallow directive in robots.txt or via the noindex/nofollow robots meta tag. Instead, Matt said all content should be left crawlable by Googlebot (Google's spider), letting Google's systems decide which version of a page gets indexed, i.e. included in their database and shown on their result pages.
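For context, these are the two blocking mechanisms Matt is talking about. The /print/ path below is just a hypothetical example of a duplicate version of a page, not something from the video:

```
# robots.txt — keeps crawlers out of a (hypothetical) duplicate section
User-agent: *
Disallow: /print/
```

```html
<!-- robots meta tag — tells crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```

Both of these hide the duplicate from Google entirely, which is exactly what Matt says you shouldn't do.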

Here's the video with Matt Cutts's detailed answer on the duplicate content issue:

For me, instead of just letting Googlebot do the decision-making about which page version shows up on SERPs (search engine result pages), I would simply use canonical tags, which can be programmatically generated and embedded in pages. This way, you keep control over which version of the page becomes the landing page you want surfers and searchers to find on the web.
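As a quick sketch, this is what the tag looks like; the URL here is just a hypothetical example. Every duplicate version of the page carries the same tag in its &lt;head&gt;, pointing at the one URL you want indexed, and in practice you'd have your CMS or template code print it automatically:

```html
<!-- canonical tag — all duplicate versions point to this one preferred URL
     (URL is a hypothetical example) -->
<link rel="canonical" href="https://www.example.com/my-landing-page/">
```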

Read my article about the canonical tag for details on how to implement it.

If you are using the latest version of WordPress for your site, it already generates the canonical link tag for you, so duplicate URLs are mostly handled out of the box; at most you may need some minor tweaks or an SEO plug-in.
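To verify, view the source of any single post or page; a default WordPress install should print something like the following inside the &lt;head&gt; (the permalink shown is, of course, hypothetical):

```html
<!-- emitted automatically by WordPress core on single posts and pages -->
<link rel="canonical" href="https://www.example.com/2009/09/sample-post/" />
```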
