Preventing duplicate content issues via robots.txt and .htaccess

by Hamlet Batista | June 07, 2007 | 2 Comments

Rand of SEOmoz posted an interesting article on duplicate content issues. He uses the typical blog to show different examples.

In a blog, every post can appear in the home page, pagination, archives, feeds, etc.

Rand suggests the use of the meta robots tag “noindex”, or the potentially risky use of cloaking, to redirect the robots to the original source.

Joost de Valk recommends that WordPress users change some lines in the source code to address these problems.

There are a few items I would like to add to the problem and to the proposed solution.

As willcritchlow asks, there is also the problem of multiple URLs leading to the same content (e.g. example.com, www.example.com, www.example.com/index.php, and so on — here example.com stands in for your own domain). This can be fixed by using HTTP redirects and by telling Google what our preferred domain is via Webmaster Central.

Reader roadies recalls reading about a robots.txt and .htaccess solution somewhere. That gave me the inspiration to write this post.

After carefully reviewing Google’s official response to the duplicate content issue, it occurred to me that the problem might not be as bad as we think.

What does Google do about it?
During our crawling and when serving search results, we try hard to index and show pages with distinct information. This filtering means, for instance, that if your site has articles in “regular” and “printer” versions and neither set is blocked in robots.txt or via a noindex meta tag, we’ll choose one version to list. In the rare cases in which we perceive that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we’ll also make appropriate adjustments in the indexing and ranking of the sites involved. However, we prefer to focus on filtering — rather than ranking adjustments … so in the vast majority of cases, the worst thing that’ll befall webmasters is to see the “less desired” version of a page shown in our index.

Basically, Google says that unless we are doing something purposely ill-intentioned (like ‘borrowing’ content from other sites), they will only toss out duplicate pages. They explain that their algorithm automatically detects the ‘right’ page and uses that one to return results.

The problem is that we might not want Google to choose the ‘right’ page for us. Maybe they are choosing the printer-friendly page and we want them to choose the page that includes our sponsors’ ads! That is one of the main reasons, in my opinion, to address the duplicate content issue. Another thing is that those tossed out pages will likely end up in the infamous supplemental index. Nobody wants them there :-).

One important addition to Rand’s article is the use of robots.txt to address the issue. One advantage this has over the meta robots “noindex” tag is in the case of RSS feeds: web robots index them and they contain duplicate content, but the meta tag is intended for HTML/XHTML documents, while feeds are XML content.

If you read my post on John Chow’s robots.txt file, you probably noticed that some of the changes he made to his file were precisely to address duplicate content issues.

Now, let me explain how you can address duplicate content via robots.txt.

One of the nice things about Google’s bot is that it supports pattern matching (wildcards) in robots.txt. This is not part of the robots exclusion standard, so other web bots probably don’t support it.

As I am a little bit lazy, I will use Googlebot for the example as it will require less typing.

User-Agent: Googlebot
# Prevent Google’s robot from accessing paginated pages
Disallow: /page/*
# Some blogs use dynamic URLs for pagination (URLs with query strings)
Disallow: /*?*
# Prevent Googlebot from accessing the archived posts
Disallow: /2007/05
Disallow: /2007/06
# It is not a good idea to use a wildcard here, like /2007/*,
# because that would also block the posts themselves,
# e.g. /2007/06/06/advanced-link-cloaking-techniques/
# Prevent Googlebot from accessing the feeds
Disallow: /feed/

To address print-friendly pages duplication, I think the best solution is to use CSS styles.
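To illustrate the idea (this sketch is my own, not from Rand’s article, and the class names are placeholders for whatever your own markup uses): instead of publishing a separate printer-friendly URL, you serve one page and hide the non-essential elements when it is printed.

```css
/* Hide navigation, sidebar and ads when the page is printed,
   so no separate printer-friendly URL is needed.
   .navigation, .sidebar and .sponsor-ads are placeholder class names. */
@media print {
  .navigation,
  .sidebar,
  .sponsor-ads {
    display: none;
  }
}
```

With this approach there is only one URL per article, so there is nothing for Google to filter.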

Now, let’s see how you can address the problem of the same content accessible from multiple URLs, by using .htaccess and permanent redirects. This assumes you use Apache and mod_alias. More complex manipulation can be achieved via mod_rewrite.

You just need to create a .htaccess file in your website’s root folder with this content:

RedirectPermanent /index.php http://www.example.com/

Or alternatively:

Redirect 301 /index.php http://www.example.com/

Or, in the event that you plan to use regular expressions, try this:

# This matches both Index.php and index.php
RedirectMatch 301 /[Ii]ndex\.php$ http://www.example.com/

(In these examples, www.example.com is a placeholder for your own domain.)

Google allows you to tell them your preferred canonical name (www.example.com vs. example.com) via Webmaster Central, so this step is no longer necessary. At least, if your only concern is Google.

To force all access to your site to include www in the URL (i.e. www.example.com instead of example.com), you can use redirection via the .htaccess file.

RewriteEngine On
RewriteBase /
# www.example.com is a placeholder for your own domain
RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
# Redirect everything to the www version of the site
RewriteRule ^(.*) http://www.example.com/$1 [L,R=301]

As I said, these additional lines are probably unnecessary, but it doesn’t hurt to add them.

Update: Reader identity correctly pointed out that secure pages (https) can cause duplicate content problems. I was able to confirm that at least Google is indexing secure pages.

To solve this, I removed the redirection lines from the .htaccess file, and I recommend you use a separate robots.txt for the secure (https) version of your site with these few lines:

User-Agent: Googlebot
# Prevent Google’s robot from accessing any secure pages
Disallow: /
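If the http and https versions of your site share the same document root, you can serve that restrictive file only to secure requests with mod_rewrite. This is a sketch under my own assumptions: it assumes mod_rewrite is available, and the file name robots_ssl.txt is a placeholder for wherever you save the restrictive rules above.

RewriteEngine On
# When the request comes in over HTTPS, serve robots_ssl.txt
# (the restrictive file) instead of the regular robots.txt
RewriteCond %{HTTPS} on
RewriteRule ^robots\.txt$ /robots_ssl.txt [L]

That way search engines see the normal robots.txt on http and the “Disallow: /” version on https.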

Hamlet Batista

Chief Executive Officer
