Controlling Your Robots: Using the X-Robots-Tag HTTP header with Googlebot

We have discussed before how to control Googlebot via robots.txt and robots meta tags. Both methods have limitations. With robots.txt you can block the crawling of any page or directory, but you cannot control the indexing, caching, or snippets. With the robots meta tag you can control indexing, caching, and snippets, but you can only do that for HTML files, as the tag is embedded in the files themselves. You have no granular control over binary and other non-HTML files.

Until now. Google recently introduced another clever solution to this problem: you can now specify robots meta directives via an HTTP header. The new header is X-Robots-Tag, and it behaves like and supports the same directives as the regular robots meta tag: index/noindex, archive/noarchive, snippet/nosnippet, and the new unavailable_after directive. This new technique makes it possible to have granular control over indexing, caching, and other functions for any file on your website, no matter what type of content it has: PDFs, Word documents, Excel files, zip files, etc. This is all possible because we are using an HTTP header instead of a meta tag. For non-technical readers, let me use an analogy to explain this better.
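For example, a response could carry headers like the following. The date here is purely illustrative; Google's announcement shows unavailable_after taking an RFC 850-style date:

```
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2010 15:00:00 PST
```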

A web crawler basically behaves very much like a web browser: it requests pages hosted on web servers using a communications protocol called the Hypertext Transfer Protocol (HTTP). Each HTTP request and response has two parts: 1) the headers and 2) the content (a web page, for example). Think of each request/response like an e-mail, where the headers are the envelope that contains, among other things, the address of the requested page or the status of the request.

Here are a couple of examples of what an HTTP request and response look like. You normally don't see this, but it is a routine conversation your browser has every time you request a page.


GET / HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 … Gecko/20070713 Firefox/
Connection: close


HTTP/1.1 200 OK
Date: Wed, 01 Aug 2007 00:41:47 GMT
X-Robots-Tag: index,archive
X-Powered-By: PHP/5.0.3
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8

There are many standard headers, and the beauty of the HTTP protocol is that you can define your own proprietary headers. You only need to make sure they start with X- to avoid name collisions with future standard names. This is the approach Google takes, and it is a wise one.
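You can inspect these headers yourself. As a sketch, this is the kind of check you would run with curl against a real server; the URL and header values below are simulated placeholders so the pipeline is self-contained:

```shell
# Simulate the response headers that "curl -sI http://example.com/ebook.pdf"
# would print against a configured server, then pick out the X-Robots-Tag line.
printf 'HTTP/1.1 200 OK\nContent-Type: application/pdf\nX-Robots-Tag: index, noarchive\n' \
  | grep -i '^x-robots-tag'
```

Against your own server, you would replace the `printf` with a real `curl -sI` call to confirm the header is actually being sent.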

How can you implement this?

This is the interesting part. You know I love this.

The simplest way to add the header is to have all your pages written in a dynamic language, such as PHP, and include one line of code at the top that sets the X-Robots-Tag header. For example:

<?php header('X-Robots-Tag: index,archive'); ?>

In order to work, that code needs to be at the very top of the dynamic page, before anything is output to the browser; once the response body has started, headers can no longer be sent.

Unfortunately, this strategy does not help us much, as we want to add the headers to non-text files, like PDFs, Word documents, and so on. I think I have a better solution.

Using Apache's mod_headers and mod_setenvif, we can control which files we add the header to as easily as we do with mod_rewrite for controlling redirects. Here is the trick.

SetEnvIf Request_URI "\.pdf$" is_pdf=yes

Header add X-Robots-Tag "index, noarchive" env=is_pdf

The first line sets an environment variable if the requested file is a PDF. SetEnvIf can test other request attributes and headers as well, and we can use any regular expression to match the files to which we want to add the header.

The second line adds the header only if the environment variable is_pdf (you can name the variable anything you want) is set. We can add these rules to our .htaccess file. And voilà: we can now very easily control which files we add the header to.
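As a sketch, the same trick extends to several binary types at once. The extension list and variable name here are illustrative, and the `<IfModule>` guards simply keep Apache from failing if either module is not loaded:

```apache
<IfModule mod_setenvif.c>
  # Flag requests for common binary formats
  SetEnvIf Request_URI "\.(pdf|doc|xls|zip)$" is_binary=yes
</IfModule>
<IfModule mod_headers.c>
  # Let Google index the files but not cache them
  Header add X-Robots-Tag "index, noarchive" env=is_binary
</IfModule>
```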

There are a lot of real-world uses for this technique. Let's say you offer a free PDF e-book on your site, but users have to subscribe to your feed to get it. It is very likely that Google will be able to reach the file, and smart visitors will pull the e-book from the Google cache to avoid subscribing. One way to prevent this is to let Google index the file but not cache it: index, noarchive. This is not possible to control with robots.txt, and we can't implement robots meta tags because the e-book is a PDF file.

This is only one example, but I am sure users out there have plenty of other practical applications for this. Please share some other uses you can think of.

10 replies
  1. Mark
    Mark says:

    I track traffic from my RSS feeds by appending tracking code to the URLs. These pages are currently being indexed by Google, hence I have a duplicate content issue. I want to prevent the URLs in my RSS (the one that have the tracking codes) from appearing in SERPs.

    Do you know whether adding "X-Robots-Tag: noindex,nofollow"
    to my .rss pages will fix this problem? Is there any other method that you'd recommend?

    Regards, Mark

    • Hamlet Batista
      Hamlet Batista says:

      Mark – Technically you don't add the X-Robots-Tag to the pages, but your webserver or some other script will need to add them to the HTTP headers.

      If you only want to block access to your RSS feeds, deny access to them via your robots.txt file.

  2. Noobliminal
    Noobliminal says:

    What if …
    I told you this has another use that can take link-exchange back to stone age? This is your white-hat use, check my dark-side use.
    Fortunately it's not accessible to every1 but still…



Trackbacks & Pingbacks

  1. […] Robots meta tags in non-HTML files – an article on the official Google blog introduces not only the unavailable_after parameter but also the X-Robots-Tag directive, which can be used to control the indexing of, for example, video, image, audio, and PDF files. Sebastian and Hamlet Batista also explore the topic in more depth. […]
