The robots.txt file is used to instruct search engine robots about which pages on your website should be crawled and, consequently, indexed. Most websites have files and folders that are not relevant for search engines (like images or admin files), so creating a robots.txt file can actually improve the indexation of your website.
A robots.txt is a simple text file that can be created with Notepad. If you are using WordPress, a sample robots.txt file would be:
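Based on the two directives explained below, a minimal WordPress robots.txt would look like this:

```
User-agent: *
Disallow: /wp-
```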
“User-agent: *” means that all search bots (from Google, Yahoo, MSN and so on) should use those instructions to crawl your website. Unless your website is complex, you will not need to set different instructions for different spiders.
“Disallow: /wp-” will make sure that search engines do not crawl the WordPress core files. This line excludes all files and folders starting with “wp-” from indexation, avoiding duplicated content and admin files.
If you are not using WordPress, just substitute the Disallow lines with the files or folders on your website that should not be crawled, for instance:
Disallow: /any other folder to be excluded/
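Putting it together, a complete robots.txt for a non-WordPress site might look like the sketch below (the folder names are only illustrative; use the paths that actually exist on your site):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /private/
```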
After you have created the robots.txt file, just upload it to your root directory and you are done!
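Before uploading, you can sanity-check your rules locally with Python's built-in robotparser module (the domain and paths below are placeholders, not part of the original post):

```python
from urllib import robotparser

# Parse the sample rules directly, instead of fetching them from a server.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-",
])

# WordPress core files are blocked...
print(rp.can_fetch("*", "https://example.com/wp-login.php"))  # False
# ...but ordinary pages remain crawlable.
print(rp.can_fetch("*", "https://example.com/my-post/"))      # True
```

If a URL you expect to be indexed comes back as blocked, adjust the Disallow lines before putting the file live.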
Kamal Hasa says
Well the images folder that is being mentioned could be the ones like the icons and miscellaneous images.
Sue Husson says
I’m confused. Can someone just give me the correct text for a WordPress blog? Will uploading it from Notepad work then?
Thanks, just what I needed.
Thanks for the post, very helpful.
Bollywood Actress says
Although this post didn’t discuss why and how a robots.txt file with certain entries is best, you made sure people understand what those codes mean. This is something I have never seen on any other site on my quest to find the best robots.txt file for better SEO. Thank you very much for that.
Thanks for the info.
It’s nice that people are beginning to think about controlling robots with robots.txt. You may want to look at my updated WordPress robots.txt file on AskApache, especially regarding the Digg mirror, Wayback archiver, etc.
I think there is no harm in allowing the bots to access your images folders. It can bring traffic through Google Image Search.
SEO Thailand says
Yes, great post!
But can you give me tips on how to use a sitemap effectively?
John Doe says
So many people with so many ideas! I like the idea in general of blocking robots to avoid duplicate content. But in my opinion duplicate content should be avoided at the URL level. Don’t allow your CMS to generate more than one URL for the same post.
Derek, I see what you are asking now.
Thanks Daniel. I know that the robots.txt file should still go in the root; I just want to know if I have to change the robots file to point inside my new blog installation directory.
If I am not wrong, even if the blog is in a sub-folder, the robots.txt file should still go in the root directory, since it is the first thing a search bot will look for.
I was curious: my blog isn’t installed in the root but instead in a folder on my site.
The particular robots.txt file is an important choice from the SEO point of view. Thanks for the original approach to the problem!
John T. Pratt says
You can’t upload robots.txt to the root directory using WordPress; it doesn’t have an FTP function. The only way it would be possible is if someone associated “robots.txt” as a theme-associated file, so you could edit and save it in the “theme editor” under the “presentation” tab. You wouldn’t think it would be too difficult to associate a file with a theme by modifying a little code, but presently I don’t know how to do it.
How do I upload the robots.txt to the root directory using WordPress?
John, regarding the first question: not all search bots recognize the * attribute. Some people argue that the Google Bot specifically does not interpret the * as a wildcard; it just ignores it. That is why I avoid using it. Plus, it should not be necessary to add it before a folder like */feed/.
John T. Pratt says
Thanks for the post, very helpful. Is there any problem with doing it like this:
Great article, mate. I’m still unsure what needs to be excluded, though. But there is some stuff that needs to be blocked to avoid duplicate content. Cheers
Ajay, /trackback/ will disallow all the trackback pages, and I also think we should disallow comments, but I am not sure if /comment/ is the right attribute for that.
Daniel, that is not the case; the rules of robots.txt are always followed, no matter where the crawl starts from.
One other thing to consider blocking is any duplicate content on your site. WordPress gives you about three thousand ways to access content (/page, /tag, direct links, etc.). Blocking some of them might be a good idea.
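Such duplicate-content rules might be sketched like this (the entries below are illustrative and depend on your permalink structure; check what your own blog actually generates before blocking anything):

```
User-agent: *
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/
Disallow: /tag/
```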
Bes, if I am not wrong, the Google Image Bot will not need to crawl your image folder at all. It will crawl your pages, and it will index all the images on those pages (i.e. posts).
Bes Z says
Thilak, unless I am mistaken, wouldn’t the Google Image Bot be following the rules of the robots.txt file regardless of where it starts crawling from?
I guess disallowing “wp-” will not stop the Google Image bot from crawling your images, because it crawls them from the post and not from the directory.
Dawud Miracle says
Nice post. Great reminder of how you can easily protect folders and files on your server.
I am not sure if the Google Image bot tracks down images from the /images/ folder or directly from the posts where the images were inserted.
Same with WordPress. If you disallow “/wp-” then it’s not going to index any of your uploads, like images, since they are in your wp-content folder. I get quite a bit of traffic from Google Image Search.
Mac Utopia says
I agree that robots.txt files are important; however, I disagree with blocking the images folder. Some sites achieve pretty good traffic numbers from Google Image Search, and blocking this directory will keep your images from showing up in those results.