Create a robots.txt file
The robots.txt file is used to instruct search engine robots about which pages on your website should be crawled and consequently indexed. Most websites have files and folders that are not relevant for search engines (like images or admin files), so creating a robots.txt file can actually improve the indexation of your website.
robots.txt is a plain text file that can be created with Notepad. If you are using WordPress, a sample robots.txt file would be:
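(The original sample file is not reproduced here; based on the two directives explained below, it would contain at least the following lines.)

```
User-agent: *
Disallow: /wp-
```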
“User-agent: *” means that all search bots (from Google, Yahoo, MSN and so on) should use these instructions to crawl your website. Unless your website is complex, you will not need to set different instructions for different spiders.
“Disallow: /wp-” will make sure that search engines do not crawl the WordPress core files. This line excludes all files and folders starting with “wp-” from indexation, avoiding duplicate content and admin files.
If you are not using WordPress, just substitute the Disallow lines with the files or folders on your website that should not be crawled, for instance:
Disallow: /any other folder to be excluded/
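Putting it together, a complete robots.txt for a non-WordPress site might look like this (the folder names below are only placeholders; use your own):

```
User-agent: *
Disallow: /images/
Disallow: /admin/
```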
After you have created the robots.txt file, just upload it to your root directory and you are done!
50 Responses to “Create a robots.txt file”
I’m confused; can someone just give me the correct text for a WordPress blog? Will uploading it from Notepad work then?
Thanks, just what I needed.
Thanks for the post, very helpful.
Although this post didn’t discuss why and how a robots.txt file with certain entries is best, you made sure people understand what those directives mean. This is something I have never seen on any other site in my quest to find the best robots.txt file for better SEO. Thank you very much for that.
I hope you will take the trouble to come up with a robots.txt file and explain why and how it is better, which would really help a lot of people like me. Thanks again.
thanks for info
It’s nice that people are beginning to think about controlling robots with robots.txt. You may want to look at my updated WordPress robots.txt file on AskApache, especially regarding the Digg mirror, Wayback archiver, etc.
I think there is no harm in allowing the bots to access your images folders. It can bring traffic through Google Image Search.
Yes Great Post
But can you give me tips regarding how to use sitemap effectively
So many people with so many ideas! I like the idea in general of blocking robots to avoid duplicate content. But in my opinion duplicate content should be avoided at the URL level: don’t allow your CMS to generate more than one URL for the same post.
I have posted my thoughts on my blog. Have a look.
Derek, I see what you are asking now.
I think both
would work fine.
Thanks Daniel. I know that the robots.txt file should still go in the root; I just want to know if I have to change the robots file to look inside my new blog installation directory.
If I am not wrong, even if the blog is in a sub-folder, the robots.txt file should still go in the root directory, since it is the first thing a search bot will look for.
I was curious: my blog isn’t installed in the root but instead in a folder on my site.
So, for instance, instead of /wp-admin I would have /blogfolder/wp-admin.
Does that change things? Should I set it up differently? Could you give me a hand?
The particular robots.txt file is an important choice from the SEO point of view. Thanks for the original approach to the problem!
I’m going to check some other txt…
John T. Pratt
You can’t upload robots.txt to the root directory using WordPress – it doesn’t have an FTP function. The only way it would be possible is if someone associated robots.txt as a theme-associated file, so you could edit and save it in the theme editor under the “Presentation” tab. You wouldn’t think it would be too difficult to associate a file with a theme by modifying a little code, but presently I don’t know how to do it.
Otherwise, you have to do it with an FTP program.
How do I upload the robots.txt to the root directory using WordPress?
John, regarding the first question: not all search bots recognize the * wildcard in paths. Some people argue that the Google bot specifically does not interpret the * as a wildcard; it just ignores it. That is why I avoid using it. Plus, it should not be necessary to add it before a folder like */feed/.
Secondly, I am also not sure about /wp- working for all folders that begin with that prefix. In fact, in my latest robots.txt file I added all the folders to be excluded individually, like /wp-admin/ and /wp-includes/.
John T. Pratt
Thanks for the post, very helpful. Is there any problem with doing it like this:
Also, when you do:
don’t you need a star, like:
or is it better to just do this:
Also, now with sitemap inclusion, you should consider updating this to have:
in there for those using the Google sitemap plugin.
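(The snippet this comment refers to was not preserved; a robots.txt sitemap reference typically uses the Sitemap directive, with the URL below being only a placeholder.)

```
Sitemap: http://www.example.com/sitemap.xml
```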
Great article mate. I’m still unsure what needs to be excluded though. But there is some stuff that needs to be blocked to avoid duplicate content. Cheers
Ajay, /trackback/ will disallow all the trackback pages, and I also think we should disallow comments, but I am not sure whether /comment/ is the right pattern for that.
Secondly, regarding the indexing of images: it is true that with /wp- the images in your uploads folder will not be indexed, but they do not have to be. I still think that Google indexes images through web pages, regardless of whether it crawls the image folder or not. I will research more about it, though.
Daniel, that is not the case; the rules in robots.txt are always followed, no matter where a page is reached from.
Excluding wp- and wp-content will block indexing of your images.
You will need to allow the folders you want separately.
I suggest using Webmaster Tools to analyze your robots.txt to give you a proper understanding of what is allowed / blocked.
Also, I am not sure if /trackback/ will disallow everything containing /trackback/; can you get this confirmed?
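For those who want a quick local check along the lines of the Webmaster Tools analyzer mentioned above, Python’s standard urllib.robotparser can test which URLs a rule set blocks. The rules and URLs below are only a hypothetical sketch:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rule set, listing the WordPress folders individually.
rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /trackback/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant bot may not fetch anything under a disallowed prefix.
print(parser.can_fetch("*", "http://example.com/wp-admin/options.php"))   # False
print(parser.can_fetch("*", "http://example.com/2007/01/robots-txt/"))    # True

# Under the original standard, Disallow matches path *prefixes* only,
# so "/trackback/" does not block trackback pages nested under posts:
print(parser.can_fetch("*", "http://example.com/my-post/trackback/"))     # True
```

This also speaks to the /trackback/ question: with plain prefix matching, /trackback/ only blocks URLs that start with it, which is why some people add wildcard patterns like */trackback/ for the bots that support them.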
One other thing to consider blocking is any duplicate content on your site. WordPress gives you about three thousand ways to access content (/page, /tag, direct links, etc.). Blocking some of them might be a good idea.
But I’m not an SEO expert and I have no idea what should be blocked.
Bes, if I am not wrong, the Google Image bot will not need to crawl your image folder at all. It will crawl your pages, and it will index all the images on those pages (i.e., posts).
Thilak, unless I am mistaken, wouldn’t the Google Image bot follow the rules of the robots.txt file regardless of where it starts crawling from?
I guess disallowing “wp-” will not stop the Google Image bot from crawling your images, because it finds them through the posts and not through the directory.
Nice post. Great reminder of how you can easily protect folders and files on your server.
I am not sure if the Google Image bot tracks down images from the /images/ folder or directly from the posts where the images were inserted.
In fact, I think the latter is true, because the content of the page, the keywords, and the alt tag play an important role in the image search algorithm.
In that case, disallowing the image folder should not affect your incoming traffic from image searches.
Same with WordPress. If you disallow “/wp-” then it’s not going to index any of your uploads like images since they are in your wp-content folder. I get quite a bit of traffic from Google Image Search.
I agree that robots.txt files are important; however, I disagree with blocking the images folder. Some sites get pretty good traffic numbers from Google Image Search, and blocking this directory will keep your images from showing up in those results.
Well, the images folder being mentioned could be the one holding icons and miscellaneous images.
As far as normal images are concerned, they should and need to be crawled by Google. My blog presently receives more than 30% of its traffic from Google Images.
- Kamal Hasa
Comments are closed.