Clearing up misconceptions about the versatile robots.txt file. First in a two-part series.
It's always nice to find a new use for an old, trusted tool instead of learning a new tool from scratch. The "canonical" element, for example, is a welcome addition to the SEM's tool set, but it will be several months before we start to see how it truly behaves in the wild.
Contrast that with the robots.txt file, an old tool with one of the highest ratios of benefits to learning time. Most sites use it as a machete (and it's effective that way), but armed with a few facts, you'll be able to wield it with scalpel-like precision.
This column refers to the way Yahoo, MSN/Live, and Google react to and process robots.txt directives. I isolate these three engines because almost a year ago, they all agreed to honor an expanded Robots Exclusion Protocol beyond what the original robotstxt.org protocol defined. I make no guarantees about how other engines will react, so please don't assume they'll behave exactly the same as the big three.
Pointing to XML Sitemaps
You already know you can add the location of your XML sitemap to your robots.txt file, so I'll skip the easy stuff. Many people, however, think you can list only one sitemap URL in the file. This is incorrect; you can list as many sitemaps as you have (up to a thousand, at least), including files that point to video and mobile content.
Valid sitemap files can contain no more than 50,000 URLs, so if you have 250,000 URLs on your site, dividing them up into five different sitemap files is a perfectly logical solution, and you can list each sitemap URL in a separate line in your robots file. For example:
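A robots.txt file referencing several sitemaps might look like this (the filenames here are hypothetical):

```
Sitemap: http://www.domain1.com/sitemap1.xml
Sitemap: http://www.domain1.com/sitemap2.xml
Sitemap: http://www.domain1.com/sitemap3.xml
Sitemap: http://www.domain1.com/sitemap4.xml
Sitemap: http://www.domain1.com/sitemap5.xml
```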
During a relaunch, I recommend using two separate sitemap files, one each for old and new URLs. It doesn't hurt anything to list URLs that either no longer exist or now redirect to new locations.
Eventually, they may show up in Google Webmaster Tools crawling reports as errors or under the category of "too many redirects," but that's harmless. After the engines process your redirects and index your new URLs, feel free to remove references to the sitemap files that contain old URLs.
Testing Robots.txt Directives
A recent highlight of Google's growing list of Webmaster tools is "Analyze robots.txt," which resides in the Tools section of Google Webmaster Tools' left navigation. It enables you to experiment with robots.txt content in a safe, sealed environment.
The concept is stunningly simple. In one pane, enter robots.txt directives, as they would appear in a real robots.txt file. (When you first call up the page, the first pane already contains the content of your existing robots.txt file, if your site has one.) In the second pane, insert a test URL and click "Check." The page then tells you whether your test URL is allowed or disallowed, and it tells you which line in your robots code is responsible for that status.
Google's robots.txt testing tool can be used for any site, not just the one for which you're verified. How? Simply replace the domain you want to test with that of the verified site.
For example, if you're verified through Webmaster Tools for www.domain1.com but you want to create a robots.txt file for www.domain2.net, use the test page normally, but in your test URLs, use domain1.com instead of domain2.net. In other words, add your robots directives in the top pane as you normally would (since robots directives are domain-agnostic). If you want to test your robots directives against the URL www.domain2.net/products/reviews/1120.asp, simply enter www.domain1.com/products/reviews/1120.asp into the "Test URLs against this robots.txt" field. If it does what you want, you've written the correct directives for domain2.net's robots.txt file.
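If you'd rather script a rough local check of your draft directives, Python's standard library includes urllib.robotparser. Keep in mind it implements only the original exclusion protocol (first matching rule wins, and the * and $ wildcards aren't supported), so treat this as a sanity check rather than a replica of Google's tool. The domain and paths below are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical draft directives, as they'd appear in a robots.txt file.
draft = [
    "User-agent: *",
    "Disallow: /private/",
]

rfp = RobotFileParser()
rfp.parse(draft)

# can_fetch() reports whether the given user-agent may crawl a URL.
print(rfp.can_fetch("*", "http://www.domain1.com/private/archive.html"))       # False
print(rfp.can_fetch("*", "http://www.domain1.com/products/reviews/1120.asp"))  # True
```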
Disallowing URLs that Look Like Directories
I typically recommend a URL structure that avoids file extensions, suggesting a URL such as www.domain1.com/webmail/ instead of www.domain1.com/webmail.aspx. This gives your site a more visually friendly look in SERPs, and it future-proofs your URLs against redirection if you ever migrate to a different platform.
The drawback of such nomenclature is that when these URLs appear in robots.txt files, engines treat them as directories, not as unique URLs. Consequently, a line like

Disallow: /webmail/

tells engines to exclude not only the /webmail/ URL, but every URL in that directory, such as /webmail/recover-password.asp.
But what if you want engines to index the URL /webmail/ so that employees can search for your Webmail address, but you don't want any other URLs in that directory to be indexed?
Use the $ sign as a "terminator" character. Placing this symbol at the end of a URL tells bots to view that URL only as a URL, not as a directory. Consequently, the following two lines will ensure that /webmail/ isn't excluded, but that every URL within that directory is excluded:
Disallow: /webmail/
Allow: /webmail/$
Technically, the $ sign means "any URL that ends with the preceding characters." So in the first line, you've disallowed /webmail/ as both a URL and as an entire directory. In the second line, you've "added back" /webmail/ as a URL (but not as a directory) by telling engines to allow the indexing of any URL that ends with the characters "/webmail/".
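To make those semantics concrete, here's a minimal sketch of that matching logic in Python. The rule list and function names are illustrative, not any real API; it anchors every pattern at the start of the path, treats a trailing $ as "the path ends here," lets the longest matching pattern win (with Allow winning ties, as Google does), and ignores the * wildcard entirely:

```python
import re

# Illustrative rules mirroring the article's example.
RULES = [
    ("disallow", "/webmail/"),
    ("allow", "/webmail/$"),
]

def _matches(pattern: str, path: str) -> bool:
    # Patterns are anchored at the start of the path; a trailing "$"
    # additionally anchors the end, so "/webmail/$" matches only "/webmail/".
    if pattern.endswith("$"):
        regex = "^" + re.escape(pattern[:-1]) + "$"
    else:
        regex = "^" + re.escape(pattern)
    return re.search(regex, path) is not None

def is_allowed(path: str) -> bool:
    # Collect every rule that matches, then let the longest pattern decide;
    # on a length tie, Allow (True) sorts above Disallow (False).
    matching = [(len(p), kind == "allow") for kind, p in RULES if _matches(p, path)]
    if not matching:
        return True  # no rule applies: crawling is allowed by default
    return max(matching)[1]

print(is_allowed("/webmail/"))                      # True: "Allow: /webmail/$" is longer
print(is_allowed("/webmail/recover-password.asp"))  # False: only the Disallow matches
```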
To Be Continued
There's still a great deal to discuss with the robots.txt file. Next time, we'll discuss the use of wildcards, commands that take precedence over other commands, and important misconceptions about giving different instructions to different bots.
Erik Dafforn is the executive vice president of Intrapromote LLC, an SEO firm headquartered in Cleveland, Ohio. Erik manages SEO campaigns for clients ranging from tiny to enormous and edits Intrapromote's blog, SEO Speedwagon. Prior to joining Intrapromote in 1999, Erik worked as a freelance writer and editor. He also worked in-house as a development editor for Macmillan and IDG Books. Erik has a Bachelor's degree in English from Wabash College. Follow Erik and Intrapromote on Twitter.