A Resource for Coping With Microsoft .NET
A new book helps SEO professionals handle duplicate content, clumsy URLs, and other side effects of ASP.NET Web pages.
With roughly a half billion ASP.NET pages indexed by Google, Microsoft’s .NET framework is one of the most popular development environments in use today. With Microsoft’s marketing machine behind it, the platform is aimed particularly at larger sites and bigger brands.
I’m pretty familiar with many of the SEO problems the .NET framework can create, such as unfriendly URLs, massive content duplication, and session ID hassles (especially when .NET teams up with its sister product, Microsoft Commerce Server).
I wanted to speak more intelligently about the solutions to those problems, though, so I recently picked up Wrox’s “Professional Search Engine Optimization with ASP.NET.” It’s a good resource to help both SEO professionals and SEO-minded Webmasters deal with the issues that face them.
The book isn’t a comprehensive ASP resource, and it’s not a comprehensive SEO resource, either. It doesn’t intend to be. Instead, the book addresses the small but critical union of those two sets. Picture a Venn diagram with .NET knowledge in one set and SEO knowledge in the other.
Following is a brief overview of just some of the book’s major components. I recommend checking out the full table of contents at the link above.
URL Structure
Correctly written (or rewritten) URLs can give a site a big boost. The book devotes chapter 3, as well as parts of other chapters, to generating search-engine-friendly URLs. Topics include URL rewriting with IIS and ISAPI_Rewrite, creating ISAPI filters, and ordering parameters.
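For a flavor of what the rewriting chapters cover, here is a minimal sketch of an ISAPI_Rewrite 2.x-style rule, placed in its httpd.ini configuration file, that maps a keyword-rich URL onto a dynamic ASP.NET page. The paths and parameter names are hypothetical, not taken from the book:

```
# httpd.ini (ISAPI_Rewrite 2.x style) -- hypothetical paths/parameters
[ISAPI_Rewrite]

# Serve the friendly URL /products/26.html from the real dynamic page.
RewriteRule /products/(\d+)\.html /default.aspx?productID=$1 [L]
```

The visitor (and the spider) sees only /products/26.html; the rewrite to the query-string version happens server-side.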
I appreciate how authors Cristian Darie and Jaimie Sirovich inject their technical discussions with SEO philosophy, such as when they explain how smart use of the robots.txt file can quickly quell some duplicate content issues. One gap: they don’t address what becomes of the internal link equity those blocked URLs have accrued. They also point out that although proper URL structure is very important, readers shouldn’t necessarily mess with long-established URLs that already perform well. This type of cross-trained insight fits perfectly with the book’s mission.
Duplicate Content
URL structure and duplicate content are closely aligned; poorly written and maintained URLs frequently result in duplication. The .NET framework often generates URLs that encode both the page’s content and the page the user arrived from, such as /default.aspx?productID=26&frompage=home and /default.aspx?productID=26&frompage=products.
Both URLs (and many more like them) serve identical content, which can significantly dilute internal link equity.
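To make the problem concrete, here is a sketch of the kind of canonicalization such fixes aim for. The helper is hypothetical, not code from the book; "frompage" comes from the example URLs above:

```javascript
// Sketch: collapse navigation-tracking duplicates to one canonical URL.
// "frompage" is the navigation-path parameter from the examples above;
// this helper is hypothetical, not from the book.
function canonicalize(rawUrl) {
  const u = new URL(rawUrl, 'http://example.com'); // base for relative URLs
  u.searchParams.delete('frompage'); // drop the navigation-path parameter
  return u.pathname + u.search;
}

// canonicalize('/default.aspx?productID=26&frompage=home')
// and canonicalize('/default.aspx?productID=26&frompage=products')
// both yield '/default.aspx?productID=26'
```

Every navigation variant then resolves to a single URL, so links and indexing accrue to one page instead of many.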
The book explains how to keep such navigation-based URLs out of the index using methods like the robots.txt file and the robots meta tag, and it offers a good discussion of how, when, and why many query string parameters can be eliminated entirely.
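For concreteness, the two mechanisms look roughly like this; the paths are hypothetical, and the right patterns depend entirely on the site:

```
# robots.txt -- keep spiders out of a duplicate (e.g., printer-friendly) area
User-agent: *
Disallow: /print/

<!-- robots meta tag on an individual duplicate page:
     keep it out of the index, but still follow its links -->
<meta name="robots" content="noindex, follow">
```

robots.txt works at the crawl level for whole path patterns; the meta tag works per page, which makes it handy when duplicates can’t be isolated by path.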
Cloaking, IP Delivery, and Geotargeting
We have a joke around our office: “Cloaking is always bad, except when it’s not.” It pokes fun at the search engines’ grand, sweeping pronouncements about cloaking, and at the cherry-picked instances when those pronouncements apparently don’t apply.
While the book does discuss the ethical debate surrounding cloaking, it thankfully doesn’t dwell on it. Chapter 11 lays out some pros, cons, and industry opinions, then quickly shifts to the technical details and lets readers decide whether cloaking is a smart move for them.
The book covers additional topics, such as search-friendly pop-up windows, forms, and menus, as well as how to write JavaScript that stands a far greater chance of being spidered than typical script code.
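One example of the pattern involved, sketched under my own assumptions rather than taken from the book: a pop-up link that degrades to a plain crawlable URL. The href is a real page spiders can follow; JavaScript-enabled browsers run the handler instead, and its false return cancels normal navigation (the page name is hypothetical):

```javascript
// Sketch of a search-friendly pop-up link (hypothetical page name).
// Markup: <a href="/specs.aspx" onclick="return popup(this.href)">Specs</a>
// Spiders follow the plain href; browsers open the pop-up instead.

function popup(href) {
  window.open(href, 'specs', 'width=500,height=400');
  return false; // cancel the default navigation for JS-enabled visitors
}

// Helper that builds such links; crawlable because href is a real URL,
// not a javascript: pseudo-link.
function popupLink(href, text) {
  return '<a href="' + href +
         '" onclick="return popup(this.href)">' + text + '</a>';
}
```

Contrast this with `<a href="javascript:popup()">`, which gives a spider nothing to follow.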
Conclusion
SEM firms are pretty lucky when they’re not required to offer consultation beyond the diagnostic phase. Identifying SEO obstacles can be difficult, but it’s nice when the client’s IT shop knows how to easily address them.
Increasingly, though, IT departments and personnel are inheriting large site infrastructures that were created either by outside firms or by previous employees. Consequently, they’re good at maintaining the legacy systems but are unsure how to fix the problems we identify. Books like this one may not offer a solution for every problem you encounter, but they go a long way toward satisfying the due diligence you owe your clients.
Join us for SES Search Engine Marketing Training Workshops on May 6, 2008, at Crowne Plaza Denver in Colorado.