Finding Link Targets

For the most part, link building is about taking steps to increase the number of high-quality inbound links to a Web-based document in order to raise its visibility in search engine results for targeted phrases.

In my last few columns, I’ve reviewed some handy link analysis tools that can help organize and analyze the Web of links influencing search engine results for an example query such as [exercise equipment]. Once the linking landscape has been defined, it’s time to begin finding link targets in preparation for building a multi-faceted link-building strategy.

So where do I start when finding link targets? Usually I begin by reviewing the keyword research. Keywords provide the foundation for most successful link-building strategies. I usually sort my keyword research data in several ways. I want to know which keywords and phrases convert for a Web site and seek out opportunities to expand this list by way of link building for targeted terms — especially those terms for which rival sites rank better.

I like to focus on two or three themes at a time because trying to get a variety of text links in place for 500 different terms can quickly become unmanageable. I will usually take into account those terms that are considered “trophy phrases,” too. But if these phrases include keywords that are not part of a Web site’s current vernacular, I’ll move these keywords to the wannabe column and get to work developing content that can be the focus of another wave of link-building initiatives.

One of the best places to look for links is a Web site’s log files. Generally speaking, it’s a lot easier to grow more referrers from sites already sending traffic my way. Next, I’ll review the links of top-ranking competitive sites and find what’s missing from my mix. Eventually, a seed list is formed that is perfectly aligned with my targeted keywords and phrases. The list might include a few directories, local niche associations, blogs and news sites, among other link targets.
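That log-file review can be partially automated. Below is a minimal sketch that counts referring domains from server logs, assuming the common Apache “combined” log format; the sample lines and domain names are made up for illustration:

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample lines in Apache "combined" log format;
# the referrer is the second-to-last quoted field.
LOG_LINES = [
    '1.2.3.4 - - [10/Oct/2024:13:55:36 +0000] "GET /page HTTP/1.1" 200 2326 "http://blog.example.com/post" "Mozilla/5.0"',
    '5.6.7.8 - - [10/Oct/2024:13:56:01 +0000] "GET /page HTTP/1.1" 200 2326 "http://blog.example.com/other" "Mozilla/5.0"',
    '9.9.9.9 - - [10/Oct/2024:13:57:12 +0000] "GET /page HTTP/1.1" 200 2326 "-" "Mozilla/5.0"',
]

def top_referrer_domains(lines):
    """Count referring domains, skipping direct visits ("-")."""
    counts = Counter()
    for line in lines:
        quoted = re.findall(r'"([^"]*)"', line)
        if len(quoted) >= 2 and quoted[-2] != "-":
            domain = urlparse(quoted[-2]).netloc
            if domain:
                counts[domain] += 1
    return counts.most_common()

print(top_referrer_domains(LOG_LINES))
# [('blog.example.com', 2)]
```

Sorting referrers by frequency surfaces the sites already sending traffic, which are usually the easiest candidates for additional links.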

Initially, I like to focus on finding keyword and link targets for those pages that have already landed on page-two or page-three results in the search engines. A few solid links from good linking neighborhoods can make all the difference in the world in search engine results. I generally consider the pages that are closest to page-one to be low-hanging fruit when it comes to link building.

But this is just one method of improving a Web site’s performance in the search engines. Remember, link building is not a set-it-and-forget-it series of tactics. That’s why I need to continue to use link analysis tools to monitor my link-building performance. That way I can understand the return on investments made to develop content, build widgets, or create “link bait” that can help drive traffic and build links on a larger scale.

Of course, not all backlinks are weighted equally. Links from authoritative sites or trusted hubs are worth more than those from link-deprived Web sites. There are good and bad linking neighborhoods. Those that are deemed to be so-called bad neighborhoods are the likes of “free for all link exchange programs” and Web sites involved in search engine spam tactics.

Target text links from .edu and .gov sites because they are inherently more trusted than .com links and generally considered to reside in a good linking neighborhood. It’s also important to target building inbound text links from a variety of sites at different IP addresses — not from “reciprocal link rings” — and to be able to mix up the anchor text a bit based on targeted themes.

Completely symmetrical link structures should be avoided since search engines give a higher value to one-way links to a Web site. That’s something to remember when working with a network of sites. Natural linking patterns are inherently scattered and random; not structured or systematic. Also, good text links need to come from link targets relevant to a Web site’s content, not selected and targeted for PageRank transfer alone.

Learn how to spot the fakes — those sites milking their link juice for all it’s worth. For example, take a look at a link target’s robots.txt file to see if exclusions might be an issue. The same holds true for robots meta tags, which can render links somewhat valueless. Other fake link targets that I try to avoid include those that have:

  • JavaScript or hinky redirects that transfer PageRank to another destination before landing on the targeted destination. I usually act like a bot, turn off my JavaScript, or use header checkers to root out these fakes.
  • Commented links or any hidden links on a page, because these tactics are bad linking signals worth avoiding. It’s easy to spot tiny type hidden in white space on a page by using the simple Ctrl-A (select all) shortcut.
  • iFrame links that search engine spiders cannot crawl efficiently, since content nestled within a frameset does not extend beyond the home page.
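The robots.txt check mentioned above is easy to script with Python’s standard library. This is a minimal sketch using a hypothetical robots.txt body and example URLs; in practice you would fetch the file from the link target itself:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content from a prospective link target.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Disallow: /out/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# If the page hosting your link is excluded, spiders may never see it.
print(rp.can_fetch("*", "https://example.com/resources/links.html"))  # True
print(rp.can_fetch("*", "https://example.com/out/redirect?to=me"))    # False
```

A link placed on a page that spiders are told not to fetch (here, anything under /out/) passes no value, so checking exclusions before pursuing a target saves wasted outreach.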

Just because some Web sites put a rel="nofollow" attribute on text links doesn’t mean that the link target is useless; it could still drive traffic that converts my way. Sure, the spiders have essentially been instructed not to follow the link or pass on the link love. But that doesn’t mean that a shopping comparison engine or authoritative hub like Wikipedia won’t make my list of link targets.
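Spotting which of a page’s outbound links carry nofollow can also be scripted. Here is a minimal sketch with Python’s built-in HTML parser, run against a made-up HTML snippet:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (href, is_nofollow) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            rel = (a.get("rel") or "").lower().split()
            self.links.append((a.get("href"), "nofollow" in rel))

# Hypothetical page fragment with one followed and one nofollowed link.
HTML = ('<p><a href="https://example.com/a">followed</a> '
        '<a href="https://example.com/b" rel="nofollow">not followed</a></p>')

auditor = LinkAuditor()
auditor.feed(HTML)
print(auditor.links)
# [('https://example.com/a', False), ('https://example.com/b', True)]
```

Knowing which links are nofollowed lets you weigh a target for traffic value versus link-equity value before adding it to the list.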

With good neighborhood link targets in place I’m ready to go fetch some links using the appropriate method of acquisition for each link-building target. Then it’s back to the keyword research and site analytics to start all over again on a different set of keyword themes. Rank checkers and internal site metrics make it relatively easy to monitor the performance of a link-building strategy. I hope you enjoy building yours.
