View-based conversions are conversions tracked based on whether a Web surfer has seen (but not necessarily clicked on) a particular ad banner before going to the Web site promoted on that banner. If you're not using them to evaluate your online media campaigns, why not?
If you don't use them, you're not alone. Our ad-serving providers tell me a majority of online advertisers still don't consider view-based conversions. As a result, advertisers dramatically underreport campaign performance.
I've heard the rationales against considering view-based conversions. Some say it doesn't make sense to attribute a conversion to an ad unit when the conversion occurs, say, 72 hours after exposure. Some believe it's unlikely the exposure directly influenced the customer to convert.
Others think view-based conversions are invalid because of "outside noise," the idea that advertising in TV, radio, and/or print influenced the conversion in combination with online. Well... perhaps.
View-based conversions are valid. Check out DoubleClick's study on them, or Atlas DMT's or Advertising.com's. Here's how to determine the percentage of view-based conversions that should be attributed to your campaign (similar to how these studies did):
Establish a control group. Through an exposed/unexposed testing methodology, you can determine the difference between the conversions that would have happened anyway and those influenced by your online campaign. Dedicate a percentage of your inventory to a control group -- people who are shown an ad unit for something other than your brand. These ad units are typically for unrelated categories or nonprofit organizations. The view-based conversions you observe for this group become your baseline. This is the activity influenced by factors such as the brand's level of awareness, other media exposure (TV, radio, print, etc.), brand loyalty, or word of mouth.
Create a test group. This group will be exposed to your brand's online advertising. By monitoring their view-based conversions, you'll likely see a higher number of conversions. That's what you'll want to see, at least, as that will mean your online advertising does influence conversion activity, even in the absence of a click-through.
Arrive at a conversion factor for view-based conversions. Suppose your control group yields 50 view-based conversions. It should yield zero click-based conversions, since the ads aren't for your brand. Your test group then yields five click-based conversions and 60 view-based conversions. One way to arrive at a factor you can apply to future campaigns is to take the difference in the number of view-based conversions between the groups and divide it by your click-based conversions.
In this case, divide 10 (the difference in view-based conversions between your control group and your test group) by 5 (click-based conversions from your test group), and you get 2. In the future, you'll multiply your click-based conversions by two to determine the view-based conversions you can take credit for.
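The arithmetic above can be sketched in a few lines of Python. The numbers are the hypothetical ones from the example (50 control view-based conversions, 60 test view-based conversions, 5 test click-based conversions), not real campaign data:

```python
def view_conversion_factor(control_views, test_views, test_clicks):
    """Multiplier to apply to click-based conversions in future campaigns.

    control_views -- view-based conversions from the control (unexposed) group
    test_views    -- view-based conversions from the test (exposed) group
    test_clicks   -- click-based conversions from the test group
    """
    if test_clicks == 0:
        raise ValueError("need at least one click-based conversion")
    return (test_views - control_views) / test_clicks

# The example from the column: (60 - 50) / 5 = 2.0
factor = view_conversion_factor(control_views=50, test_views=60, test_clicks=5)
print(factor)  # 2.0

# Applying the factor going forward: a campaign with 20 click-based
# conversions would be credited with 20 * 2.0 = 40 view-based conversions.
credited_view_conversions = 20 * factor
print(credited_view_conversions)  # 40.0
```

The division by click-based conversions is just one way to normalize the lift; the key quantity is the difference in view-based conversions between the two groups.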
Control for variables. You must observe the control and test groups within the same period. In addition, you'll want to control impression levels, frequency, and targeting so they're consistent between the two groups.
Test different time windows during which you'll observe the conversion patterns. Determine what window is appropriate for your brand. Try windows of 1 hour, 24 hours, 48 hours, 72 hours, and 30 days. Consider such things as your buying cycle: is this a purchase that takes a little consideration or a lot? By observing how conversion activity is distributed across the different windows, you should see patterns that help you settle on the right window for your brand.
Some people aren't comfortable attributing a conversion that occurs 30 days out to an ad exposure; there are too many other factors that could influence that conversion over that time span. If you aren't comfortable with this long period, take a more conservative approach. Even giving yourself credit for what takes place in the first 24 hours should dramatically improve your campaign performance metrics.
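One way to compare the candidate windows is to bucket each observed view-based conversion by how long after ad exposure it occurred, then see what share of total conversions each window captures. A minimal sketch, using made-up lag times (in practice these would come from your ad server's exposure and conversion logs):

```python
# Hypothetical hours-after-exposure lags for observed view-based conversions.
conversion_lags_hours = [0.5, 3, 10, 20, 30, 45, 60, 70, 200, 500]

# The candidate windows from the column: 1 hour, 24, 48, 72 hours, and 30 days.
windows_hours = {"1h": 1, "24h": 24, "48h": 48, "72h": 72, "30d": 30 * 24}

for label, limit in windows_hours.items():
    captured = sum(1 for lag in conversion_lags_hours if lag <= limit)
    share = captured / len(conversion_lags_hours)
    print(f"{label}: {captured} conversions ({share:.0%} of total)")
```

If most of the lift shows up inside 24 or 48 hours, a short window gives you a defensible number; a long tail out to 30 days suggests a longer consideration cycle.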
Depending on what you're trying to learn and how granularly you're trying to mine the data, you'll need varying sample sizes. Work with your ad-serving partner or an ad network to help ensure you have a statistically sound test.
Get a test going. You'll find your campaigns are performing much better than you've been able to demonstrate in the past. You may also find placements you've eliminated through your previous optimization efforts are actually worth reincorporating into your plans.
If you do run a test, I'd love to hear about your results and the conversion factors you come up with. Me, I can't wait until measuring campaigns in this way is the rule rather than the exception.
Pete is off this week. Today's column ran earlier on ClickZ.
Pete Lerma began his advertising career on the traditional side of the business, where he spent six years managing accounts for clients such as Coca-Cola and Subway. He then realized interactive marketing was where it's at and, in 1998, joined Click Here, The Richards Group's interactive marketing division. During his tenure at Click Here, he's forged relationships with major online publishers, networks, and technology companies, and these relationships contribute to his perspective on the interactive marketing industry. As Click Here's principal, Pete oversees accounts for high-profile brands including Atlantis, Hyundai, Travelocity, and Zales. His group has won numerous awards for its strategic and creative work, including recognition from the IAB, Ad:Tech, The One Club, Graphis, and Communication Arts. Pete serves on the board of directors for the Dallas/Fort Worth Interactive Marketing Association and also contributes to the marketing blog ChaosScenario.