Imagine a sealed box with a cat inside. The box is rigged with a device which has, with equal probability, either killed the cat or let it live. You don’t know whether the cat is alive or dead until you open the box.
According to quantum theory, until you open the box, the cat is neither alive nor dead; it exists in a blurred superposition of the two states, “collapsing” into one only when you measure it. The same can be said for integrated marketing.
Many claim to be implementing integrated media strategies. But in a recent study, 70 percent of senior executives said they’re not using cross-media audience reporting or measurement, and 57 percent said they never or infrequently compare online and offline media performance.
How can a campaign be successfully integrated if you don’t calibrate which channels work best in achieving specific objectives?
It can’t. What the cat in the box teaches us is that until we can measure something, it isn’t happening. Unless we measure the impact of online versus offline advertising in a campaign, and see how both efforts can complement one another, the worlds of offline and online marketing aren’t truly integrated. Neither dead nor alive, just like the cat.
The lack of the comparative data that should form the basis of integrated marketing is one of the main reasons why big advertisers have yet to invest heavily in the web. At conferences, seminars, and in everyday interactions with our clients, I hear the same question: How can we better compare the offline and online performance of our marketing dollars?
Unfortunately, there is no magic bullet research tool that will tell us whether the money we spend on a TV ad, say, would be better spent sponsoring a web site.
The question of effectiveness itself is complicated. Offline spending might be better at boosting brand awareness among a mass audience; online spending might be more effective at pushing certain brand attributes or acquiring customers cheaply. Rather than saying which channel is better, we must use research to understand which channels are more effective at accomplishing specific objectives for specific audiences.
There is a difference, too, between comparing the behavioral and attitudinal impact of marketing efforts.
Behavioral impact means inducing someone to do something, like signing up for a promotion, using a coupon, or buying a product. Comparing the cost of acquiring data or profitable customers between two media can be a fairly simple affair.
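The behavioral comparison described above really can be simple arithmetic: divide media spend by customers acquired and compare the result across channels. A minimal sketch, using hypothetical spend and acquisition figures that are not from any real campaign:

```python
def cost_per_acquisition(spend, acquisitions):
    """Media cost divided by the number of customers (or sign-ups) acquired."""
    return spend / acquisitions

# Illustrative numbers only; real figures would come from ad-server
# reports and sales or promotion-tracking data.
online_cpa = cost_per_acquisition(spend=50_000, acquisitions=2_500)
offline_cpa = cost_per_acquisition(spend=200_000, acquisitions=4_000)

print(f"Online cost per acquisition:  ${online_cpa:.2f}")   # $20.00
print(f"Offline cost per acquisition: ${offline_cpa:.2f}")  # $50.00
```

With numbers like these, the online channel acquires a customer at a fraction of the offline cost, though, as the next paragraphs argue, cost per acquisition says nothing about attitudinal effects like brand awareness.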
But comparing the impact of online and offline advertising on the target’s attitudes, in terms of measures like brand awareness, brand perception, and purchase consideration, can be more difficult.
Online, there are some highly effective ways to measure the attitudinal impact of brand advertising (see my article, Common Sense Branding Research). But comparing those results with offline research is an integration challenge in itself.
Basically, there are three ways to compare the attitudinal and behavioral effects of on- and offline marketing.
If you or your client has deep pockets, you can commission a huge study that measures the use of both Internet and traditional media and compares that data with attitudes and purchases.
We work with one of our clients to track interactive advertising going to a large panel of consumers, all of whom record every purchase they make with a scanner. We watch to see if our advertising is making them buy more stuff.
Unfortunately, most interactive research budgets can’t accommodate such extensive efforts. But many advertisers do invest a lot in offline research.
Often it makes sense to leverage this research by augmenting these studies with questions about Internet usage. This can be an inexpensive way to correlate and compare consumption of different media with behavior and attitudes.
A third option is to craft a research plan which carefully leverages a number of studies both on- and offline to measure and compare the impact of your marketing investment (see Building A Research Mosaic).
This approach requires vigilant attention to research methodology; comparing dissimilar data sets can be misleading. But a well-designed research plan is often the most illuminating and nimble of the three options.
It’s not easy. But until clients are consistently offered the insight that comparative research provides, they won’t make the substantial investments in the Internet that they should. And that’s not good for our industry. You don’t need to be a quantum physicist to know that.