Your Marketing Data Sucks

The Interactive Advertising Bureau – the single most consistent and important source of guidance for our industry – has just released a study into an advertising research method that many (all?) of us have used at some point: website intercepts. Its finding? Not so good.

It turns out that the practice of catching people on the Web and asking their opinions about brands, products, and (most importantly) ads has some serious flaws. In fact, the report calls out the three most prevalent and serious problems:

  • Low response rates: This is clearly an issue that can kill any real research. If you don’t get enough people to talk to, you can’t make any reasonable claims. Any piece of research – a poll, for example – comes with a margin of error. It’s a fairly simple calculation that uses the sample size (the total number of people you actually talked to) and the population size (the total number of people you are talking about). You can run the numbers yourself, or use any of the many online calculators available. The population size is fixed, but the sample size depends on how successful you are at reaching people; if you are unsuccessful, then the margin of error balloons and your study is worthless.
  • Untested research methodologies: This issue is the subject of debate, especially in some forums that focus on advertising data. While there are clearly well-established methods for performing research, that isn’t necessarily a static list. The IAB is being a bit of a stickler for tradition in this case, which is certainly the safe thing to do. But methodology – the way in which you ask questions and analyze responses – is really a matter of understanding and clarity. I like to apply the common-person rule: if the methodology seems reasonable to a reasonable person, then it can be acceptable.
  • Improper weighting: Weighting is a key analytic task that gets further into the art than the science of numbers. Certain responses and respondents are more important than others. Also, some key data points need to be given more representation in an overall sample. Determining which points those are and how much to weight them is essentially taking something we believe we know (say, that women know more than men about household finances) and putting it into the math. But that weighting needs to come from something provable and rock solid. In the eyes of the IAB, the weighting going on isn’t that.
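To make the first point concrete, here is a minimal sketch of the margin-of-error calculation for a survey proportion at roughly 95 percent confidence. The function name and numbers are my own for illustration; the report doesn’t prescribe a formula. The optional finite population correction is what brings the population size into play:

```python
import math

def margin_of_error(sample_size, population_size=None, z=1.96, p=0.5):
    """Margin of error for a proportion at ~95% confidence (z = 1.96).

    Uses the worst-case assumption p = 0.5, and applies a finite
    population correction when the population size is supplied.
    """
    moe = z * math.sqrt(p * (1 - p) / sample_size)
    if population_size is not None:
        # Sampling a large fraction of a small population shrinks the error.
        moe *= math.sqrt((population_size - sample_size) / (population_size - 1))
    return moe

# 400 completed responses: roughly +/- 4.9 points
print(round(margin_of_error(400), 3))
# A weak response rate that yields only 50 completes: roughly +/- 13.9 points
print(round(margin_of_error(50), 3))
```

The comparison is the whole story of the "low response rates" objection: cutting the completed sample from 400 to 50 nearly triples the margin of error, at which point most brand-lift findings are noise.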
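And on the weighting point, the standard defensible approach is post-stratification: weight each group of respondents by how under- or over-represented it is relative to a known population benchmark. This sketch uses invented group names and shares purely for illustration:

```python
def poststratification_weights(sample_shares, population_shares):
    """Weight = population share / sample share, per group."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

# Hypothetical intercept sample that skews 70/30 men/women,
# versus a 50/50 target population (e.g. from census data).
weights = poststratification_weights(
    {"men": 0.7, "women": 0.3},
    {"men": 0.5, "women": 0.5},
)

# Hypothetical favorability by group: the raw sample average is
# 0.7 * 0.40 + 0.3 * 0.60 = 0.46, but weighting restores the
# population-balanced answer of 0.50.
favorability = {"men": 0.40, "women": 0.60}
weighted_avg = sum(0.7 * weights["men"] * favorability["men"],
                   ) + 0.3 * weights["women"] * favorability["women"]
```

The math itself is trivial; the IAB’s complaint is about the inputs. The correction is only as "rock solid" as the population benchmark you weight toward.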

These problems are certainly not unique to online advertising research. In fact, many studies – academic, political, and otherwise – fall prey to these precise issues.

But, with online advertising and marketing, I think we need to ask the single most important question about research: why are we bothering?

Hunter Culture and Farming Culture

A key moment in the development of modern society was when humans began to hunt for their food less frequently, instead growing it in their own backyards. We’re on the cusp of experiencing a similar shift in advertising research. Over time, we should become less reliant on surveys and other data hunting tools and more reliant upon capturing and understanding the data that’s within our own (digital) backyards.

All of the problems related to research methods stem from the fact that we – as researchers – were not able to directly observe the consumer engaging with the product (or the advertisement of that product), nor were we able to understand anything more about that person, solely from their connection with our brand.

But very rapidly, that’s becoming untrue. We are overwhelmed by data today about the consumer. We can directly observe their engagement with our advertising, we can see actions they take after seeing that ad, and we can learn tons about them through social networks. As interactive technology continues to spread throughout our lives thanks to super-powerful mobile tools, that set of data is going to grow and grow.

Why, then, would we invest large amounts of time in hunting for more data? Why would we invest in hunting for more data that potentially has problems?

The simple answer is that we still don’t quite know what to do with all this data, let alone how to do it. Plus, all of this data tends to reside on different servers, in different formats. Blending data sets together today feels a lot like mixing oil and water.

We are, however, on a path that’s leading us to a more comfortable place where we can be much more effective with this data. The platforms are slowly moving toward one another and, especially as they consolidate, we all have hopes they will get better and the data will be more flexible and usable.

I truly hope the IAB is considering this as it goes deeper into the issue. One of the action items it has identified from this research project is the formation of a task force to tackle the problem. But I hope that task force won’t focus solely on making intercepts work better, and will instead look at how to better understand consumers.

Otherwise, I’m afraid we’ll simply have recommendations for improving the buggy whip.
