Measuring and Analyzing Persuasion Scenarios

  |  March 17, 2006

Why, in Web analytics, does no one ask, "Why?"

My last column broke down the elements of a persuasion scenario and highlighted the need for more robust scenario planning, rather than today's overly simple conversion funnel.

Marketers are venturing into new, unfamiliar territory: juggling multichannel strategies, allocating ad dollars among media, and trying to get the most out of new technologies, all at a time when consumers find it easier than ever to bypass and ignore those efforts. And those are just a few of the challenges.

Though marketing choices have increased, ad budget dollars haven't. The answer can't be to A/B test everything and see what sticks. Even with an infinite budget, do you really want to just throw anything up against the wall?

Great marketing has always been tough, though never as complex as it is now. In a world of thousands of marketing allocation and messaging choices, businesses require a methodology for modeling customer behavior if they're to maintain a lead. Persuasion architecture methodology will guide them in planning and prioritizing marketing tactics and leave no customer modality behind. Still, that's not enough.

As traffic costs inflate, so does accountability. Persuasion scenarios, what used to be called "marketing plans," must be accountable. Just as a robust scenario requires willful, thorough execution, accountability requires that we move beyond reporting metrics to actually analyzing data. The goal isn't simply to jockey reports but to determine why behaviors occur.

Get Results, Ask Why

I talked to Jim Novo, author of "Drilling Down: Turning Customer Data into Profit with a Spreadsheet," recently. Here's what he had to say:

In the Taguchi/multivariate world, everyone seems to be asking for that black-box answer, and there is no learning taking place. So continuous improvement will hit the wall. There will only be wildly random testing across hundreds of variables that produces suboptimal improvement.

Instead, when you get a result, you should ask, "Why?" When you answer that question, you ask, "But why?" And you keep asking why, typically five times, until you get to the root "why," which often will deliver the best insight.

That's the difference between analysis and reporting. In Web analytics, it seems as though nobody is asking, "Why?" For example, when I see a color test, say a light blue versus a light green background, produce staggeringly different conversion rates, I always ask myself, "Why?" Assuming the text was normal and readable against both backgrounds, and given a general knowledge of the business, how could color itself have this kind of effect?

I know people have studied color to death and point to its influence, but, really, can it have more influence than copy or offer or source? Really?

Why? It never had this power offline, so I don't see why it should be true online. When Taguchi/multivariate points to color as a controlling variable, people just accept it as reality. Nobody asks why once, let alone five times. That's the difference between reporting and analysis.

From an analytical perspective, I would be skeptical of this result and point to the fact that color may simply be "riding along" accidentally with another, more influential variable in the combinations tested. I would insist on a true A/B control on color to confirm the multivariate output. And you know what? Maybe color would prove to be a controlling variable, but I'd guess with much less influence than the black box said it had. And at least I will have learned something definitive that I can use in the future.

Just as there's no marketing black box, there's no magic machine that spits out optimization answers. Like anything else worthwhile, strong marketing and successful analysis are fueled by sweat. When was the last time you found a product online you were ready to buy, only to look at the "add to cart" button, and say, "No, I don't like the shade of blue of this button. I'm going to find a site with a green button and buy there"?
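The follow-up A/B control Novo describes can be checked with a standard two-proportion z-test before crediting color with the win. The sketch below uses only the Python standard library; the conversion counts are hypothetical numbers chosen for illustration, not data from any test mentioned in this column.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a, conv_b: conversions in each variation
    n_a, n_b: visitors shown each variation
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that color makes no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: blue background vs. green background.
z, p = two_proportion_z_test(conv_a=120, n_a=5000, conv_b=150, n_b=5000)
print(f"z = {z:.2f}, p-value = {p:.3f}")
```

With these made-up numbers the apparent lift isn't significant at the conventional 0.05 level, which is exactly the point: a "staggering" difference in a dashboard may not survive a controlled test.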

Measuring Persuasion Scenarios

Persuasion scenarios, as I outlined last time, are designed to be measurable and accountable to business goals.

I asked John Quarto-vonTivadar, author of the forthcoming book, "Persuasion Architecture," to explain further:

In order to achieve both measurability and accountability, we have to know where we are and plan where we're headed. Too often, the marketing plan is at odds with the sales plan. Marketing makes consumers aware, and then the sales team doesn't follow through. Or the sales team has a specific, organized plan for selling, and no one tells the marketing team.

So when we talk about measurability and accountability, we're really saying the entire organization's efforts are planned persuasively from the start so both marketing and sales functions are met in a way that supports customers' needs. In short, we plan for persuasion and try to remove the obstacles of silo thinking.

Having planned and implemented -- both are explicit, conscious decisions on the part of the company -- we immediately enjoy the benefit of being able to measure our success (or lack thereof) and to hold the appropriate team members responsible when bonuses and pink slips are handed out.

If we plan, then we can effectively measure; and if we measure, we can effectively improve. That's a key statement, so I encourage you to read it again.

What's interesting is that the average marketing executive or metrics analyst reading that statement has understood only the second half of it. He's beaming at how much he's measuring. That's unfortunate because... he's... reasoned, incorrectly, that because he is measuring, he must therefore be planning. Logicians call that "affirming the consequent"....

Let's try an easier one that you can see is false: we start with, "If it rains tomorrow, I should bring an umbrella. If I bring an umbrella, I won't get wet." Now your marketing friend walks in and says, "I brought my umbrella, and I didn't get wet!" Does this mean that it rained? Of course not. Could it have rained? Yes. Could it have stayed dry? Yes. In other words, you haven't got the foggiest idea what the weather was like. That's what all these analysts sound like when they confuse rote analysis with planned intention.
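The umbrella argument can even be checked mechanically. The sketch below, using only the Python standard library, enumerates every possible world and keeps the ones consistent with the premises and the observation; both rainy and dry worlds survive, which is why the observation proves nothing about the weather.

```python
from itertools import product

# Premises: rain -> umbrella, umbrella -> not wet.
# Observation: I brought my umbrella and I didn't get wet.
# Question: in the worlds consistent with all of that, did it rain?
consistent_rain_values = set()
for rain, umbrella, wet in product([True, False], repeat=3):
    premise1 = (not rain) or umbrella         # rain -> umbrella
    premise2 = (not umbrella) or (not wet)    # umbrella -> not wet
    observed = umbrella and not wet
    if premise1 and premise2 and observed:
        consistent_rain_values.add(rain)

# Both rain=True and rain=False are consistent with everything observed,
# so the observation tells us nothing about whether it rained.
print(sorted(consistent_rain_values))  # [False, True]
```

Measuring an improvement and concluding you must have planned for it commits the same fallacy: the conclusion is consistent with the evidence, but so is its opposite.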

I know one or two readers are many steps ahead. They're now questioning the underlying assumptions of their own marketing efforts. Is that you? Did you begin thinking, "Just because I improved, it doesn't mean I planned. So now what? How do I continue to improve?"

Congratulations you two (and your companies). You're the long-term winners. Planning is first and foremost. From it flow effective measurement and continued improvement. That's persuasion architecture's focus.


ABOUT THE AUTHOR

Bryan Eisenberg

Bryan Eisenberg is coauthor of the Wall Street Journal, Amazon, BusinessWeek, and New York Times bestselling books "Call to Action," "Waiting For Your Cat to Bark?," and "Always Be Testing." Bryan is a professional marketing speaker and has keynoted conferences globally such as SES, Shop.org, Direct Marketing Association, MarketingSherpa, Econsultancy, Webcom, SEM Konferansen Norway, the Canadian Marketing Association, and others. In 2010, Bryan was named a winner of the Direct Marketing Educational Foundation's Rising Stars Awards, which recognizes the most talented professionals 40 years of age or younger in the field of direct/interactive marketing. He is also cofounder and chairman emeritus of the Web Analytics Association. Bryan serves as an advisory board member of SES Conference & Expo, the eMetrics Marketing Optimization Summit, and several venture capital backed companies. He works with his coauthor and brother Jeffrey Eisenberg. You can find them at BryanEisenberg.com.


