We're thrilled that Google's getting into the testing game with a new service called Website Optimizer. The service will be gratis to all Google advertisers. Just as Google Analytics had a major effect on how e-tailers viewed analytics, so this service will open the world of testing to a much broader audience. Testing is more action-oriented (and should therefore appeal to even more people) than straight analysis. However, some level of analysis is still required.
This is long overdue. We were fortunate to be an early beta tester of the system and are impressed with several features.
Making a decision to test is simple. But making that decision alone won't deliver better online results.
In over 10 years of optimizing sites for our clients, we've identified over 1,100 factors that contribute to a customer's ability to successfully complete a single conversion funnel.
Multiply that by the number of campaigns, offers, products, keywords, visitor motivations, visitor types, and several other elements, and the number of contributing factors becomes astronomical. When you consider most of these factors as potential variants to test and optimize for, you must conclude that determining what and how to test campaigns for maximum return takes plenty of thought, planning, and effort.
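To see how quickly the numbers get away from you, consider a tiny, hypothetical landing page test. The factor names and counts below are illustrative only (the article's 1,100-plus real factors are vastly larger), but even four small factors multiply into dozens of full-factorial variants:

```python
# Hypothetical factor counts for a single landing page test.
# Real optimization involves far more factors than these four.
factors = {
    "headline": 3,
    "hero_image": 4,
    "call_to_action": 2,
    "offer": 3,
}

# A full-factorial multivariate test must cover every combination,
# so the variant count is the product of the per-factor counts.
combinations = 1
for name, variant_count in factors.items():
    combinations *= variant_count

print(combinations)  # 3 * 4 * 2 * 3 = 72 variants
```

Each added factor multiplies, rather than adds to, the number of variants to test, which is why prioritizing the factors most likely to drive success matters so much.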
Fortunately, not all factors are equal in their ability to drive success. There are many things you can do to stack these factors in your favor.
If you're new to testing, begin with A/B testing rather than multivariate. Although it may feel limiting and take more time, you're likely to get sounder scientific data with which to direct your optimization efforts. It also allows you to gain experience testing with proper methods.
Testing is a science, not an art. Unlike the intuitive creative process, a test must conform to well-established scientific design to be truly effective. As I've stated before, most of today's so-called A/B and multivariate tests are nothing more than marketers throwing landing page, banner ad, and AdWords variations at a wall to see what sticks, wasting valuable time and money with little to no conversion increase to show for it. A key to avoiding that is to have a better handle on "fitness factors." In his A/B testing whitepaper, our CTO, John Quarto-vonTividar, explains:
Let's say we want to determine whether Nolan Ryan is a better baseball player than Homer Simpson. How should we proceed? First, we might set a metric for what we mean by a "better" baseball player. We can measure evidence in concrete ways, noting the two subjects' different batting averages or RBIs or the like. What we're searching for is the right metric -- a formula that would lead us to a correct decision. Such a formula is more precisely termed a "fitness function."
We might decide that considering indirect evidence will lead us to a better decision than comparing pure statistics. In that case, our fitness function may involve such things as the difference in salary paid for services or a comparison of the prices paid for our subjects' autographs on eBay.
In virtually all such measures, Nolan is the better candidate. If you were choosing a player for your team, you'd certainly pick Nolan; you can be confident you've made the correct decision.
But let's think on that a moment: the reason you feel confidence in signing Nolan stems from your familiarity with the metric and fitness function that are implicitly applied when we speak of baseball. Your decision might be quite different if we want to pick an effective donut quality assurance taster. Suddenly, Homer Simpson is back in the running.
Even then, your confidence may be based on your understanding that "tastes better" is the donut metric and that Homer Simpson is an acknowledged expert in donut consumption. But what is the fitness function? That is, what does it mean to "taste better"? Are you relying solely on Homer's reputation as an expert? But his expertise is based on consumption quantity so perhaps you suspect he enjoys all donuts equally and actually has little, uh, "taste" at all. In other words, it's quite possible you don't have any knowledge at all of what we might call the "donut tastiness" fitness function.
Interestingly, marketers and business owners are asked -- every day -- to make more important decisions with less information and an undetermined fitness function.
More formally, A/B testing first requires that a metric be identified (that is, "what will be contrasted?"). Second, a fitness function describing that metric is agreed upon ("how will we measure and contrast the differences?"). Third comes an optimization step, in which the system is tweaked based on a comparison of exactly two tested solutions that differ in only one respect of how they meet the fitness function.
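Those three parts -- metric, fitness function, optimization -- can be made concrete in a few lines of code. This is a minimal sketch, not Website Optimizer's actual method: it assumes conversion rate as the metric and a standard two-proportion z-test (at a 95% confidence threshold) as the fitness function, with made-up traffic numbers:

```python
import math

def conversion_rate(conversions, visitors):
    """The metric: what we contrast between variants A and B."""
    return conversions / visitors

def z_score(conv_a, n_a, conv_b, n_b):
    """The fitness function: a two-proportion z-test quantifying how
    confidently B's conversion rate differs from A's."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def pick_winner(conv_a, n_a, conv_b, n_b, z_critical=1.96):
    """The optimization step: keep the challenger only when the fitness
    function clears the significance threshold (95% confidence here)."""
    z = z_score(conv_a, n_a, conv_b, n_b)
    if z > z_critical:
        return "B"
    if z < -z_critical:
        return "A"
    return "no significant difference yet"

# Hypothetical numbers: A converts 200 of 10,000 visitors; B converts 260.
print(pick_winner(200, 10_000, 260, 10_000))  # prints "B"
```

Note that the decision rule only works because the two variants differ in a single respect; change two things at once and the fitness function can no longer tell you which change mattered.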
Here are a few steps, then, that will keep you from merely throwing stuff at the wall.
Define the Conversion Goals
Identify the fitness factors that will determine a successful campaign.
Know Your Customers
Do the Creative
Create the pages, PPC ads, e-mails, or ads (online and off-) that drive your prospects to your landing pages. Always have someone other than your usual creative resource produce a portion of the variations.
Test and Optimize
Thankfully, the software costs of A/B and multivariate testing are about to disappear. The only remaining costs are the creative variations, and those are driven by how much thinking you invest in the first two steps. More time thinking usually means less time coming up with meaningless variations.
Too often, testing is done by producing the creative first while skimming (or skipping) step two, leaving goals and fitness factors poorly identified. Don't make the same mistake.
Share with us some of your testing challenges.
Bryan Eisenberg is coauthor of the Wall Street Journal, Amazon, BusinessWeek, and New York Times bestselling books "Call to Action," "Waiting For Your Cat to Bark?," and "Always Be Testing." Bryan is a professional marketing speaker and has keynoted conferences globally such as SES, Shop.org, Direct Marketing Association, MarketingSherpa, Econsultancy, Webcom, SEM Konferansen Norway, the Canadian Marketing Association, and others. In 2010, Bryan was named a winner of the Direct Marketing Educational Foundation's Rising Stars Awards, which recognizes the most talented professionals 40 years of age or younger in the field of direct/interactive marketing. He is also cofounder and chairman emeritus of the Web Analytics Association. Bryan serves as an advisory board member of SES Conference & Expo, the eMetrics Marketing Optimization Summit, and several venture capital backed companies. He works with his coauthor and brother Jeffrey Eisenberg. You can find them at BryanEisenberg.com.