Instead of trying to test endless variations of minutiae, look for the big ideas that impact customer experience and buying process.
Is yours the typical company conducting two to five tests a month, struggling to eke out more from your optimization program, and wasting critical resources of people and website traffic? That's the result of not focusing on the big idea!
How much should you be testing?
A mid-size company can easily handle 30 to 50 tests a month. The reason most companies never get there is that they waste so many cycles on what I call "slice and dice" optimization. Let's consider the following test, which I found "in the wild" - and which I find ironic because it is for a service offering a marketplace of landing page designers.
Depending on how you want to define your variables, these two pages have around a dozen changes. I hope this is not what you want from your landing page designers. You don't need someone to create endless variations of every variable.
Can you identify all of the variables being proposed for testing? (I'll share with you my list next time.)
For now, let's assume that for each of these variables you test just two variations, even if more may be warranted, and I'll show you the problem with that approach. (Note: I'll use Google's Website Optimizer "Test Duration Calculator" to estimate the numbers, but you could easily do this by hand, with a calculator, or with a simple spreadsheet.)
I don't know the true stats for this page, but they don't really matter in order to illustrate the challenge.
Let's assume the following:
That means it takes more than 108 days - over three months! - to complete this test. That's a whole heck of a lot of visitors and a whole heck of a lot of time consumed to get one test completed.
The way we teach testing, there are probably three variables worth testing on this page (variables that communicate something to a visitor). Let's assume the same two variations for each, though to be frank, for one of those variables I would want at least three or four variations if we were running this test for an actual client. For the sake of simplicity, let's keep everything the same.
So now we would have:
This test would be over in just under 18 days, a scant two and a half weeks.
Which way seems more efficient?
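The article's actual calculator inputs aren't shown here, so the figures below are hypothetical, but the underlying arithmetic is easy to sketch: with a full-factorial test, traffic is split across every combination, so the days required grow linearly with the number of combinations. This rough Python sketch uses the common two-proportion sample-size approximation (roughly 80% power at 5% significance) to compare a dozen two-variation variables against three:

```python
def days_to_complete(daily_visitors, baseline_cr, min_lift,
                     num_combinations, traffic_fraction=1.0):
    """Rough days to finish a full-factorial test.

    Approximates the visitors needed per combination as
    n ~= 16 * p * (1 - p) / (p * lift)^2, a standard shortcut for
    ~80% power at 5% significance, then divides each combination's
    daily traffic share into that sample. All inputs are
    hypothetical, not the stats from the page in the article.
    """
    delta = baseline_cr * min_lift  # absolute lift to detect
    n_per_combo = 16 * baseline_cr * (1 - baseline_cr) / delta ** 2
    visitors_per_combo_per_day = (daily_visitors * traffic_fraction
                                  / num_combinations)
    return n_per_combo / visitors_per_combo_per_day

# "Slice and dice": a dozen variables, two variations each -> 2**12 combos
slice_and_dice = days_to_complete(5000, 0.05, 0.20, 2 ** 12)
# Focused test: three meaningful variables, two variations -> 2**3 combos
focused = days_to_complete(5000, 0.05, 0.20, 2 ** 3)
print(round(slice_and_dice), "days vs.", round(focused), "days")
```

Because duration scales linearly with combinations, cutting a dozen variables down to three shrinks the test by a factor of 2^12 / 2^3 = 512 under these assumptions - the same directional lesson as the 108-day vs. 18-day comparison above, whatever the real inputs were.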
Should you test variables that really seem to matter to visitors, or virtually random variations of elements in the hope that something gives you a little lift? The random approach may yield some gains - that's why the practice is so common - but you'll burn out waiting for the results. This is why so many optimization efforts fizzle out over time.
Next time I will share the variables I found and discuss the three elements I would test on this current page.
To give you a leg up on identifying the variables on your own, I'll give you a question as a framework. When was the last time you looked at a page and said to yourself, "The layout is horizontal and not vertical, so gosh darn it, I can't buy from this page"? Vertical vs. horizontal layout could matter as a display-of-information issue if you are trying to change a lot of what is above and below the fold, but that really wasn't the case in this example. Testing it is just a waste of time and effort unless you have no real idea what will move the needle for customers - and in that case, any test is better than nothing. Maybe.
Instead of trying to test endless variations of minutiae, we teach companies to look for the big ideas that impact customer experience and buying process. The smaller variations we can always come back to after the big ideas establish directionality. Can you find the big ideas being tested (or that should be tested) in this example?
Bryan Eisenberg is co-founder and chief marketing officer (CMO) of IdealSpot. He is co-author of the Wall Street Journal, Amazon, BusinessWeek, and New York Times best-selling books Call to Action, Waiting For Your Cat to Bark?, Always Be Testing, and Buyer Legends. Bryan is a keynote speaker and has keynoted conferences globally, including Gultaggen, Shop.org, Direct Marketing Association, MarketingSherpa, Econsultancy, Webcom, and the Canadian Marketing Association, for the past 10 years. Bryan was named a winner of the Marketing Edge's Rising Stars Awards, recognized by eConsultancy members as one of the top 10 User Experience Gurus, selected as one of the inaugural iMedia Top 25 Marketers, and has been recognized as most influential in PPC, Social Selling, and OmniChannel Retail. Bryan serves as an advisory board member of several venture capital-backed companies such as Sightly, UserTesting, Monetate, ChatID, Nomi, and BazaarVoice. He works with his co-author and brother Jeffrey Eisenberg. You can find them at BryanEisenberg.com.