
5 Strategies for Improved A/B Testing

August 9, 2011

Don't miss these opportunities to improve revenue or alter user behavior.

In a metrics-driven culture, product developers and marketers often become too reliant on A/B testing. The test pipeline gets clogged with initiatives that can't move the needle, unnecessary validations of well-known design and product principles, and isolation (ISO) tests of features that could never launch without the other features they are bundled with. In essence, companies over-test, test incorrectly, and fail to look for opportunities to streamline the testing process.

During my 10+ years working at some of the world's top search companies, I conducted many A/B tests, both on the user interface (UI) and on back-end features. Given the usage and revenue riding on a single search page, we were careful to test almost everything. We recognized that many small, counter-intuitive changes could significantly alter user behavior or revenue. One surprising example was changing the bullets in front of the sponsored links from square to round. This minor change significantly impacted click-through rate on the search ads, leading to a multi-million dollar change in search revenue. Over the years, I've learned several strategies that make the test process faster, more streamlined, and better optimized to increase revenue.

1. Be aggressive with the first test to gauge the level of potential lift.

With the first test, ensure users will see the change by using an aggressive version or a prominent call-out of the feature. This quickly determines whether the change can significantly move the metric you are aiming to impact. If an extremely large or highlighted version does not move the needle, a tamer implementation more than likely won't either. For instance, strip a lead form nearly bare rather than testing whether removing one form field makes a difference.
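
The payoff from an aggressive first test can be made concrete with a standard two-proportion sample-size formula. The sketch below is illustrative only; the baseline rate, lifts, alpha, and power are hypothetical numbers, not figures from this article:

```python
from statistics import NormalDist

def sample_size_per_arm(p_base, lift, alpha=0.05, power=0.8):
    """Approximate users needed per bucket to detect an absolute
    lift over a baseline conversion rate (two-sided z-test).
    Illustrative sketch; all rates and lifts are hypothetical."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    p_variant = p_base + lift
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return int((z_alpha + z_beta) ** 2 * variance / lift ** 2) + 1

# A bold 2-point lift reaches significance on a small fraction of
# the traffic that a subtle 0.2-point tweak would need:
print(sample_size_per_arm(0.05, 0.02))   # modest sample per arm
print(sample_size_per_arm(0.05, 0.002))  # far larger sample per arm
```

Because required sample size scales with the inverse square of the lift, the aggressive variant answers the "is there anything here?" question in a fraction of the test time.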

2. Don't use A/B testing in place of design best practices or product sense.

Those in charge of product development can become too reliant on A/B tests: they let their product sensibility take a back seat and allow test results to make every decision for them. The result is product people testing every possible variation and clogging up the test pipeline. One example arose when we were integrating image results on the search page. A product manager wanted to test four or five variations of the same feature, using up many of our test slots for that test cycle. Among the proposed variations were three large images versus five smaller images versus four medium images. General design principles tell us that people like to view items in groupings of three or five, and product intuition tells us that people prefer larger images to smaller ones.

3. Test your competitors' user interface changes to understand their impact, especially if they have a metrics or big A/B testing culture.

Your competitors are testing new features and UI changes all the time. From both a competitive intelligence and a product understanding perspective, it is a best practice to test the differences between your site's UI and the competition's. This will uncover metrics changes that may be caused by these different implementations. Of course, this assumes your competitors are competent and are using a reliable A/B testing framework.

4. Use isolation testing on individual features but also try some big jumps and combinations.

It is a best practice to run isolation (ISO) tests, that is, to test individual features in isolation. But too often, when three features are bundled, each is ISO tested first and only then combo tested. If the features are meant to ship together, try the combo first, or run the ISO and combo tests at the same time. This saves a lot of time and test slots.
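
The slot-versus-time trade-off can be made explicit with back-of-the-envelope arithmetic. The feature count and cycle assumptions below are hypothetical, not figures from this article:

```python
features = 3  # hypothetical: three features meant to ship as one bundle

# ISO tests first, combo test afterwards: two test cycles of calendar time.
sequential_cycles = 2
sequential_peak_slots = features           # 3 ISO buckets in cycle one

# ISO and combo buckets launched together: one cycle, one extra slot.
parallel_cycles = 1
parallel_peak_slots = features + 1         # 3 ISO buckets + 1 combo bucket

cycles_saved = sequential_cycles - parallel_cycles
print(cycles_saved)  # a full test cycle saved by combining
```

The parallel approach spends one extra slot up front to buy back an entire test cycle of calendar time.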

5. Test identical buckets against each other to gauge significance and see if your systems are working correctly.

Oftentimes, as product developers, we wonder whether our bucket results are moving the product in the right direction, and whether the signal is strong enough to validate that a change is positive. How do we know if something is wrong with the test system itself? It's a best practice to occasionally run identical buckets against each other and examine the variance between them: significant variance between identical buckets points to sample-size problems or other issues with the testing system.
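
This technique is commonly called an A/A test. A minimal simulation (the bucket size, conversion rate, and trial count below are hypothetical) shows what a healthy test system should report: identical buckets "win" at roughly the nominal false-positive rate, about 5 percent at alpha = 0.05. A rate far above that suggests something is wrong with bucketing or measurement:

```python
import random
from statistics import NormalDist

def aa_test_pvalue(n, p, rng):
    """One simulated A/A run: two buckets drawn from the SAME
    conversion rate p, compared with a two-proportion z-test."""
    a = sum(rng.random() < p for _ in range(n))  # conversions, bucket A
    b = sum(rng.random() < p for _ in range(n))  # conversions, bucket B
    pooled = (a + b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return 1.0
    z = abs(a - b) / n / se
    return 2 * (1 - NormalDist().cdf(z))

rng = random.Random(42)
trials = 400
false_positives = sum(aa_test_pvalue(2_000, 0.05, rng) < 0.05
                      for _ in range(trials))
print(false_positives / trials)  # should hover near 0.05
```

If the observed rate drifts well above alpha, suspect the bucketing logic or logging pipeline before trusting any real A/B result from the same system.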

While A/B testing and data analysis should be a fundamental part of the product development process for Internet products and services, we all need to step back and think about how to speed up product development and reduce the bottlenecks in launching new products and feature sets. The number of A/B tests that can run concurrently on a website is constrained by:

  • The number of statistically significant tests that can be run with a particular site's audience size.
  • Data collection and analysis work involved.
  • Potential disruption in user experience.
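
The first bullet, audience size, is usually the binding constraint, and simple arithmetic makes it concrete. All numbers below are hypothetical, not figures from this article:

```python
# Hypothetical capacity figures for a single site.
weekly_visitors = 1_000_000   # unique users available per test cycle
users_per_arm = 40_000        # users each bucket needs for significance
buckets_per_test = 2          # control + one variant

max_concurrent_tests = weekly_visitors // (users_per_arm * buckets_per_test)
print(max_concurrent_tests)   # 12 non-overlapping test slots per cycle
```

With only a dozen slots per cycle under these assumptions, every slot spent validating a well-known design principle is a slot not spent on a genuine revenue hypothesis.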

It's essential that these limited test slots are used efficiently and that these common pitfalls are avoided.


ABOUT THE AUTHOR

Tim Mayer

Tim started working for Trada in January 2011 as chief strategy officer, leading product management, engineering, marketing, and sales. Tim left Yahoo in August 2010 after leading a cross-company initiative that grew search query volume by 210 basis points in six months, following 13 consecutive months of query decline at Yahoo Search. This was accomplished by distributing search across the Yahoo network as well as on partner sites.

Tim previously led North American Audience at Yahoo, responsible for the programming of the Yahoo.com home page as well as leadership of the listings properties (Yahoo! Shopping, Travel, Personals, Local, Real Estate, Autos, Health, and Tech).

Prior to the Audience role, Tim was responsible for the Yahoo Search business, including monetizing the search page and leading initiatives to drive additional search traffic from on-network and off-network sources. Tim also led several of the Yahoo Commerce Properties.

Prior to this business role, Tim led the product direction of Yahoo! Search from a technology and consumer experience perspective. Tim also had the additional responsibility of leading the Social Search products for a year, including Delicious and Answers.

Tim brings over 15 years of search experience to Trada with previous roles at Yahoo, Overture, FAST Search & Transfer, Inktomi, and Thomson Corp. Mayer received an MBA from Babson College, Wellesley, MA, and a BA in English literature from Hobart College, Geneva, NY.
