Don't miss these opportunities to improve revenue or alter user behavior.
In a metrics-driven culture, product developers and marketers often become too reliant on A/B testing. This clogs the test pipeline with initiatives that don't move the needle, unnecessary validations of well-known design and product principles, and isolation (ISO) tests of features that couldn't be launched without the other features they're bundled with. In essence, companies over-test, test incorrectly, and don't look for opportunities to streamline the testing process.
During my 10+ years working at some of the world's top search companies, I conducted many A/B tests on both the user interface (UI) and back-end features. Given the usage and revenue riding on a single search page, we were careful to test almost everything. We recognized that many small, counter-intuitive changes could significantly alter user behavior or revenue. One surprising example was changing the bullets in front of the sponsored links from square to round. This minor change significantly impacted the click-through rate on the search ads, leading to a multi-million dollar change in search revenue. Over the years, I've learned several strategies that make the testing process speedier, more streamlined, and better optimized to increase revenue.
1. Be aggressive with the first test to gauge the level of potential lift.
With the first test, ensure users will see the change by using an aggressive version or call-out of the feature. This quickly determines whether the change can significantly move the metric you are aiming to impact. If an extremely large or highlighted version does not move the needle, a tamer implementation more than likely won't either. For instance, strip a lead form nearly bare rather than testing whether a single form field makes a difference.
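There's a statistical reason to start aggressive: the sample size a test needs shrinks roughly with the square of the effect size, so a bold variant that produces a large lift resolves far faster than a timid one. As a rough sketch (the baseline rates and lifts below are illustrative, not from the article):

```python
from math import ceil

def sample_size_per_arm(base_rate, min_detectable_lift,
                        z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per bucket for a two-proportion test
    at 95% confidence and 80% power (normal approximation)."""
    p = base_rate + min_detectable_lift / 2   # rate near the pooled average
    variance = p * (1 - p)
    return ceil(2 * (z_alpha + z_beta) ** 2 * variance
                / min_detectable_lift ** 2)

# Detecting a timid 0.5-point lift on a 5% click-through rate...
timid = sample_size_per_arm(0.05, 0.005)
# ...versus an aggressive variant expected to move it 2 full points:
bold = sample_size_per_arm(0.05, 0.02)
print(timid, bold)  # the bold test needs over 10x fewer users per arm
```

A 4x larger effect cuts the required traffic by more than an order of magnitude, which is exactly why the first, aggressive test frees up test slots sooner.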
2. Don't use A/B testing in place of design best practices or product sense.
Those in charge of product development can become too reliant on A/B tests: they let their product sensibility take a back seat and allow test results to make every decision for them. The result is product people testing every possible variation and clogging up the test pipeline. For example, when we were integrating image results into the search page, a product manager wanted to test four or five variations of the same change, using up many of our test slots for that cycle. Among the proposed variations were three large images versus five smaller images versus four medium images. General design principles tell us that people like to view items in groupings of three or five, and product intuition tells us that people prefer larger images to smaller ones.
3. Test your competitors' user interface changes to understand their impact, especially if they have a strong metrics and A/B testing culture.
Your competitors are testing new features and UI changes all the time. From both a competitive intelligence and a product understanding perspective, it is a best practice to test these changes and the differences between your site's UI and the competition's. This will uncover metrics changes that may be caused by those different implementations. Of course, this assumes your competitors are competent and are using a reliable A/B testing framework.
4. Use isolation testing on individual features but also try some big jumps and combinations.
It is a best practice to try isolation (ISO) testing; that is, testing individual features in isolation. But oftentimes when, say, three features are meant to be bundled, teams first ISO-test each feature and then run a combo test. If the features are supposed to launch together, try the combo first, or run the ISO tests and the combo test at the same time. This saves a lot of time and test slots.
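One way to run the ISO and combo tests at the same time is to split traffic across all the arms in a single slot: control, each feature alone, and the bundle together. A minimal sketch of deterministic bucket assignment, assuming hash-based splitting (the arm names and user IDs here are hypothetical):

```python
import hashlib

# Control, each feature in isolation, and the bundle, all in one test slot.
ARMS = ["control", "feature_a_only", "feature_b_only", "a_plus_b_combo"]

def assign_bucket(user_id: str) -> str:
    """Deterministically map a user to one of the four arms, so each
    user always sees the same experience across visits."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

print(assign_bucket("user-42"))  # the same user always lands in the same arm
```

The same structure generalizes: comparing the combo arm against control answers the launch question directly, while the ISO arms explain which feature drove the change, all without burning extra test cycles.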
5. Test identical buckets against each other to gauge significance and see if your systems are working correctly.
Oftentimes, as product developers, we wonder whether our bucket results are moving the product in the right direction, or whether the signal is strong enough to validate that a product change is positive. How do we know if something's wrong with our test system? It's a best practice to occasionally run identical buckets against each other and examine the variance between them. This reveals issues in the testing system itself, or shows how much variance to expect between identical buckets due to sample size or other factors.
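This kind of identical-bucket (A/A) check has a known expected outcome: at a 5 percent significance level, roughly 5 percent of A/A comparisons should come up "significant" purely by chance. A rate far above that suggests a broken bucketing or logging system. A simulation sketch, with illustrative bucket sizes and conversion rates (not figures from the article):

```python
import random
from math import sqrt

def aa_false_positive_rate(n_users=2_000, true_rate=0.05,
                           trials=500, seed=7):
    """Repeatedly run A/A tests, where both buckets draw from the same
    conversion rate, and report how often a two-sided z-test at the
    5% level flags a 'significant' difference. A healthy system should
    come in near 5%."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(trials):
        a = sum(rng.random() < true_rate for _ in range(n_users))
        b = sum(rng.random() < true_rate for _ in range(n_users))
        p_pool = (a + b) / (2 * n_users)
        se = sqrt(2 * p_pool * (1 - p_pool) / n_users)
        z = abs(a - b) / n_users / se if se else 0.0
        if z > 1.96:
            false_positives += 1
    return false_positives / trials

print(aa_false_positive_rate())  # should land in the neighborhood of 0.05
```

Running the same check against your live experiment logs, rather than simulated draws, is what surfaces real problems such as skewed bucket assignment or double-counted events.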
While A/B testing and data analysis should be a fundamental part of your product development process for Internet products and services, we all need to step back and think about how we can speed up product development and reduce the bottlenecks in launching new products and feature sets. The number of A/B tests that can run concurrently on a website is limited, so it's essential that these scarce test slots are used in the most efficient manner and that these common pitfalls are avoided.
Tim started working for Trada in January 2011 as chief strategy officer, leading product management, engineering, marketing, and sales. Tim left Yahoo in August 2010 after leading a cross-company initiative that grew search query volume by 210 basis points in six months, following 13 consecutive months of query decline for Yahoo Search. This was accomplished by distributing search across the Yahoo network as well as on partner sites.
Tim previously led North American Audience at Yahoo, responsible for the programming of the Yahoo.com home page as well as leadership of the listings properties (Yahoo! Shopping, Travel, Personals, Local, Real Estate, Autos, Health and Tech).
Prior to the Audience role, Tim led the Yahoo Search business, including monetizing the search page and driving additional search traffic from on-network and off-network sources. Tim also led several of the Yahoo commerce properties.
Before that business role, Tim led the product direction of Yahoo! Search from a technology and consumer experience perspective, with additional responsibility for the Social Search products, including Delicious and Answers, for a year.
Tim brings over 15 years of search experience to Trada, with previous roles at Yahoo, Overture, FAST Search & Transfer, Inktomi, and Thomson Corp. Mayer received an MBA from Babson College, Wellesley, MA, and a BA in English literature from Hobart College, Geneva, NY.