Managing Large Testing Teams
Observing companies like Dell and eBay shows us how to effectively prioritize and manage tests.
The phrase “Testing, testing, one, two, three” is usually reserved for the lowest levels of theatrical production. But with research, shopping, and customer service migrating online, testing has become a major part of running an interactive customer communication program. And customer communication is decidedly interactive.
A/B splits, multivariate testing, landing page optimization, persuasion architecture, and the like continue to improve the online experience. That requires people. People require management.
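To make the A/B split idea concrete, here is a minimal sketch of how a team might decide whether variant B beat variant A, using a standard two-proportion z-test. The function name and all figures are hypothetical illustrations, not anything Dell or eBay described.

```python
import math

def ab_split_z(conv_a, visits_a, conv_b, visits_b):
    """Two-proportion z-statistic for an A/B split.

    Positive z means variant B converted better than A;
    |z| > 1.96 is the usual 95% significance bar.
    """
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

# Hypothetical split: A converts 200 of 4,000 visits, B converts 230 of 4,000
z = ab_split_z(200, 4000, 230, 4000)
```

With these made-up numbers the lift looks real (5.75% vs. 5.0%) but the z-statistic falls short of 1.96, which is exactly why a team needs both the metric and the discipline to wait for significance.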
At several recent events including a Web Analytics Association Symposium in Austin, Dell shared its management structure and processes. With 25 people on the testing team, not to mention the content managers, designers, and user interface professionals who work with them, Dell proactively brought in a process management specialist.
What’s a process management specialist? She spent weeks observing how the testing was done at Dell and mapped out procedures for when things should happen, what the approval cycle should look like, and what sort of reporting was important for which managers.
eBay has more than 100 tests running on its site at any given time and has a team of six who manage the sequencing of those tests to avoid conflicts.
With all that experimentation going on, how do you prioritize tests?
At the WAA Symposium, Emily Campbell, Dell’s executive director of global online, explained the prioritization hierarchy. It starts with strategic priorities. This is where the vision and mission of the company are crucial. If a test will further the organization’s stated purpose and goals, it’s the right thing to do.
Next come the business initiatives. Emily’s team looks to the goals of individual business units for guidance on whether a given test is merely fun, simply interesting, or actually constructive.
The final filter she shared is that testing at the bottom of the funnel is the easiest and has the biggest immediate impact. The bottom of the funnel is narrow. The tasks are limited. The results are instantly ascertained. The top of the funnel is vague, and the results of top-of-funnel tests may take months to determine.
Dell operates under the assumption that clearly identifying testing success metrics is imperative. Otherwise, you won’t know if you’re collecting the right data and you won’t know when you’re done. The company uses a variety of metrics including:
Traffic
Time on page
Sales
Revenue per visit
Revenue per visitor
Margin
Customer satisfaction
If you don’t know what success looks like, you’ll never see it coming.
There are a few other testing and optimization prioritizing principles to keep in mind as you test your way to success. First, determine if there is money on the table. If this test doesn’t impact the bottom line, find one that does. Second, see where the request for the test is coming from. Should something as definitive and exacting as online testing succumb to the petty petitions of well-placed people? Oh, yes.
As Bob Page, vice president of the data and analytics platform at eBay, likes to point out, “All metrics are political.”
If only testing were as easy as one, two, three.