You can measure the real success of user-generated content with A/B testing and before-and-after comparisons.
You read user-generated content, and so do your customers. They trust it more than anything a marketer can say, and it's this trust that leads them to a confident buying decision. So when you put user-generated content on your site, how do you measure the impact? How can we tell if this content is really helping the business?
Here are two ways we recommend organizations measure real success.
Do Before-and-After Comparisons
It's possible to compare key metrics - such as average order value, sales conversion, and traffic to the product page, among others - before and after user-generated content is added to the site. The key is to look at the same (or very similar) products at specific points in time over a broad period, and to avoid periods affected by external factors (such as promotions).
Here's an example of a before-and-after test of reviews on a classic kitchen mixer that sees predictable sales throughout the year. Key metrics would be captured for a full year before reviews were launched. Once reviews were launched, the metrics would be compared quarterly, with a full comparative analysis done at specific points in time one year after launch.
As a best practice, compare metrics using "percent of change" rather than a simple difference, so you can compare the performance of different products to one another and make additional observations. For example, using percent of change, you may see a correlation between the number of reviews and overall conversion. Every marketer should know the percent change formula, but for the sake of convenience, here it is: % of change = [(new value - old value)/old value]*100.
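The percent-of-change formula above can be captured in a few lines of code. This is a minimal sketch; the example values (a conversion rate rising from 2.0 to 2.5 percent after reviews launched) are hypothetical:

```python
def percent_change(old_value, new_value):
    """% of change = [(new value - old value) / old value] * 100.
    Positive = lift, negative = decline."""
    return (new_value - old_value) / old_value * 100

# Hypothetical example: conversion rate rose from 2.0% to 2.5%
# after reviews launched.
print(percent_change(2.0, 2.5))  # 25.0
```

Because the result is a relative measure, a 25 percent lift on a low-traffic mixer can be compared directly with a 25 percent lift on a best-seller, which a raw difference would hide.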
To check the accuracy of this method, compare the mixer's results over the same period with a similar mixer that doesn't have reviews, then with other products in the category, with products in similar categories, and with the site overall. With a few comparison points - assuming you see a large lift for the mixer - you can confidently support the results.
There's always the question of external factors in a before-and-after analysis, but if the results are substantial, this method can work well on a product-by-product basis on established products that have sold consistently over time. It's important to take seasonality and other factors into account, too. For example, recent nationwide economic factors played a role in many sales downturns.
Run an A/B Test
While an A/B test can take time and effort to plan, it's one of the most accurate ways to measure success, when done correctly. A/B testing takes two otherwise identical groups and makes one change to one group. That group, version "A," becomes the test group, and version "B," the group without the change, becomes the control group. You then measure the performance of these two groups, and determine what impact the variable made on the results.
For example, if you wanted to test the success of an e-mail campaign, you could do so with A/B testing. You would first split your recipient list into two groups. The control group (Group A) receives a standard e-mail, and the test group (Group B) receives the same e-mail, but with the addition of review content. You then measure key performance metrics, which, in this case, might include response rate, click-through rate, and conversion. We have seen lifts in revenue per e-mail of as much as 50 percent with this method. We've also seen online retailers run A/B split tests on their websites and show lifts from 10 percent to 50 percent.
Since the only difference between Group A and B is the inclusion of user-generated content, any difference in the performance metrics can be attributed to the presence of that content.
When used correctly, A/B testing is powerful, but done incorrectly, it can produce misleading results. To run a successful A/B test, take time to plan ahead: choose a sample size and time frame large enough to gather clear, realistic results, and test only one variable at a time, so you can confirm that any difference is driven by that variable.
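The "enough data" point above is worth making concrete. A common way to check whether an observed lift is more than noise is a two-proportion z-test on the conversion counts. This is a simplified sketch with hypothetical numbers, not a substitute for planning your sample size up front:

```python
import math

def ab_lift_and_z(conv_a, n_a, conv_b, n_b):
    """Compare control (A) vs. test (B) conversion rates.

    Returns (percent lift of B over A, z statistic from a
    two-proportion z-test). Counts here are hypothetical.
    """
    rate_a = conv_a / n_a
    rate_b = conv_b / n_b
    lift = (rate_b - rate_a) / rate_a * 100
    # Pooled standard error under the null hypothesis of no difference
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    return lift, z

# Hypothetical: 200 of 10,000 control visitors convert vs.
# 250 of 10,000 who saw review content.
lift, z = ab_lift_and_z(200, 10_000, 250, 10_000)
print(f"lift: {lift:.1f}%, z: {z:.2f}")
```

A |z| above roughly 1.96 corresponds to significance at the 95 percent level; if your test ends below that, the honest conclusion is "not enough data yet," not "no effect."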
Other Key Points to Remember
Look for at least a three-month test window to help make sure your results are sound. While some specific campaigns may take less time to gauge results, three months is usually a good rule of thumb.
It's also important to make sure you have the right Web analytics tags and data capture methods in place. For a holistic view of how user-generated content is working for you, tag all interactions your site visitors have with this content - including online and offline encounters. And align your tagging methods with your business' overall success metrics.
Final parting thought: If a data point doesn't matter to the bottom line (or impress the CFO), think about why you're really measuring it.
Sam Decker is founder and CEO of Mass Relevance, the leading enterprise social curation company. He speaks and consults on digital growth strategy, based on years of experience in technology and social markets. He has written two books on word-of-mouth marketing and is an award-winning blogger (www.deckermarketing.com). As former chief marketing officer of Bazaarvoice, the market leader in hosted social commerce applications that drive sales, Sam worked to help brands present the right user-generated content at the right time in the purchase path, bringing real value to the consumer and the business. Prior to Bazaarvoice he drove Dell's customer segmentation, their customer-centricity strategy, and led Dell's consumer website, building Dell.com into the largest consumer e-commerce site at $3.5 billion in annual sales.