In my last column, I looked at some of the characteristics of data collected from surveys - particularly surveys run on websites where you have no control over who is answering. Generally, this lack of control can cause some bias in the data, which can lead to issues if you're looking at the aggregated reports.
For example, the data on the profile of visitors (e.g., gender, age) that you collected from survey data may not actually reflect the true profile of visitors to your site, because of the different propensities of different groups to respond to surveys. So, does that mean that survey data is useless? Not really, but it does mean that it needs to be handled with a bit of caution.
One way to reduce potential biases in the data is to trend the results over time. Survey data is most useful when you have it running continuously, because you have a constant monitor of the health of the site and you can refer to the findings to assess the effects of marketing and product development activity. Having a continuous dataset also helps to reduce some of the bias.
Say, for example, that your survey shows that the age profile of visitors to your website is 40 percent under 35 and 60 percent over 35. We know that younger people are generally less responsive to surveys than older people, so we might suspect that the data is biased toward older people. If, however, six months later the data shows that the profile has shifted to 60 percent under 35 and 40 percent over 35, then we can be reasonably confident that the profile has genuinely become younger over time. We could also check whether the change is statistically significant.
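As a sketch of what that significance check might look like, here is a standard two-proportion z-test using only the Python standard library. The sample sizes are hypothetical (the column doesn't state them); I've assumed 1,000 respondents in each survey wave for illustration.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: is the difference between two observed
    proportions larger than sampling noise alone would explain?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical sample sizes: 1,000 respondents in each wave.
# The under-35 share moved from 40% to 60% between waves.
z = two_proportion_z(0.40, 1000, 0.60, 1000)
print(round(z, 2))  # -> 8.94, far beyond the 1.96 cutoff for 95% confidence
```

With samples this large, a 20-point swing is far too big to be sampling noise; with only a few dozen respondents per wave, the same swing could easily fail the test.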
Another way of reducing bias is to segment your data. Actually, I'd say that you absolutely have to segment your data to make it useful and to understand it properly.
So, while I may not be confident that the profile data properly represents reality, I can still use it to look for differences in key metrics, such as customer satisfaction or the Net Promoter Score. I can compare satisfaction scores between the younger and older age groups to see whether there are any significant differences. Because there often are, I should always be looking at these key metrics among key segments of the site's visitors: changes in the site's visitor profile can have a significant impact on changes in these metrics. Let me give you an example.
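A minimal sketch of that per-segment comparison, with made-up responses and segment labels purely for illustration:

```python
from collections import defaultdict

# Hypothetical survey responses: (age segment, satisfaction score out of 10).
responses = [
    ("under_35", 8), ("under_35", 9), ("under_35", 7),
    ("over_35", 6), ("over_35", 7), ("over_35", 5),
]

# Group the scores by segment rather than pooling everything together.
by_segment = defaultdict(list)
for segment, score in responses:
    by_segment[segment].append(score)

means = {seg: sum(scores) / len(scores) for seg, scores in by_segment.items()}
print(means)  # per-segment averages instead of one blended number
```

Reporting the two segment averages side by side, instead of a single blended score, is what lets you spot a real difference between groups before the overall number hides it.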
As I mentioned last time, you can see differences in metrics like satisfaction score or Net Promoter Score among different segments, depending on their familiarity with the site or the brand. Often, people who are visiting your website for the first time will have lower scores for satisfaction and Net Promoter Score than those who have visited before.
Let's assume that you've been running some campaigns either online or offline and have driven a significant amount of new traffic to the site. The survey you're running on the site will probably reflect the increase in new visitors, and as a result, it's possible that the overall satisfaction score will go down. This isn't because people are less satisfied with the site experience overall, but because a greater proportion of the people answering the survey are first-time visitors, who generally tend to give lower scores.
Nothing may have actually changed in the site experience itself - the only change has been in the mix of visitors to the site. In fact, the satisfaction among first-time visitors may have stayed the same, and the satisfaction among repeat visitors might also have stayed the same, but overall satisfaction can appear to have gone down.
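The arithmetic behind this mix-shift effect is easy to see with a worked example. The segment scores and mix percentages below are hypothetical, chosen only to illustrate the point:

```python
# Hypothetical, illustrative numbers: each segment's satisfaction is
# unchanged; only the visitor mix shifts after a campaign.
SAT_FIRST_TIME = 6.0  # average score among first-time visitors
SAT_REPEAT = 8.0      # average score among repeat visitors

def overall_satisfaction(first_time_share):
    """The blended score is just a mix-weighted average of the segments."""
    return (first_time_share * SAT_FIRST_TIME
            + (1 - first_time_share) * SAT_REPEAT)

before = overall_satisfaction(0.20)  # 20% first-timers pre-campaign
after = overall_satisfaction(0.50)   # 50% first-timers post-campaign
print(before, after)  # overall drops from about 7.6 to 7.0,
                      # yet neither segment's score moved at all
```

The overall score falls by 0.6 points even though both segment scores are identical in both periods; only the weighting changed. This is exactly why the blended number alone can mislead.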
On the surface, online survey-based data might seem unreliable and rife with issues. However, if you understand the source of these issues and interpret the data wisely, you can get real value from this rich source of customer insight. And remember: segment, segment, segment!
Neil Mason is SVP, Customer Engagement at iJento. He is responsible for providing iJento clients with the most valuable customer insights and business benefits from iJento's digital and multichannel customer intelligence solutions.
Neil has been at the forefront of marketing analytics for over 25 years. Prior to joining iJento, Neil was Consultancy Director at Foviance, the UK's leading user experience and analytics consultancy, heading up the user experience design, research, and digital analytics practices. For the last 12 years Neil has worked predominantly in digital channels both as a marketer and as a consultant, combining a strong blend of commercial and technical understanding in the application of consumer insight to help major brands improve digital marketing performance. During this time he also served for two years as a Director of the Web Analytics Association (now the Digital Analytics Association, or DAA) and currently serves as a Director Emeritus of the DAA. Neil is also a frequent speaker at conferences and events.
Neil's expertise ranges from advanced analytical techniques such as segmentation, predictive analytics, and modelling through to quantitative and qualitative customer research. Neil has a BA in Engineering from Cambridge University and an MBA and a postgraduate diploma in business and economic forecasting.
March 19, 2014