Analytics Basics: Understanding Survey Data

June 21, 2010

When running voice-of-customer surveys, marketers should be aware of these potential biases in survey results.

Having looked at some fundamentals around Web analytics metrics in recent weeks, this time let's turn our attention to survey-based data and metrics. Over the past couple of years, there's been a growth in the number of organizations running on-site survey-based voice-of-the-customer programs. These might range from simple do-it-yourself approaches using free or low-cost survey software, to packaged tools like 4Q, to more sophisticated programs like ForeSee Results or iPerceptions. As with all things, you pay your money and you make your choices.

This growth in the use of survey data and other customer insight tools is great to see. I always say that Web analytics can tell you what happened and when, but rarely who or why. That's where tools like surveys come in. They give you a different perspective on what is, or isn't, happening. But as with your Web analytics data, it's important to understand the fundamentals of where the data comes from and what that means for how you use and interpret it.

Most online customer insight is captured on the site itself, either through a site-intercept survey or a page-level feedback mechanism. Site-intercept surveys usually offer an invitation to a sample of the site's visitors, and the survey is completed at the end of the visit. You generally have no control over which visitors are offered the invitation to take part, and no control over who actually completes the survey. This means that the survey data will generally have a bias in it, but you won't necessarily know what that bias is unless you have another source of data to compare it with.

If you find from your survey that 40 percent of respondents are male and 60 percent are female, that doesn't mean that 60 percent of your site's visitors are female. It means that 60 percent of the people who answered your survey claimed to be female. The true proportion of women visiting your site might actually be nearer 50 percent, but women may have had a higher propensity to answer the survey than men did, making it look as though there were more women than men. Unless you have another source of data, such as an audience measurement panel, it's going to be difficult to know whether the profile of people in your survey is representative of your website's visitors or not.
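
To see how the arithmetic plays out, here's a minimal sketch (the response rates are assumptions for illustration, not real figures): if men and women visit in equal numbers but women complete the survey 1.5 times as often, the respondent pool comes out 60 percent female.

```python
# Hypothetical illustration of how differential response propensity
# skews observed survey proportions. All numbers are assumptions.

def observed_share(true_share, rate_group, rate_other):
    """Share of survey respondents from a group, given its true share of
    visitors and each group's propensity to complete the survey."""
    responders_group = true_share * rate_group
    responders_other = (1 - true_share) * rate_other
    return responders_group / (responders_group + responders_other)

# True visitor split: 50% female. Women complete the survey at 6%,
# men at 4% -- a 1.5x difference in propensity.
print(observed_share(0.50, 0.06, 0.04))  # -> 0.6: the survey reads 60% female
```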

Typically, there are some general biases that you might see in site-intercept surveys. One that we often see is that people who already know you are more likely to respond to your survey than people who don't. This can manifest itself in many ways: they're more likely to have transacted or interacted with you, more likely to be customers, and more likely to be frequent users of your website. Quite often you might ask a question about how often the respondent has visited the website in the past. If you compare the survey data to your Web analytics data (and there are issues with both sets of data), you'll find that repeat visitors make up a greater proportion of your survey respondents than your Web analytics data records, as the sketch below illustrates.

There are usually demographic biases as well. Men are generally less likely to answer surveys (though there are exceptions depending on the subject matter), and the younger generations are generally harder to get feedback from than the older generations. So, your respondent sample often underrepresents young males. If this is a core audience for you to understand, then this is something you need to be aware of.
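
As a quick illustration of that repeat-visitor effect, here's a hypothetical comparison; the figures are made up for the example, not drawn from any real program.

```python
# Hypothetical comparison of the repeat-visitor share reported by survey
# respondents against the share recorded in Web analytics. Made-up figures.

survey_repeat_share = 0.72     # 72% of respondents say they've visited before
analytics_repeat_share = 0.45  # 45% of visits come from returning visitors

bias_ratio = survey_repeat_share / analytics_repeat_share
print(f"Repeat visitors are over-represented {bias_ratio:.1f}x in the survey")
# -> Repeat visitors are over-represented 1.6x in the survey
```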

So, with all these potential issues, does that mean survey data is rubbish, as people often claim, particularly when they don't like the results? No, it doesn't. But it does mean that you need to treat it carefully, to be aware of the potential biases that may exist, and to understand the impact of those biases on the metrics you are reporting. In general, taking aggregated survey results at a point in time at face value can be a bit dangerous. You need to find approaches to overcome biases and to understand their impact on the key metrics we use survey data to report, such as customer satisfaction and Net Promoter Scores. That's what I'll be looking at next time. Till then...
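
One common correction, to give a flavor of what's to come, is to weight respondents so their mix matches a known visitor profile, for example one from an audience panel. The sketch below is a minimal illustration; the profile figures and scores are assumptions, not a recommendation for your data.

```python
# A minimal sketch of weighting survey respondents back to a known
# visitor profile (e.g. from an audience panel). All figures are
# assumptions for illustration.

target_share = {"male": 0.50, "female": 0.50}      # known visitor profile
respondent_share = {"male": 0.40, "female": 0.60}  # observed in the survey

# Each respondent is weighted by target share / observed share.
weights = {g: target_share[g] / respondent_share[g] for g in target_share}

# Satisfaction scores (1-10) from a handful of hypothetical respondents.
responses = [("male", 6), ("female", 9), ("female", 8), ("male", 5), ("female", 9)]

raw_mean = sum(score for _, score in responses) / len(responses)
weighted_mean = (sum(weights[g] * score for g, score in responses)
                 / sum(weights[g] for g, _ in responses))

print(f"Raw satisfaction: {raw_mean:.2f}, weighted: {weighted_mean:.2f}")
# -> Raw satisfaction: 7.40, weighted: 7.08
```

In this made-up example, weighting back to the 50/50 profile pulls the average score down, because the over-represented group happened to give higher scores.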

ABOUT THE AUTHOR

Neil Mason

Neil Mason is SVP, Customer Engagement at iJento. He is responsible for providing iJento clients with the most valuable customer insights and business benefits from iJento's digital and multichannel customer intelligence solutions.

Neil has been at the forefront of marketing analytics for over 25 years. Prior to joining iJento, Neil was Consultancy Director at Foviance, the UK's leading user experience and analytics consultancy, heading up the user experience design, research, and digital analytics practices. For the last 12 years Neil has worked predominantly in digital channels, both as a marketer and as a consultant, combining a strong blend of commercial and technical understanding in the application of consumer insight to help major brands improve digital marketing performance. During this time he also served for two years as a Director of the Web Analytics Association (now the Digital Analytics Association, or DAA) and currently serves as a Director Emeritus of the DAA. Neil is also a frequent speaker at conferences and events.

Neil's expertise ranges from advanced analytical techniques such as segmentation, predictive analytics, and modelling through to quantitative and qualitative customer research. Neil has a BA in Engineering from Cambridge University and an MBA and a postgraduate diploma in business and economic forecasting.
