Are you measuring just activity levels or real outcomes? A look at some differences between the two.
People who run non-transactional websites have a tough job evaluating the success of, or return on investment in, their sites. In contrast, if you sell stuff online or have a clearly defined transactional environment, then it's relatively easy to assess whether things are working or not. You can count the number of transactions, or you can use metrics like the conversion ratio to work out how effective the site is at turning opportunity into value. But what about all the cases, like information sites or support sites, where there aren't any clearly defined transactions? What happens then?
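For a transactional site, the conversion ratio mentioned above is simple arithmetic: transactions divided by opportunities (usually visits). A minimal sketch, with illustrative figures that are not from this article:

```python
def conversion_rate(transactions, visits):
    """Share of visits that resulted in a transaction."""
    if visits == 0:
        return 0.0
    return transactions / visits

# Illustrative figures: 150 orders from 10,000 visits.
rate = conversion_rate(150, 10_000)
print(f"Conversion rate: {rate:.1%}")  # 1.5%
```

The non-transactional problem discussed below is precisely that there is no obvious numerator for this calculation.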
A useful approach to measuring the effectiveness of non-transactional websites is taken from models built to evaluate the success of government policies and other intangibles. It differentiates the measurement of performance between measuring "outputs" and measuring "outcomes."
Outputs are activities that happen on the site. These are probably the closest things you might get to "conversion points" in a non-transactional environment: the tangible pieces of activity that are believed to be valuable. Take the example of a public information website whose purpose is to support a campaign to raise awareness of a certain type of disease and encourage people to get themselves screened or tested. The measurable outputs of that website are the on-site actions believed to support that goal.
Virtually all of these outputs could be measured using a properly configured Web analytics system. What the Web analytics system can't tell you is what the visit ultimately led to, and whether the website succeeded in its objective of getting potentially vulnerable people to get themselves tested. This is why it's also important to measure outcomes.
Outcomes are what happens as a result of the visit to the website. Non-transactional websites are often trying to influence subsequent attitudes and behaviors, and these are difficult to measure at the best of times and impossible to measure using on-site behavioral data from Web analytics systems.
In my example of the public information website above, the desired outcome is that potentially vulnerable people read the relevant information on the website and then make an informed decision to put themselves forward for screening. The approach to measuring this could be two-fold.
The methodology in both approaches is likely to revolve around some kind of survey or other user feedback mechanisms. In the first instance, you could run an exit survey and ask a series of questions about whether the user thought that the information was useful and also what they were likely to do next as a result of their visit. One measure of success could be the proportion of people who said that they were more likely to go to see a doctor to discuss their condition. One problem, though, might be that some people would end up going to a doctor unnecessarily!
The second approach could be to try to understand what influence the website had on a person's decision to go for screening, at the time they are actually being screened. This is possibly more subjective, but it can also take into account other influences that may have happened along the way, and give some insight into the context in which those types of decisions are made. The true test would be to understand what number or proportion of people who went for screening would have done so had they not been able to access the relevant information over the Web.
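The "true test" described above can be framed as an incrementality calculation: of the screened people who visited the website, what share would not have come without it? A minimal sketch, using hypothetical point-of-screening survey responses (the question wording and figures are assumptions for illustration):

```python
# Hypothetical point-of-screening survey: each person is asked whether
# they visited the website and whether they would have come for
# screening without it.
responses = [
    {"visited_site": True,  "would_have_come_anyway": False},
    {"visited_site": True,  "would_have_come_anyway": True},
    {"visited_site": True,  "would_have_come_anyway": False},
    {"visited_site": False, "would_have_come_anyway": True},
]

site_visitors = [r for r in responses if r["visited_site"]]
influenced = [r for r in site_visitors if not r["would_have_come_anyway"]]

# Share of screened site visitors who would not have come without the site.
incremental_share = len(influenced) / len(site_visitors)
print(f"Incremental share: {incremental_share:.0%}")
```

In practice this relies on self-reported counterfactuals, which is exactly why the article calls the approach subjective.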
At a time when many organizations are trying to shift information provision or customer contact from other channels onto the Web, it's important to differentiate between measuring mere activity levels and measuring real outcomes. Measuring outcomes is hard to do, and often the outcomes will manifest themselves in places other than the website, such as in the call center. It therefore requires a more holistic approach to measurement than a Web analytics system alone can provide.
Neil Mason is SVP, Customer Engagement at iJento. He is responsible for providing iJento clients with the most valuable customer insights and business benefits from iJento's digital and multichannel customer intelligence solutions.
Neil has been at the forefront of marketing analytics for over 25 years. Prior to joining iJento, Neil was Consultancy Director at Foviance, the UK's leading user experience and analytics consultancy, heading up the user experience design, research, and digital analytics practices. For the last 12 years Neil has worked predominantly in digital channels, both as a marketer and as a consultant, combining a strong blend of commercial and technical understanding in the application of consumer insight to help major brands improve digital marketing performance. During this time he also served for two years as a Director of the Web Analytics Association (now the Digital Analytics Association, or DAA) and currently serves as a Director Emeritus of the DAA. Neil is also a frequent speaker at conferences and events.
Neil's expertise ranges from advanced analytical techniques such as segmentation, predictive analytics, and modelling through to quantitative and qualitative customer research. Neil has a BA in Engineering from Cambridge University and an MBA and a postgraduate diploma in business and economic forecasting.