It was one of those moments. I was working on a client’s data and began to suspect something was wrong. Not with the client’s business, but with the data itself. The potential business implications were significant. As I dug deeper and deeper into the issue, I got that sinking feeling. Something was seriously wrong.
The business in question was looking to aggressively improve its digital channel’s effectiveness, focusing on conversion optimization as traffic levels were quite buoyant. It had implemented a satisfaction tracking survey to understand visitor intent and satisfaction. It also had commissioned usability testing to understand the user experience in more detail and had started a testing and experimentation program. It was all the right stuff.
But there wasn’t any strong evidence that conversion was actually improving, so the business needed a deeper dive into the data to find out what was happening. That’s when the problem emerged. Without going into the gruesome details, I discovered the business’s conversion rate was being underestimated and that the degree of underestimation had been getting worse over time. The historical data from the Web analytics tool was wrong on some key metrics, though there was a silver lining: the actual conversion rate was better than previously thought.
The really bad news was that the business had probably been focusing on the wrong problem. While all the activity on conversion optimization was good stuff, the revised data highlighted that other issues may have been more pressing. Worse, the data’s credibility was seriously undermined and, to some extent, the team’s credibility was compromised as well. For conversion optimization, it was a case of one step forward, two steps back.
We marketers must get the right numbers right and keep them right. When it comes to marketing optimization, good-quality data is a core component, and getting data that supports better decision making is a key step on the journey. That might seem like an obvious statement, but it’s not a process that should be underestimated, nor is it a one-off setup event.
When a new system is implemented, there’s inevitably a focus on the data it’s generating. That information is then reconciled against other data sources. That’s great, but those reviews must be repeated at regular intervals to ensure data integrity remains high. If this isn’t a managed, ongoing process, the data’s integrity may decline over time until something happens that causes the data to be questioned. Then it might be too late.
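A regular reconciliation can be as simple as comparing each key metric against a trusted source and flagging anything that drifts beyond an agreed tolerance. Here’s a minimal sketch in Python; the metric names, counts, and the 5 percent threshold are illustrative assumptions, not figures from any real audit:

```python
def reconcile(analytics_counts, backend_counts, tolerance=0.05):
    """Compare two metric sources and flag divergences beyond tolerance.

    analytics_counts / backend_counts: dicts mapping metric name -> count.
    tolerance: allowed relative difference (5% here is an assumption;
    choose a threshold appropriate to your own data).
    Returns a list of (metric, relative_difference) pairs that breach it.
    """
    breaches = []
    for metric, backend in backend_counts.items():
        measured = analytics_counts.get(metric, 0)
        if backend == 0:
            continue  # avoid dividing by zero for empty metrics
        diff = abs(measured - backend) / backend
        if diff > tolerance:
            breaches.append((metric, round(diff, 3)))
    return breaches

# Hypothetical month: the analytics tool undercounts orders by 12 percent.
analytics = {"visits": 98500, "orders": 1760}
backend = {"visits": 100000, "orders": 2000}
print(reconcile(analytics, backend))  # -> [('orders', 0.12)]
```

Run monthly, a check like this turns data integrity from a one-off project into a routine, and a breach becomes a prompt to investigate before anyone makes decisions on bad numbers.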
Managing data integrity is a messy job, but someone has to do it. Good processes will certainly help ensure that all pages get tagged, campaigns are tracked properly, and so on. Technology is available that can check whether tags are present on the site and help with other tag management challenges. One must also have a keen eye to look at the data for trends and patterns that may not be a true reflection of what’s going on. I think this is a skill that can be learned.
A good marketing analyst can sense when something doesn’t look right. In my experience, if something looks odd, it probably is odd and isn’t real behavior. Sudden changes in trends, steps in the data, spikes, and dips are all potentially symptomatic of artificial impacts on data. If they can’t be explained by real-world events, it’s worth digging into the data to see if there’s anything untoward happening, like changes to the tool’s configuration, new site monitoring tools being put in place, changes to the hosting environment, and so on.
Getting good data integrity is not a one-off event. It’s an ongoing process. Be wary of the potential impact that changes to your site or tracking environment will have on your data and plan accordingly. Take time to reconcile your data on a regular basis to see if there are any divergent trends. With these basic processes in place, you might avoid that sinking feeling at some point in the future.
Today’s column originally ran on June 24, 2008.