In my last column, I looked at some challenges around understanding the dynamics of marketing activity when using the standard attribution models provided in most Web analytics systems. Most of the time, Web analytics systems use a “last click” attribution model, which credits the last marketing touch point with the sale or conversion.
This can give a highly misleading view of the role that different channels play in the awareness-building and consideration phases of the purchase decision-making process. One approach to overcoming this challenge is to compare the various attribution models available in some Web analytics systems to understand the role that different channels play. An alternative approach is to use data feeds to extract the data from your Web analytics system into a database and to analyze it there.
The latter approach is more complicated and expensive, but more organizations are recognizing the benefit of being able to manipulate their Web data at will, rather than being constrained by the artificial analysis conventions inherent in most of the popular Web analytics systems. Extracting the data using data feeds or APIs (application programming interfaces) also has a number of other benefits. It allows you to integrate your data more easily with other sources, such as internal customer data or customer feedback data from systems such as surveys. It also allows you to create different views of your marketing activity that may be better suited to your business and help you gain better insight into how different aspects of your marketing campaigns are really working. One example is understanding search activity in more detail.
Typically, in a Web analytics tool, it’s possible to split visits that come from search engines into the two main types — “paid search,” which comes from sponsored links such as AdWords, and “organic search,” where the visit originated from the natural search engine results. This assumes that you have correctly “tagged” all your paid search campaigns so that they don’t get confused with natural search visits. Again, typically, you will then be able to report on the number of visits, etc., that come from the different types of search activity and on the keywords used in the search engine to reach the site.
You may also be able to break that down into individual campaigns or groups of keywords for your paid search activity, depending on how you have set up those campaigns and the level of detail in your tracking approach. When it comes to natural search activity, however, you’re typically left with reports that show either the amount of traffic from natural search in its entirety or the traffic from every single keyword someone has used. As a result, it can often be difficult to “see the wood for the trees.”
What’s needed is a way of being able to classify keywords into groups that gives a better picture of how visitors are using search to reach the site and the potential outcomes. This is about aggregating data to get rid of the noise and to see the general patterns more clearly. The approach here is to assign keywords to different groups based on their content and then to look at the data in the context of those groups, rather than the individual keywords themselves.
Let’s look at an example based on the financial services company I used last week. Let’s assume that the company MySite sells insurance policies online for different products such as car insurance, home insurance, and so on. Typically, looking at the search keywords reports in a Web analytics tool will show you that people reach the site through hundreds of different keywords. The problem is that it’s difficult to see any patterns in the data over time and to assess the effectiveness of different search strategies. For example, should more emphasis be placed on improving the results from search terms that talk about specific products? How much traffic from search engines is navigational as opposed to a true search?
By extracting the relevant data or reports from the Web analytics system, it’s possible to be able to manipulate the data in different ways. For example, you can classify keywords into different groups by looking for specific strings of text in the keyword. So, any term that has the string “MySite” in it can be classified as branded search, anything that has “car” or “car insurance” in it can be classified as a product search, and so on.
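The substring-matching approach described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the function name and the group labels are hypothetical, and the brand and product terms (“MySite,” “car”) are the examples from the text.

```python
def classify_keyword(keyword: str) -> str:
    """Assign a search keyword to a group based on substrings it contains.

    Matching is case-insensitive; the first rule that matches wins, so
    a branded term containing a product word is still classed as branded.
    """
    kw = keyword.lower()
    if "mysite" in kw:
        return "branded search"
    if "car" in kw or "home" in kw:
        return "product search"
    return "other search"


# Example: classify a handful of keywords extracted from the analytics system.
keywords = ["MySite car insurance", "cheap home insurance", "insurance quotes"]
groups = {kw: classify_keyword(kw) for kw in keywords}
```

Because the rules are ordered, the position of each check effectively sets its priority; in a real classifier you would typically keep the rules in a maintained lookup table rather than hard-coding them.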
Once the groupings have been created, they can be further manipulated to create new groups or concepts such as “branded product search,” “brand only search,” and “unbranded search.” The groupings can be as complex or as granular as you need them to be, but they also need to be managed and maintained to incorporate and include new search terms as they appear in the data. However, with a bit of work and by using the ability of other tools and technologies (such as text recognition) on data extracted from the Web analytics or campaign systems, it’s possible to generate insight that would otherwise be difficult to uncover.