Defining “good” traffic should be straightforward, and of course publishers want the traffic they generate to be legitimate, just as marketers want to purchase legitimate traffic. Good traffic, after all, comes from real human beings visiting real sites, consuming real content, and taking real actions. But what does it mean when we say traffic is not good? It’s an important question for us in the ad-tech industry.
How well we understand the nuances matters if we are to filter the bad from the good. Less bad traffic means better economics for good traffic. In other words, as we expunge the “bad,” we are left with a smaller pool of higher-quality “good” inventory, which means more revenue potential per page.
In order to get to more of the good we have to “off-ramp” the bad.
Off-Ramp 1: Websites That Steal Content From the Rightful Owner.
A gateway drug to outright fraudulent behavior is content theft. Bad actors need content to populate their sites, and they won’t or don’t create it themselves. It’s far easier to simply steal it. Remember, fraudsters are criminals. Therefore, the first off-ramp is misappropriated content. While this isn’t what most people might think of as “traffic fraud,” it still warrants pointing out as definitely not good.
Eliminating these sorts of websites takes a combination of human judgment, some manual work, and the leverage of third-party technologies. We need better automated solutions. Finding duplicate copies of text is only a few Web searches away; plagiarized images, audio files, and videos are much harder to track down, but the problem is still solvable thanks to digital fingerprinting. This work isn’t easy and at times can be tedious – but it’s work that needs doing nonetheless.
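The text side of this fingerprinting can be surprisingly simple. Here is a minimal sketch using word "shingles" and Jaccard similarity to score how much two passages overlap; it is illustrative only – production systems use more robust techniques such as MinHash for text and perceptual hashes for images, audio, and video.

```python
"""Illustrative sketch of text fingerprinting for duplicate-content
detection. Shingling + Jaccard similarity is a simplified stand-in for
the fingerprinting tools the article alludes to."""

def shingles(text: str, k: int = 5) -> set:
    # Break text into overlapping k-word "shingles" (fingerprint units).
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str, k: int = 5) -> float:
    # Jaccard similarity of the two shingle sets: 1.0 means identical.
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

A high similarity score between a known publisher's page and a page on an unfamiliar domain is a strong signal that the content was lifted wholesale.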
Off-Ramp 2: Non-Human Actions.
This is the core of what most people refer to when they talk about fraud in online advertising. Simply put, this is any action that masquerades as human activity in order to trick advertisers into buying or valuing an impression. Non-human actions include fake pageviews, clicks, mouse-overs, video plays, items placed in shopping carts, Web forms filled out, pages scrolled, and so on. Non-human activity can come from automated robots living either in server farms or on malware-infected PCs, and both are designed with some degree of sophistication to mimic real human behaviors.
I also include the act of “spoofing” a cookie. This is the non-human act of creating or manipulating a desired audience profile in order to distort the decisions a marketer makes about the value of a particular reader. Cookie profiles are a summary of data collected on an individual reader – and therefore represent a key piece of the value judgment marketers make when deciding where, when, and how much to pay for an ad impression; smart bad guys know this.
Discovering and blocking these non-human types of fraud is an ongoing battle between very creative and technically sophisticated people on either side of the equation. Collaboration among white-hat hackers, plus the leverage of fingerprinting, automated blacklisting, and other proprietary real-time behavior-sensing techniques, is how this is most often combated.
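To make the behavior-sensing idea concrete, here is a hypothetical rule-based filter. The blacklist, field names, and thresholds are my own illustrative assumptions, not an industry standard; real systems combine many more signals and score them probabilistically.

```python
"""Hypothetical sketch of rule-based non-human traffic filtering.
The IP blacklist, session fields, and thresholds are illustrative
assumptions only."""

BLACKLISTED_IPS = {"203.0.113.7"}  # example datacenter IP (TEST-NET range)

def is_suspicious(session: dict) -> bool:
    # Known bad IPs (e.g., server farms) are flagged outright.
    if session["ip"] in BLACKLISTED_IPS:
        return True
    # Humans rarely sustain more than ~3 clicks per second.
    duration = max(session["duration_s"], 1)
    if session["clicks"] / duration > 3:
        return True
    # Clicks with zero accompanying mouse movement are a classic bot tell.
    if session["clicks"] > 0 and session["mouse_events"] == 0:
        return True
    return False
```

The point of the sketch is the shape of the defense: each rule encodes one behavioral difference between a robot and a person, and the rules are cheap to evaluate in real time.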
Off-Ramp 3: Low or No Viewability.
A lot of debate centers on the definition of “viewability.” Basically, it’s the likelihood a message will be seen by a real person. Simply put: If there is little-to-no chance of something being seen, then it’s clearly not “good” traffic. If that low viewability is intentional, then it should be considered stealing.
Measuring viewability is tricky and requires some forensic analysis of the page. It’s not as simple as above or below the fold. Some of this analysis is done post-delivery of the page and message. The reactive nature of this analysis simply means that by the second page load, a clear determination can be made to score the likelihood of a human being able to view it. Several vendors specialize in this sort of analysis, but I’d like to see the browser companies (Apple/Safari, Google/Chrome, Microsoft/Internet Explorer, Mozilla/Firefox) step up and put a fork in this issue once and for all. They can “see” top-down what gets viewed, and this ought to be a signal the browsers make available to the advertising world.
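The geometric core of the analysis can be sketched simply: given the ad's rectangle and the viewport's rectangle in page coordinates, compute what fraction of the ad's pixels fall in view. The 50% threshold below mirrors the commonly cited display-viewability standard; a real measurement also requires time in view and must detect hidden, stacked, or zero-opacity placements, which is exactly why this is forensic work rather than simple geometry.

```python
"""Sketch of the geometric part of viewability scoring. Rectangles are
(left, top, right, bottom) in page coordinates; the 50% threshold is the
commonly cited display standard, used here as an assumption."""

def visible_fraction(ad, viewport):
    # Intersect the two rectangles.
    left = max(ad[0], viewport[0])
    top = max(ad[1], viewport[1])
    right = min(ad[2], viewport[2])
    bottom = min(ad[3], viewport[3])
    if right <= left or bottom <= top:
        return 0.0  # no overlap: the ad is entirely out of view
    overlap = (right - left) * (bottom - top)
    area = (ad[2] - ad[0]) * (ad[3] - ad[1])
    return overlap / area

def is_viewable(ad, viewport, threshold=0.5):
    # At least `threshold` of the ad's pixels must be inside the viewport.
    return visible_fraction(ad, viewport) >= threshold
```

A browser already has all of these coordinates natively, which is why a browser-supplied viewability signal would settle the question more cleanly than after-the-fact page forensics.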
Off-Ramp 4: Obfuscating the Page URL.
A key signal in placing any ad is where, or in what context, the ad is going to show up. With the rise of audience buying, this might be less important to some advertisers, but I would submit that ALL advertisers deserve to know where their ads are landing. When a page URL is either unknown or, worse, intentionally changed or obfuscated, it at a minimum breaches trust and can go as far as being outright fraud. Simply put: A bad actor knows that a known URL is a measure of trust – so why not make sure all your bad URLs are “laundered” so they appear to be good?
Knowing and passing the URL to an advertiser or their agent ought to be a requirement in online advertising. There is no fundamental technical reason that the URL gets dropped from the information transfer. Sometimes it’s hard to determine or a well-meaning ad server technology messes it up inadvertently, but those aren’t valid excuses, just excuses.
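A buyer-side sanity check on that requirement might look like the following sketch. The field names are my own illustrative assumptions, loosely modeled on an OpenRTB-style bid request; the rule is simply: reject requests with no page URL, and reject URLs whose domain does not match the seller's declared domain.

```python
"""Hypothetical bid-request sanity check. Field names ("page_url",
"declared_domain") are illustrative assumptions, not a real spec."""
from urllib.parse import urlparse

def url_is_trustworthy(bid_request: dict) -> bool:
    url = bid_request.get("page_url")
    if not url:
        return False  # missing URL: reject rather than guess
    host = urlparse(url).hostname or ""
    declared = bid_request.get("declared_domain", "")
    if not declared:
        return False
    # Laundered URLs often resolve somewhere other than the declared site.
    return host == declared or host.endswith("." + declared)
```

Enforcing a check like this on the buy side turns "pass the URL" from a courtesy into an economic requirement: requests that omit or launder the URL simply never win the auction.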
Each “off ramp” above is clearly addressable. There is no reason the industry at large can’t apply a set of best practices to each category and stamp out the bad traffic in favor of the good. If we all do this, then the results benefit everyone. Good publishers get better economics. Marketers get higher quality and better performing investments. And bad guys get the shaft.