When a brand picks a marketing team, why is CRO always on the bench?
At the end of 2015 I started a Twitter conversation with some of the world’s leading experts on conversion optimisation.
It all centred on the question ‘What is holding conversion optimisation back?’ and, as expected, it stirred more than a few interesting emotions in the CRO community.
With the frustration emanating from the furious fingers of my comrades, I thought it would be prudent to document these feelings and to somehow get to the bottom of why, despite the clear success stories and crazy ROI numbers, conversion optimisation wasn’t being given the respect it deserved from businesses of all shapes and sizes.
You can find out why 17 leading CRO experts think CRO isn’t being used properly (and what to do about it) in the book ‘A Story of Untapped Potential: The Growth Strategy That’s Being Ignored’ and in this article I will elaborate on some of the more common themes that popped up.
Despite the term CRO being around for a while now, there’s a major lack of understanding around what it consists of.
On a simple level, many think that conversion optimisation consists of running A/B or multivariate testing and nothing more.
This couldn’t be further from the truth.
As Michael Aagaard from Unbounce explains, ‘I think of testing as a final check to verify whether all the research I conducted was impactful’. For so many out there, testing is the be-all and end-all.
Because of this, tests are being run without any genuine rationale other than ‘instinct’ or ‘gut feelings’; without the why.
Inevitably, the massive gains that are read about don’t materialise and a cynical view of the effectiveness of CRO is developed, even though true conversion optimisation wasn’t actually practiced.
You have to be dedicated to finding the voice of the customer and base all tests on a variety of data sources if you want to achieve real impact.
Though data analysis can provide insight into what is happening (and is an essential component in optimisation), you need qualitative data to really understand why things are happening. There are a variety of research methods for gathering that qualitative insight: moderated user research, remote user research, polls, surveys and so on.
Sitting down with a business’s potential customers really helps you understand their desires and frustrations which, in turn, takes the guesswork out of your hypotheses for testing.
Then once you have your fully developed hypothesis (and only then), you can A/B test it to validate your idea.
Hopefully it will provide an increase in your primary conversion metric but if it doesn’t, you still have a stockpile of customer learnings to go back to; a luxury you wouldn’t have with blind testing.
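For the more hands-on reader, the ‘validation’ step ultimately comes down to a simple statistical check on the test results. Here is a minimal sketch using a standard two-proportion z-test; the visitor and conversion numbers are entirely hypothetical, and in practice you would use a proper stats library or your testing tool’s built-in analysis.

```python
from math import sqrt, erf

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    # two-tailed p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: control converts 200/10,000; variant converts 260/10,000
z, p = ab_test_significance(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 → significant at the 95% level
```

A significant result tells you the difference is unlikely to be noise; only the research behind the hypothesis tells you why it happened.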
This is because, essentially, conversion optimisation is about gaining knowledge of your customers: what makes them tick and what turns them off. Armed with this, you can start to ‘optimise’ your whole online experience, not just one button.
This is what leads to continuous business growth and leads on to my next point…
What really prevents businesses from feeling the impact of conversion optimisation can be drilled down to the lack of a defined process.
Some would think the worst case scenario would be testing on a whim; seeing something online and deciding it needs a change.
What I would argue is worse is when in-house teams genuinely believe they have a fool-proof process which churns out dozens of tests a month, only to find (when you examine the results) that they’ve had no effect at all.
Conversion optimisation can only be truly successful if it is underpinned by a comprehensive methodology.
When it is presented as a shiny new thing, it is hard to resist the temptation of getting it out of the box and playing with it straight away.
As Craig Sullivan explains, ‘testing requires a maturity…it also requires discipline’. It may not sound as much fun at first but the results will speak for themselves.
This is why the average test success rate of conversion optimisation agencies is closer to 60% compared to in-house teams’ rate of 30% – because agencies have optimisation methodologies.
They come in all shapes and sizes, but the common denominators are: identifying problems and testing opportunities, and prioritising them; creating a hypothesis for each test (why you’re testing); building the test (where you can get really creative and employ persuasion techniques and psychology); running the test; measuring and analysing the results; and then feeding the learnings from that test into your continued optimisation efforts.
How many in-house teams have a testing process that follows these steps?
Though discipline may have a negative association for some, in reality it allows a team to stay channelled and focused in one direction, rather than ‘burning rubber doughnuts in the car park of split testing’.
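To make the prioritisation step in that methodology concrete, many teams score test ideas on something like an ICE framework (impact × confidence × ease). This is an illustrative sketch, not a process the experts above prescribe; the ideas and scores are invented for the example.

```python
# Hypothetical backlog of test ideas, scored 1-10 on each ICE dimension.
test_ideas = [
    {"name": "Simplify checkout form", "impact": 8, "confidence": 7, "ease": 5},
    {"name": "Rewrite hero headline",  "impact": 6, "confidence": 5, "ease": 9},
    {"name": "Add trust badges",       "impact": 4, "confidence": 6, "ease": 8},
]

def ice_score(idea):
    """Combine the three dimensions into a single priority score."""
    return idea["impact"] * idea["confidence"] * idea["ease"]

# Work the backlog from highest score to lowest.
for idea in sorted(test_ideas, key=ice_score, reverse=True):
    print(f'{idea["name"]}: {ice_score(idea)}')
```

The exact scoring model matters less than having one at all: a shared, written-down ranking is what stops a team from ‘burning rubber doughnuts in the car park of split testing’.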
How many times have you heard a business describe themselves as “customer-centric” or describe their service as one with the customer in mind?
It’s a term that is bandied about by many businesses but, in reality, very few actually are. Yes, a true conversion optimisation programme can make a business customer-centric; you only need look at the success of Shop Direct Group and AO.com to see that.
But excluding the well-known examples, how many businesses are not testing or conducting any form of solid customer research, but are still making such big claims?
I’m sure you’ve seen the stat from Bain &amp; Company which says 80% of CEOs believe they deliver a superior customer experience whilst only 8% of their customers agree – proof of how deluded so many businesses are.
As I stated in the book, this is because there is a huge disconnect between what businesses say they are focusing on and what they are actually doing on the ground.
The reason Shop Direct Group have had so much success is that their investment matched their mission statement; they didn’t just pay lip service to putting the customer first.
They radically shook up their internal structure to focus on improving the online experience in every facet, built an in-house usability lab and continue to invest heavily in personalisation.
On the flip side, many businesses ‘play’ at optimisation and seem content to live in ignorance.
Maybe they are content to just invest in a testing tool for a few members of staff to manage, rather than the C-suite shaking up the internal structure and processes.
Maybe they believe analysing some data and using that as a basis for testing means they ‘understand’ the customer, rather than utilising a variety of research methods to gain qualitative and quantitative insights.
Maybe they prioritise the number of tests they run, or cramming 50 variations into an MVT, rather than investing more time and resource into two or three fully developed tests that will really change consumer behaviour.
Breaking it down like that, it’s easy to see why there’s the huge gulf between C-suite opinion and customer opinion.
Intelligent, insight-driven, transformative optimisation that is believed in and invested in can make a business truly customer-centric, rather than it just being a buzzword thrown around in board meetings.
These points just cover a small part of the larger conversation as to why conversion optimisation is a neglected growth strategy.
Hopefully by highlighting some of the issues and presenting some solutions, conversion optimisation will be truly recognised as the biggest growth lever for businesses the world over (and will finally get a chance in the starting line-up).