Real customer loyalty doesn’t come from loyalty programs. It comes from great user experiences. Reward programs, on the other hand, are used as incentives for users to transact with your company again based on some type of reward.
Today, we continue with fixed and variable ratio schedules. These rewards are highly effective and can be applied to all types of businesses, online and offline.
Fixed/Variable Ratio Schedules
Unlike a continuous reward schedule, a ratio schedule doesn’t reward a user every time. It acts like a counter, rewarding the user either after a fixed number of interactions or after a variable number of interactions that averages out to a constant.
Ratio schedules that operate on an exact number (e.g., every 10 purchases) are called fixed ratio schedules. Punch cards are a good example. At Subway, a card is punched (or stamped) with each qualifying purchase. Once you have the required number of punches, you get a free sandwich. My favorite pizza place in college had a similar card: purchase 10 slices, get one free.
Ratio schedules operating on a fuzzy number are called variable ratio schedules. A lottery is one example. Buy a ticket and the chance of winning may be one in a million. Even if the odds were much better (e.g., 1 in 20), there’s no guarantee you’ll win on exactly every 20th ticket. But over time, the wins still average out to 1 in 20.
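The difference between the two schedules is easy to sketch in code. Below is a minimal, illustrative simulation (the specific numbers are assumptions for the example: a reward on every 10th purchase for the fixed schedule, and a 1-in-10 chance per purchase for the variable one):

```python
import random

def fixed_ratio(n_purchases, every=10):
    """Fixed ratio: reward on every `every`-th purchase (like a 10-slice punch card)."""
    return [purchase % every == 0 for purchase in range(1, n_purchases + 1)]

def variable_ratio(n_purchases, average=10, seed=42):
    """Variable ratio: each purchase wins with probability 1/average, so rewards
    arrive unpredictably but average out to one per `average` purchases."""
    rng = random.Random(seed)
    return [rng.random() < 1 / average for _ in range(n_purchases)]

fixed = fixed_ratio(1000)
variable = variable_ratio(1000)
print(sum(fixed))     # exactly 100 rewards: one per 10 purchases, like clockwork
print(sum(variable))  # roughly 100 rewards, but spaced unpredictably
```

The totals converge, which is the point: the two schedules cost the business about the same, but the user's experience of them (and, as discussed below, their frequency and decay behavior) differs sharply.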
Many businesses run mini-lotteries. Some stores give out scratch cards with every purchase that offer a chance to win merchandise. McDonald’s used to do this with its Monopoly program (each game piece might or might not contain a coupon). Soda companies’ bottle caps are a type of scratch card for sweepstakes. The message under the cap may be “You’re a winner” or “Sorry. Please try again.”
Vinny T’s, a restaurant in the Boston area, has a terrific approach. At the end of every meal, your waitperson comes to your table with a bottle filled with numbered balls. If the ball that comes out of the bottle matches your table number, you get your meal free.
How effective are these schedules? Let’s weigh them against our reward schedule measurements: learning curve, frequency, and decay.
Fixed Ratio Schedules
- Learning curve: medium-high
- Frequency: high, but scalloped
- Decay: medium
The learning curve, or the time it takes for the user’s behavior to adjust based on the reward schedule, is relatively fast. It’s not as fast as a continuous reward schedule because the reward takes a bit of time before it happens. This slows user acknowledgement of the schedule.
Frequency is also pretty good. We prefer to shop where we know our interaction counts toward something greater. A scalloped frequency means frequency increases as the reward gets closer, then drops once the reward is given. Plotted on a graph, the frequency curve would have a lot of scallops in it.
These systems should reward users relatively frequently. Otherwise, you risk a huge traffic slowdown among people who either are just starting a punch card or were just rewarded. What if Subway’s promotion were “Buy 75 Sandwiches, Get One Free”? The reward is so small compared to the effort it takes to earn it that the entire program would be useless. Yes, interaction would increase as a user hits the 70 mark (only five away), but interest would wane significantly as that user faces the daunting task of starting over at one. For these programs to be effective, the reward must come relatively quickly, commensurate with its size.
This schedule’s decay rate is pretty good. Because users aren’t used to constantly receiving rewards, they don’t mind so much if the reward schedule is removed. In the Subway example, I’m used to paying for my meal 12 times out of 13. So the next 12 times I purchase something after the reward schedule has been lifted, my experience is no different. It’s not until the 13th time I feel the difference: I paid for that meal instead of getting it free. Big deal. My shock and alienation aren’t great enough to turn me off Subway.
Variable Ratio Schedules
A variable ratio schedule is a close cousin to its fixed counterpart. The metrics, however, are significantly different. Let’s take a look:
- Learning curve: low-medium
- Frequency: high, consistent
- Decay: very slow
Because this reward system isn’t as determinate, users don’t expect as much. We play the lottery even though the chances we’ll win are slim. The lottery works not only because of its variable ratio schedule but also because the reward is so much greater than the cost of entry. If the lottery prize were $50 and it cost $20 to enter, you’d never play (particularly if the odds were the same).
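The arithmetic behind that claim can be made explicit. Using the hypothetical figures from the example above ($50 prize, $20 ticket, 1-in-20 odds), the expected value of a ticket is deeply negative:

```python
# Hypothetical figures from the example above: $50 prize, $20 ticket, 1-in-20 odds.
prize, cost, odds = 50, 20, 1 / 20

# Expected value per ticket: average winnings minus what you paid to play.
expected_value = prize * odds - cost
print(expected_value)  # -17.5: on average, you lose $17.50 per ticket
```

Real lotteries have negative expected value too, of course, but the prize is so disproportionately large relative to the ticket price that the variable schedule keeps people playing anyway.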
A relatively slow learning curve is the only major drawback with this reward schedule. This isn’t the kind of system that provides an instant sales upswing (as a continuous reward system does). But it will generate loyal users over time. They’ll stick with you long after the reward is removed.
This schedule’s frequency rate is high. Because the user doesn’t know when the reward will come, he’ll happily transact with your company repeatedly, waiting for it to happen. Whereas the fixed version shows a scalloped behavior, the variable version doesn’t. That’s because another reward could come immediately after the first reward was given. There’s no strict waiting period between rewards.
The best part of this schedule is its decay rate. Decay is very slow (particularly compared to the fixed version and the continuous reward schedule). With a continuous reward schedule, decay is near-instantaneous: the minute the reward stops, people stop interacting with the company. A seemingly small change to the schedule (e.g., 10% off every X purchases instead of 10% off every purchase) can drastically slow customer decay.
As interactive marketers, we must attract and keep users. When Amazon.com offered free shipping in the late ’90s, every online merchant followed suit within a week. Is that the best idea? What if, instead of copying the reward, these companies had offered a promotion that operated on a fixed or variable ratio schedule instead of a continuous one? They would have seen a smaller immediate upswing in sales.
In a world of short-term growth, this would have seemed like a less successful program. Long term, it might have been a wiser choice. Eventually, the cost of free shipping was too high. Companies discontinued the program (later to reintroduce it with many more restrictions).
Because of the drastic difference in decay rates, removing a continuous reward would have been much more detrimental than removing a ratio-based reward. I suspect Amazon kept its free shipping as long as it did to see how long its competitors would last. Amazon had more money and could sacrifice its margins much longer than its competition. If that’s the case, Amazon would have used a continuous reward schedule (assuming everyone else would follow suit) just to squeeze out competition that couldn’t afford to maintain the promotion.
Next: interval rewards. Then the most interesting aspect of these schedules: how they affect each other, and how to use them to compete. Understanding this will enable you to create programs that provide a competitive advantage without sacrificing margins.
These theories are works in progress. Tell me your thoughts, and share any case studies that prove or disprove them.
Until next time…