Lessons Learned From OKCupid and Facebook: The 7 Hows and Whens of Experimentation

There is nothing wrong with experimenting and testing – in fact, it's extremely valuable. But remember to think about how and when you go about experimenting with your consumers.

Perhaps by now OKCupid is regretting the rather provocative blog title of “We Experiment on Human Beings!” The post deliberately added fuel to the social experiment fire kindled by Facebook’s much maligned “emotional contagion” research.

The furor began when Facebook published results from two parallel studies conducted on 689,003 unwitting subjects in the “Proceedings of the National Academy of Sciences of the United States of America.” In the psychological study, Facebook researchers sought to establish whether users’ moods were affected by the posting behavior of their friends. The less-than-earth-shattering conclusion was that there is indeed a correlation between the positivity of your own posts and those of your network. When the press got wind of the study, it created outrage amongst commentators and consumers alike. Even its author, Adam Kramer, ended up admitting on his Facebook page that “In hindsight, the research benefits of the paper may not have justified all of this anxiety.”

Not to be outdone, OKCupid weighed in on the debate – and in Facebook’s defense – on their blog shortly after. Their post said they had noticed recently “that people didn’t like it when Facebook ‘experimented’ with their news feed.” They argued that experimenting is fine because everyone does it all the time – just without you knowing – and that without it websites wouldn’t work. They then went on to give three examples of experiments they have secretly conducted in the past.

The first two were relatively innocuous and haven’t caused much contention. One involved removing profile pictures from their site for “Love Is Blind” day, and the other involved hiding text from a group of profiles for half the time to see how they were rated with and without. The conclusions were that people are as shallow as technology allows them to be, and that apparently online your looks and personality are pretty much the same thing.

The third, more controversial, experiment took pairs of bad matches and told them that they were actually exceptionally good for each other. It turns out that if OKCupid tells people that they are a good match, then they act as if they are. If they are genuinely a good match, but OKCupid tells them they are not, things are less likely to work out. Shocker. Even less surprising, it turns out that the ideal situation is to be told that you are a great match and actually be one.

The reaction to the experiments detailed in OKCupid’s blog has not been as indignant as the maelstrom that was created by Facebook’s manipulation of news feeds, but news sources are reporting that the tests may still have violated FTC rules on deceptive practices.

As marketers, the lesson we must learn from the negative press reactions and the ensuing consumer backlash these two companies have faced is that, as an industry, we have a responsibility to our clients to advise them ethically on how and when to use consumer data for testing purposes.

OKCupid was not wrong in admitting to experimentation. We all know and understand that in order to stay relevant and connected to current and prospective consumers you have to experiment with them and with the ways in which you connect with them, connect them to each other, and the content you serve them. Where OKCupid fell down was in the how and the when.

At POSSIBLE our entire philosophy stems from data, analysis, and testing. We call this philosophy “Does It Work?” and it’s a question that informs everything we do, including how and when we go about testing. With every project, we ask: Did we accomplish our goal? Did the work evoke an emotion? Was it easy to use? Ultimately, did it create action? Sounds simple, but like most simple things it’s awfully easy to get wrong.

There are three basic rules for how to test that can prevent a lot of problems:

1. Be Transparent

A data-usage agreement at registration is not a waiver to do what you like to your users; Facebook’s premise that this creates informed consent is absurd. Who has ever read one of these? OKCupid’s argument that people behave differently when they know they are being tested on is not a sound defense either. Make your intentions known before, during, and after you test. Keep your consumers informed.

2. Give Consumers Control

You can invite users to beta test, prototype test, and be part of a focus group without giving away the exact functionality you are analyzing. Once someone has agreed to be tested on, there is a mutual understanding that their experience will be tampered with and altered. It is hard to complain about being tested on when you know you agreed to it in the first place.

3. Show Value

After performing a test, show what you have learned and how you plan to make your product or experience better as a result of that knowledge. If you can’t show value for the test you have performed, then you are probably not getting the when of testing right and shouldn’t have done it in the first place. Facebook’s “experiment” was supposedly academic and not tied to product improvement, but even academically the author has said that the benefits weren’t worth it. Whilst OKCupid has belatedly been transparent, it is hard to see what value their tests created for their users other than to reaffirm that their algorithms work.

Both Facebook and OKCupid transgressed all three of these rules. They didn’t communicate to users that tests were taking place, they didn’t allow users to opt in, and they didn’t provide any clear enhanced value as a result of the tests.

The question of “when” to test is just as important as the “how.” There isn’t a single “when” either – it should be a relentless, iterative process. This ongoing process can be divided into four key phases, all of which help us to answer “Does It Work?”:

1. Is It Likely to Work?

This is what you need to ask yourself right from the word go. You can do this by performing social sentiment analysis and research in the nascent stages of a project. This can eliminate a whole load of problems before they have even materialized – such as understanding the likelihood of a subject being positively received by your audience. Facebook and OKCupid could have tried this with the topics they planned to test. It really is very simple to do, but it is so often – and wrongly – skipped to save time. The next step is user testing, surveys, or sometimes focus groups to corroborate what you have learned from your initial research.
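
If you want a quick, low-cost read on sentiment before committing to a test, a few lines of Python are enough to get started. The sketch below is purely illustrative – it assumes you have already pulled some public posts or comments about your topic, and it uses NLTK's off-the-shelf VADER analyzer; swap in whatever social listening tool your team already relies on.

# A minimal sketch of a pre-launch sentiment check. Assumes you have already
# collected sample posts about the topic; requires nltk and a one-off download
# of the VADER lexicon.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

posts = [
    "Love the new matching feature, it actually works",
    "Stop experimenting on users without telling them",
    "Not sure how I feel about my feed being manipulated",
]

analyzer = SentimentIntensityAnalyzer()
scores = [analyzer.polarity_scores(post)["compound"] for post in posts]
average = sum(scores) / len(scores)

# Compound scores run from -1 (very negative) to +1 (very positive). A strongly
# negative average is an early warning that the subject itself may not be well
# received, before any live test is run.
print(f"Average sentiment: {average:.2f}")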

2. Is It Working?

Once the project, product, or campaign is live, ask “Is it working?” and analyze data to check. If the results of your analysis pose questions that require answers, you can further experiment with new tests as long as you go about them in a transparent manner.
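
When you do run one of those follow-up tests, the analysis itself can be very simple. A minimal sketch, assuming a transparent A/B test where you have conversion counts for a control group and a variant (the numbers here are hypothetical), might look like this:

# Compare conversion rates between a control and a variant using a
# chi-squared test on the 2x2 contingency table.
from scipy.stats import chi2_contingency

control_converted, control_total = 310, 10_000   # hypothetical counts
variant_converted, variant_total = 365, 10_000

table = [
    [control_converted, control_total - control_converted],
    [variant_converted, variant_total - variant_converted],
]

chi2, p_value, dof, expected = chi2_contingency(table)

# A small p-value suggests the difference is unlikely to be noise;
# a large one means keep testing before acting on the result.
print(f"Control rate: {control_converted / control_total:.2%}")
print(f"Variant rate: {variant_converted / variant_total:.2%}")
print(f"p-value:      {p_value:.3f}")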

3. How Can We Make It Better?

This goes back to showing value. Consumers will understand and forgive a lot if there is a visible improvement in the service or communication as a result of your experimentation. As an industry we should never stop asking ourselves “How can we make it better?”, and never lose sight of the fact that this should be our goal.

4. For Everybody?

This isn’t “for the majority”; this is about making your product better for everyone. We have a duty to give all of our users and consumers the best possible experience, something that OKCupid and Facebook may have lost sight of along the way. Segmentation analysis can show whether you are staying true to this, whether that means delivering an exceptional mobile experience, offering a product in a number of languages, or making sure that you follow the most stringent accessibility guidelines.
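
As a rough illustration of what that segmentation check can look like in practice, here is a minimal sketch in pandas. The table, column names, and segments are hypothetical – the point is simply to compare the same metric across every segment rather than reporting one blended number.

# Compare a single metric (here, a conversion flag) across user segments to
# spot groups the experience is failing. Data and column names are invented.
import pandas as pd

events = pd.DataFrame({
    "segment":   ["desktop", "desktop", "mobile", "mobile", "mobile", "tablet"],
    "converted": [1, 0, 0, 0, 1, 0],
})

by_segment = events.groupby("segment")["converted"].agg(["count", "mean"])
by_segment.columns = ["visits", "conversion_rate"]

# A segment that lags badly here is a sign the product is working "for the
# majority" rather than for everybody.
print(by_segment.sort_values("conversion_rate"))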

On the scale of falling foul of these guidelines, OKCupid isn’t quite in the Facebook league of disobeying pretty much all of them, but they still could have averted a lot of the current hostility they are facing if they had asked more of these questions earlier in the process. At the end of the day, they lied to some of their audience and most likely got a whole load of mismatched people, who otherwise would not have bothered, to interact and even go on dates, with no obvious end benefit to those users.

The final thing to remember about testing is not to be afraid of failure. Even if you follow guidelines and best practices, your experiments won’t always work – otherwise they wouldn’t be experiments. As OKCupid said on its blog, “Most ideas are bad. Even good ideas could be better. Experiments are how you sort all of this out.”

Don’t be afraid to try new formats and to invest in new platforms to keep up with the networks and brands that distribute and host your content. Here are two parting ideas for where to experiment. The first is Facebook’s Unpublished Page Posts, which were introduced only last year to let brands serve tailored content to different segments of their audience, but which now account for 50.1 percent of worldwide ad spend on the platform:

[Chart: Share of Facebook advertising]

The second is to experiment with increasing your spend on mobile, even if desktop is still outperforming it on current ROI and last-click attribution. eMarketer predicts that 2014 will be the first year that desktop ad spend declines, whilst mobile continues to rocket.

[Chart: US digital ad spend growth]

Industry experts at Google are actively encouraging clients to pay less heed to cost-per-click (CPC) and are often recommending that you put as much as 20 percent of your digital ad spend into testing different formats on mobile. This will allow you to discover what works with your consumers on the platform. Try analyzing your overall weekly/monthly conversion rates as you perform different tests, rather than focusing on the direct effect of each tactic.
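
A minimal sketch of that kind of trend check, assuming nothing more than a log with a timestamp and a conversion flag (both hypothetical), might look like this:

# Track the overall weekly conversion rate across a period of mobile format
# tests, rather than judging each tactic on its own last click.
import pandas as pd

log = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2014-06-02", "2014-06-04", "2014-06-11", "2014-06-12",
        "2014-06-18", "2014-06-20", "2014-06-25", "2014-06-27",
    ]),
    "converted": [0, 1, 0, 0, 1, 1, 0, 1],
})

weekly = (
    log.set_index("timestamp")
       .resample("W")["converted"]
       .agg(["count", "mean"])
       .rename(columns={"count": "visits", "mean": "conversion_rate"})
)

# Watch whether the overall trend moves while different formats are trialled,
# instead of attributing everything to the most recent tactic.
print(weekly)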

These are just two ideas to consider today; there will be new things to experiment with tomorrow. So please try lots of new tactics and invest in new platforms. Just remember to think about how and when you go about experimenting with your consumers. If you ask yourselves the right questions at the right times, then counter to what JFK believed, you might not need to fail quite so miserably to achieve greatly.
