The relationship between the MarTech industry and artificial intelligence (AI) is basically a sci-fi romance right now, with the entire industry delighted by robots in much the same way Joaquin Phoenix fell for his operating system in the movie Her. Amidst predictions that AI could soon generate $3.5-$5.8 trillion in annual value across industries from airlines to ecommerce, what’s not to love?
Well, according to consumers, plenty. The average consumer’s perception of AI seems to range somewhere from thinking it’s “creepy” to fearing that children’s playthings are spies.
Yet, paradoxically, consumers actually seem to enjoy the ways that AI improves their communication with brands. Most are pretty comfortable talking to Alexa and even prefer a chatbot on Facebook Messenger if it means faster, more effective service.
This strange juxtaposition of suspicion and acceptance could have more to do with consumers’ lack of confidence in brands than it does their fear of robots, according to Stacy Simpson, chief marketing officer for Genpact.
“AI is a game-changer when it comes to improving the customer experience, yet real challenges remain regarding trust and privacy,” Simpson says. “One critical factor is visibility into how AI makes decisions. Brands need to track and explain the logic behind those decisions. Another key factor is data protection. How is the company protecting consumers’ data? Transparency in both of these areas has a dramatic impact on consumer comfort level.”
Right now, there’s a big discrepancy between the ways the public thinks AI works and how AI actually works. And it’s up to companies using AI technology to alleviate that confusion.
Many consumers are still confused about the definition of AI
Recent studies show that there’s a huge learning curve around consumers and AI. One study by Pega found that 72% of consumers across six countries said they understood AI, but only 41% actually knew that Alexa and Google Home relied on it.
In a similar survey, Genpact found that consumers also underestimate the effect that AI has on their lives, according to Simpson.
“Even with explosive growth of home digital assistants, chatbots, smart sensors, and more, consumers do still perceive that they have limited contact with artificial intelligence,” Simpson says. “According to our research, less than half of those surveyed say they interact with some form of AI once a week or more. And two in five believe that AI hasn’t made a difference in their lives. When you think about how much people are interacting with all forms of AI, it’s pretty astonishing that almost half don’t think the technology makes a difference in their lives.”
At its core, AI is simply the science of making machines think like humans, and while many consumers are still not clear on that definition, recent data breaches and news stories about spying dolls and eavesdropping robots are enough to make them worried that AI poses a threat.
But even brands are confused about privacy and security
Yet even the language we use to describe those threats is often unclear, according to Avi Goldfarb, professor of marketing at Rotman and author of Prediction Machines: The Simple Economics of Artificial Intelligence. While the terms “security” and “privacy” are often used interchangeably by marketers and consumers alike, they actually have vastly different meanings.
“A lot of people mix up privacy and security,” Goldfarb says. “When we use the term security, we should be talking about safety from crimes, like identity theft.” Privacy, on the other hand, is what Goldfarb calls an “underlying preference for information not to get out.” Both involve risk, just different kinds.
While consumers are most worried about security, privacy still matters
It’s important to understand the difference between security and privacy in order to get to the bottom of consumer fears about AI. For example, the Genpact study found that 53% of consumers believe the government should be doing more to protect data, but it’s unclear what consumers believe their data should be protected from. Should it be protected from hackers, unwanted ads, or some combination of the two?
And as governments rush to catch up with consumer concerns, it often falls to brands to self-regulate, not only for consumer security but also for consumer peace of mind.
Here’s how brands can help
According to Simpson, instead of burying data policy in a document that would put a lawyer to sleep, brands should actively educate users about what they’re doing with their data.
“To be successful, brands need to walk a mile in their customers’ shoes and get personal,” Simpson says. “Imagine yourself as a customer turning personal data over to a corporation. What questions would you ask? Get ahead of these by providing proactive answers, visibility into how their data is going to be used, clarity on how it’s being protected, and the benefits they are going to get by sharing this information that they wouldn’t otherwise get.”
And make sure your AI actually is adding value
A study out of the University of Pennsylvania found that consumers often give data because they feel forced to, with 41% of respondents indicating that they have little control over the data that brands collect.
For consumers who often feel as if technology is designed to take advantage of them, humanizing AI starts with offering real value instead of a barrage of intrusive-seeming advertisements.
“Where the value exchange is clear, people more freely give up their data in exchange for getting something they want,” Simpson says. “More precise recommendations that shorten or eliminate the need for product research, tailored interactions, speed of use and ease of transactions make users more comfortable with AI.”