The social Web is under attack. Oddly, it's not under attack by viruses or other kinds of things that one might expect in the tech environment. Rather, it's under attack by marketing practices that are at best questionable and at worst…well, let's just leave it at that.
About a month ago, a friend in India contacted me regarding a campaign created for Kent Water Filters by SocialKonnekt. He had noticed something odd about the Facebook profiles supporting the campaign: strange patterns in friend-to-friend conversations - hundreds of friends and not a single holiday wish, for example - along with a very high number of Facebook "Likes." The profiles in question looked more rented than real.
As I noted in a column for ClickZ.Asia, one of the "checks and balances" built into the social Web is that when large numbers of people look at lots of things, content that doesn't "look right" tends to stand out, and thereby creates more questions. The result is often just what should happen: things that are not what they seem get called out and exposed, restoring, so to speak, the overall integrity of the Web.
That's how it's supposed to work, at least in theory. Lately, it seems as if things are drifting further from that implicit "theory" as some marketers devise clever ways to game the systems that power the social Web. Now, last week's news: according to an article published in The New York Times, negative reviews had been used overtly to push a vendor to the top of Google's rankings.
Say what? Yes, you read that right: irritate enough people, and the negative comments they create add link credibility, pushing that merchant to the top of search results. On the one hand, it's a fairly predictable outcome: negative reviews - just like positive reviews and photos of toddlers riding dogs - are legitimate forms of social media and therefore properly factor into Google's rankings (as well as those of other search tools). If a product is really crappy, people want to know that, right? But intuitively, a pile of negative reviews probably doesn't signal the best choice when shopping. And therein lies the problem: the social Web is driven more by the activity associated with content - likes, shares, views - than by the actual meaning of that content. But more on that in a minute.
With all of these stunts going on, you can imagine my reaction when - as advertised by the very non-pushy, non-commercial sounding radio host on NPR - I heard that a service called "Reputation Defender" could help manage online content and protect the reputation of a brand. One of many emerging reputation services, Reputation Defender, according to its own website, ensures that "people would see the real (you)." That is, evidently, the "you" that has been made over by its "team of professional writers," with any contrary posts or "bad press" (accurate or not) conveniently pushed to the side. You can view the company video here. In the firm's defense, there is precious little recourse for truly wrong postings - except, of course, garnering a greater number of helpful posts, preferably by legitimate means. At the same time, direct manipulation of search to hide specific results raises its own set of questions, much like the purposeful use of negative reviews to boost search standings.
I'm not here to judge any of these products or services - it's not my place, and you can read about them for yourself and draw your own conclusions. What I am here to talk about are the tools that we all use to judge any product or service. And just as happened in traditional media, the social Web is now under attack. TV commercials that are louder than the programs they are embedded in, disclaimers that zip past at nearly incomprehensible speeds, and the general acceptance of "puffery" - a technical term for the legally protected "supersizing" of advertising claims - are long-standing practices in traditional media.
So it's no surprise that the same battle lines are now being drawn on the social Web. Consumers are calling out the good, the bad, and the ugly, and the affected purveyors of those goods and services are looking to use (or manipulate) the resulting conversations to their benefit, putting consumers right back where they started: demanding better tools that help separate the wheat from the chaff. The idea of the social Web was that legitimate information could be shared - albeit imperfectly - through conversations, postings, ratings, and reviews published by others who had experience with whatever product or service was under consideration. Gaming those conversations is not a new development in itself, but it is now happening on a scale sufficient to cause concern.
That's bad for consumers - and really bad for marketers. The fact is that the social Web is taking over as the place to vet what we first learn about anyplace else. As Pew's John Horrigan put it back in 2002, "No matter what your customer sees, hears, or reads on TV, the radio, or in print, it will be verified on the Internet." If, as marketers, we accept (or worse, condone and use) online shilling and fake content, we risk being kicked off the social Web - in other words, ignored outright. In my book "Social Media Marketing: An Hour a Day," I opened part IV with this:
"Campaigns will emerge on the Social Web that are counter to the long-term viability of social media based marketing. How you choose to proceed - what you choose to do in response to others' questionable practices - will in large determine the availability of these important social channels for your future use."
Posted throughout New York City are reminders that "If you see something, say something." It applies on the social Web, too.
So I was pleased to see the next escalation, in which Google injected a dose of sanity into the world's wild Web: as reported in a second New York Times article, Google has tweaked its algorithms to account for the polarity of the conversations that drive ranking, in addition to overall conversational levels and link weighting. Responding to the original article, Google explained the changes to its algorithms:
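To make the distinction concrete, here is a toy sketch - emphatically not Google's actual algorithm, and every number, name, and formula in it is hypothetical - of what folding polarity into an activity-based score looks like: a merchant who attracts heavy but overwhelmingly negative buzz no longer rises simply because the buzz is loud.

```python
def rank_score(link_score: float, reviews: list[float]) -> float:
    """Combine link-based authority with review polarity (toy model).

    reviews: sentiment scores in [-1.0, +1.0], one per review.
    Under a pure activity model, review *volume* helps regardless of
    sign; here the volume's contribution is scaled by mean polarity,
    so a flood of negative reviews pushes a merchant down, not up.
    """
    if not reviews:
        return link_score
    polarity = sum(reviews) / len(reviews)  # mean sentiment, -1..+1
    volume = len(reviews)
    return link_score + volume * polarity


# Two hypothetical merchants with identical link authority:
merchants = {
    "well_reviewed_shop": rank_score(10.0, [0.8, 0.9, 0.7]),
    "loud_but_awful_shop": rank_score(10.0, [-0.9, -0.8, -0.9, -0.7]),
}
best = max(merchants, key=merchants.get)  # → "well_reviewed_shop"
```

Under a volume-only model the "loud but awful" shop, with more reviews, would outrank the good one; once polarity signs the contribution, being bad becomes bad for business.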
"We were horrified to read about Ms. Rodriguez's dreadful experience. Even though our initial analysis pointed to this being an edge case and not a widespread problem in our search results, we immediately convened a team that looked carefully at the issue. That team developed an initial algorithmic solution, implemented it, and the solution is already live. I am here to tell you that being bad is, and hopefully will always be, bad for business in Google's search results."
Right on. Being bad is - and should be - bad for business. But that's just the tip of the iceberg. Where Google is really heading is Web 3.0 - the semantic Web. Simply put, the semantic Web recognizes that a shopper looking for merchandise wants not only product information and the names of merchants who sell it (conventional search) but also the best merchants to buy it from. If someone asked you for a suggestion on a nearby restaurant, would you offer the names of every nearby eatery, or only those that you knew to be good? The difference in answers is tied to the implicit meaning posed in the question: if the motive for the question is building a directory, the answer is "all of them." If the motive is getting a good deal, the answer is necessarily quite different.
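The restaurant analogy can be sketched in a few lines of Python - a hypothetical illustration, with made-up names and ratings, of how the same data answers a "directory" question (what exists) versus a "recommendation" question (what's worth suggesting):

```python
# Hypothetical nearby eateries with averaged crowd ratings (1-5 scale).
nearby = [
    {"name": "Cafe A", "avg_rating": 4.6},
    {"name": "Diner B", "avg_rating": 2.1},
    {"name": "Bistro C", "avg_rating": 4.2},
]

def directory(restaurants):
    """Conventional search: list everything that exists."""
    return [r["name"] for r in restaurants]

def recommend(restaurants, threshold=4.0):
    """Intent-aware answer: only places worth suggesting."""
    return [r["name"] for r in restaurants if r["avg_rating"] >= threshold]
```

`directory(nearby)` returns all three names; `recommend(nearby)` drops Diner B. The data is identical - only the interpreted intent of the question changes the answer.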
Web 1.0 made everyone a publisher, and websites ruled the day. Web 2.0 connected us through social graphs, providing insights and conversations that helped us make smarter choices. Social networks rose to power as millions - perhaps just now crossing into the billions - joined in. Search helps us find the content we are interested in. Web 3.0 - moving past an activity-based understanding of Web content - is introducing tools that directly expose the meaning contained in Web content, a development of great benefit to consumers. Google is taking steps now to provide search results that reflect not just the existence and connectedness of content, but its actual meaning in the context of what customers are looking for. And chances are, it's not bad service they are after, nor is it products and services that don't work as advertised. Let's welcome Web 3.0, and put the games back on the shelf in the family room where they belong.
Dave is the VP of social strategy at Lithium. Based in Austin, Dave is also the author of best-selling "Social Media Marketing: An Hour a Day," as well as "Social Media Marketing: The Next Generation of Business Engagement." Dave is a regular columnist for ClickZ, a frequent keynoter, and leads social technology and measurement workshops with the American Marketing Association as well as Social Media Executive Seminars, a C-level business training provider.
Dave has worked in social technology consulting and development around the world: with India's Publicis|2020media and its clients including the Bengaluru International Airport, Intel, Dell, United Brands, and Pepsico, and with Austin's FG SQUARED and GSD&M Idea City and clients including PGi, Southwest Airlines, AARP, Wal-Mart, and the PGA TOUR. Dave serves on the advisory boards for social technology startups including Palo Alto-based Friend2Friend and Mountain View-based Netbase and iGoals.
Previously, Dave was a co-founder of social customer care technology provider Social Dynamx, a product manager with Progressive Insurance, and a systems analyst with NASA's Jet Propulsion Laboratory. Dave co-founded Digital Voodoo, a web technology consultancy, in 1994. Dave holds a BS in physics and mathematics from the State University of New York at Brockport and has served on the Advisory Board for ad:tech and the Measurement and Metrics Council with WOMMA.