Jigsaw, a technology incubator that is a part of Google, and Google’s Counter Abuse Technology team want to rid the web of bad comments.
To this end, last week they announced the launch of Perspective, an API “that makes it easier to host better conversations.”
Perspective “uses machine learning models to score the perceived impact a comment might have on a conversation” and can be used by publishers to identify and filter out comments that are likely to be “toxic.” When fed the content of a comment, the API provides a percentage likelihood of the content being deemed toxic.
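To make the scoring flow concrete, here is a minimal sketch of building an AnalyzeComment request and reading the toxicity score out of a response. It assumes the publicly documented request/response shape at launch (a `comment.text` field, a `requestedAttributes` map, and a `summaryScore` probability in the reply); the API key is a placeholder, and the response below is a canned example rather than live API output.

```python
import json

# Endpoint for Perspective's AnalyzeComment method; the key is a placeholder.
ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the summary toxicity probability (0.0-1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A canned example response in the documented shape (not a live result).
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.13, "type": "PROBABILITY"}}
    }
}

body = build_request("I do not agree.")
print(json.dumps(body))
print(toxicity_score(sample_response))
```

In a real integration the JSON body would be POSTed to `ANALYZE_URL`, and a publisher's moderation tool would compare the returned probability against its own threshold before hiding or flagging a comment.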
A toxic comment is “a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion,” and Perspective’s toxicity model has been trained by asking real people to rate real comments on a scale ranging from “very toxic” to “very healthy.” The Perspective website offers a free tool that demonstrates how the API rates sample content.
Jigsaw does note that “it’s still early days and we will get a lot of things wrong,” but already, a number of prominent publishers are experimenting with the Perspective API. For example, The New York Times and The Guardian are working on moderation tools that aim to improve the quality of conversations within their reader communities, and Wikipedia is testing how it can better detect attacks against its editors.
How far should publishers go?
Few would argue that “mak[ing] it easier to host better conversations” online is an unworthy goal. Trolling and personal attacks can quickly destroy comment sections and forums, and unfortunately, they seem to be increasingly common.
But in some cases, there’s a fine line between “toxic” contributions and contributions that, while perhaps negative in tone or argumentative, are entirely legitimate and conducive to beneficial discussion.
In this author’s testing, a few words can have a significant impact on how the Perspective API rates a comment. For example, the comment “I do not agree. You have distorted the point of the article and are intentionally misrepresenting the facts” is deemed by the API to be 13% similar to comments people said were “toxic.”
But change the first sentence from “I do not agree” to “That’s silly,” and the percentage more than doubles to 34%.
Over time, as it collects more data and user feedback, Perspective’s model should improve. But for publishers hoping to rely heavily on this technology, the problem is that when it comes to the possibility of “censoring” contributors, it is hard, for obvious reasons, to entrust such decisions to a machine.
Because of this, publishers looking to foster better conversations among members of their virtual communities would be wise to consider that ultimately, the conversation quality challenge is a problem created by humans, and must be solved by humans.