Google is using machine learning to go after online trolls.
In partnership with Alphabet subsidiary Jigsaw, Google has launched Perspective, a tool intended to identify toxic online comments. It's available as an API, so news organizations and publishers can use it to weed out abuse.
Perspective will score comments on how likely they are to be abusive, comparing them to comments that have been rated by human reviewers. "Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments," Jigsaw President Jared Cohen wrote in a blog post.
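Since Perspective is offered as an API, a publisher's integration amounts to sending a comment's text and reading back a toxicity score. The sketch below shows roughly what that exchange looks like in Python, based on the publicly documented AnalyzeComment endpoint; the URL, field names, and sample response here are assumptions for illustration, not details from the article.

```python
# Hypothetical sketch of a Perspective API call. The endpoint and JSON
# shapes follow the public AnalyzeComment documentation; they are
# assumptions, not taken from the article.
ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the 0..1 summary score out of an AnalyzeComment response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A publisher would POST build_request(...) to ANALYZE_URL; the response
# it gets back looks roughly like this (values invented for illustration):
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

print(toxicity_score(sample_response))
```

The score is a probability-style value between 0 and 1, so each publisher can pick its own threshold for what counts as "toxic" rather than getting a hard yes-or-no verdict.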
Publishers that use Perspective can decide how to handle comments the system identifies as toxic.
"For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation," Cohen wrote. "Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones."
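The last use case Cohen mentions, letting readers sort comments by toxicity, is straightforward once each comment carries a Perspective score. A minimal sketch, with invented comments and scores for illustration:

```python
# Hypothetical sketch of the "sort by toxicity" use case: given comments
# already scored by Perspective (0..1), surface the least toxic first.
# The (text, score) pairs below are invented for illustration.
comments = [
    ("You are an idiot.", 0.95),
    ("Thoughtful piece, thanks for writing it.", 0.04),
    ("I disagree, and here is why...", 0.12),
]

# Ascending score puts the least toxic comments at the top.
ranked = sorted(comments, key=lambda c: c[1])

for text, score in ranked:
    print(f"{score:.2f}  {text}")
```

The same scores could just as easily drive the other workflows Cohen describes, such as flagging anything above a chosen threshold for a human moderator.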
Google has been testing Perspective with the New York Times, which currently allows comments on only about 10 percent of its articles because moderating them is so time-consuming. Even at 10 percent, the publication still fields about 11,000 comments every day.
"We've worked together to train models that allow Times moderators to sort through comments more quickly, and we'll work with them to enable comments on more articles every day," Cohen wrote.