Google just released Perspective, an API for “improving” conversations online, and one I find deeply disturbing.

According to the marketing site, “discussing things you care about can be difficult,” and “the threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions.”

If this isn’t sounding dystopian enough for you already, the Perspective “machine learning model … [scores] the perceived impact a comment might have on a conversation” and, you guessed it, filters out the ones it doesn’t like.
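For the curious, this is roughly what querying the model looks like: a minimal Python sketch, assuming the v1alpha1 “comments:analyze” endpoint Perspective exposed at launch and the requests library. The API key is a placeholder, and TOXICITY is the only attribute requested.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; you'd need your own key from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Ask Perspective how likely `text` is to be perceived as toxic (0.0-1.0)."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Score one of the comments from the marketing site's demo.
print(toxicity_score("It's a natural phenomenon."))
```

Any comment scoring above whatever threshold a publisher picks simply disappears from view.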

The marketing site features a sliding filter to show how the technology works, filtering out comments such as “It’s real, scary, obvious. Humans need to focus and stop destroying habitats, stop developing erosion because they want to live on a mountain/hill, stop disposing of trash in ways unsafe for the environment. Stop the greed, honor other forms of life” and filtering in comments such as “It’s a natural phenomenon.”

The tool allows “readers to more easily find relevant information,” presumably by hiding information that might offend them. It’s TripAdvisor for opinions, basically, but instead of traveling in style, you’ll be spared from ever looking at an opinion you don’t like.

If this ever approaches widespread use, what you see online could be at the whim of a Google algorithm. Something expressed in less-than-perfect English would be nuked on sight. Satire would be a distant memory. Hyperbole would be reduced until all that was left were tedious think-pieces that use words like “problematic” and “misappropriation.”

It wouldn’t be enough to call an idea “fucking shit” anymore; you’d need to say it was “fundamentally misconceived.” Which might or might not be what you meant.

Look, one of our greatest faculties as human beings is our ability to parse information and reason critically about it. Metacognition is what separates us from, well, everyone and everything else in the animal kingdom.

And wonderful though technology is, we come pre-equipped with our own API for critical reasoning, honed by thousands of years of evolution, and with a training set of everything we’ve ever done and seen. We can use this for everything from reading John Stuart Mill and Immanuel Kant to ordering an Uber or deciding what to watch on Netflix.

So no, I’m not yet ready to outsource the very thing that makes us human to an algorithm. Why would I want to? After all, I’d only be using a copy of the master copy, which at best will only replicate human biases.

And what if this algorithm decides it doesn’t like certain arguments? Like those written in Ebonics, or Mandarin, or lolspeak? Will everyone have to get a degree in internet-approved grammar before they can publish an opinion online in future? And what are the consequences of that?

My first instinct, and I hope yours, was to push back the slider.

P.S. – I pasted the first paragraph of the Google diversity memo into it, and it got a “10% likely to be perceived as toxic” score. Make of that what you will.
