A Proposal To Unmask Hate On Twitter, Without Abandoning Anonymity

Harassment and hate speech threaten to poison the social media waters, making them toxic for users and marketers. Columnist Eric Schwartzman proposes a solution.


With any marketing medium, there’s a certain amount of pollution that can create a toxic environment unsuitable for brand activity. With direct mail, the more irrelevant junk mail a person receives, the more challenging it will be to break through and get your piece opened. With email, it’s spam.

On social media, there’s the danger of other users — either “friends” or other brands — poisoning the medium through spam or just plain bad behavior. As marketers, and as users of social media, we need to demand more from the networks. In this piece, I’ll suggest a remedy that could make social networks much more hospitable places for all concerned.

Contrary to former Facebook CMO Randi Zuckerberg's comment that "Anonymity on the internet has to go away," the problem isn't anonymous users; it's a lack of transparency from the social networks.

Anonymity Doesn’t Have To Go

Think about it.

Anonymity is both bad and good. It's bad because anonymous users hiding behind pseudonymous screen names can post whatever defamatory hate speech they choose, with no fear of accountability.

But it’s good too, since, as Justice John Paul Stevens wrote in the 1995 Supreme Court decision in McIntyre v. Ohio Elections Commission:

"Anonymity is a shield from the tyranny of the majority…. It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular individuals from retaliation — and their ideas from suppression — at the hand of an intolerant society. The right to remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will sometimes have unpalatable consequences, and, in general, our society accords greater weight to the value of free speech than to the dangers of its misuse."

So unmasking identities isn’t the answer. Memorializing bad behavior is.

How Might Transparency Be Increased?

In her departing Sunday Bits column as digital technology and culture reporter at the New York Times, Jenna Wortham explored the inherent tension between free speech and harassment, asking what else Twitter could do to curb verbal abuse. After all, “Twitter has been fairly agile about tweaking its services and rolling out new features — when it has chosen to do so,” she writes.

What could they do to deter hate speech, online bullying and defamation? It's no insignificant issue. Wortham cites a 2014 Pew Research survey that found 73% of adult internet users have witnessed online harassment, and 40% have experienced it themselves.

“Given Twitter’s ability to be inventive…I wonder what else it could be doing to curb verbal abuse,” asked Wortham.

The truth is, social networks could end bad behavior by making it easier to identify trolls. But not by revealing their identities.

Ending anonymity and hiding bad behavior by removing it is not the answer. Instead of disappearing harassment, why not transform it into a badge of shame?

A Badge Of Shame

What if Twitter, Facebook and Yik Yak displayed the opposite of your Klout score on your profile? What if they added a Harassment or Hater Score to every user profile, so you could easily assess that user’s past behavior at a glance?

Users could keep their anonymity and social networks wouldn’t have to become arbiters of free speech.

All they’d have to do is leverage their own statistical analytics, data mining techniques and business intelligence know-how to call out the bad actors. The same technology used to target promoted tweets or suggest new people to follow could be used to identify problem accounts. They could even introduce a Human Rights Score and a Racial Sensitivity Score as well, and develop icons that accompany each tweet sent from that account.
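To make the idea concrete, here is a minimal sketch of how such a score might be computed. Everything in it is hypothetical: the field names, the weighting, and the 0–100 scale are illustrative assumptions, not a description of any network's actual moderation systems.

```python
from dataclasses import dataclass


@dataclass
class AccountActivity:
    """Hypothetical moderation signals a network already collects."""
    posts: int                 # total posts in the scoring window
    flagged_by_users: int      # posts reported by other users
    confirmed_violations: int  # reports upheld by human or automated review


def harassment_score(activity: AccountActivity) -> int:
    """Return a 0-100 score; higher means a worse track record."""
    if activity.posts == 0:
        return 0
    # Weight confirmed violations more heavily than raw reports,
    # since reports alone can be gamed by coordinated mass-flagging.
    weighted = activity.flagged_by_users + 3 * activity.confirmed_violations
    return min(100, round(100 * weighted / activity.posts))


# A mostly clean account scores low; a repeat offender scores high.
clean = AccountActivity(posts=500, flagged_by_users=2, confirmed_violations=0)
troll = AccountActivity(posts=100, flagged_by_users=30, confirmed_violations=15)
```

The point of weighting confirmed violations over raw flags is exactly the transparency argument above: the badge reflects adjudicated behavior, not mob reporting, so the network never has to reveal who the user is — only what they've done.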


Technology is not the obstacle. The real question is what it will take to get the social networks to make this a priority.

Even with the Federal Trade Commission breathing down Madison Avenue’s neck, some advertisers still resist making required FTC social media disclosures when they hire social media mavens to hawk products via their personal profiles.

“Just Be Nice” Won’t Cut It

Unfortunately, you can’t leave this one up to individuals, trusting them to just be nice. And government regulators lack the sophistication, resources and agility to hold the social networks accountable for providing this level of transparency.

It’s an issue of morality. The social networks have a responsibility to make it easy for users to see the true colors of their members. And, more practically, this poisonous content pollutes the social media waters that we all swim in.

In the social age, living up to the promise of “don’t be evil” requires more than just a “report spam” button.


Opinions expressed in this article are those of the guest author and not necessarily MarTech. Staff authors are listed here.


About the author

Eric Schwartzman
Contributor
Eric Schwartzman is a Los Angeles-based author, growth marketing advisor and media relations coach.
