By JOSH O'KANE
Saturday, August 11, 2018
As social-media platforms such as Twitter and Facebook have become central hubs of conversation this century, their role in amplifying harmful rhetoric has itself become a source of debate. Swarms of bots, "fake news," hate speech and conspiracy theories have led even Twitter chief executive Jack Dorsey to acknowledge that the "health, openness, and civility of public conversation" is at a low.
Enter Alex Jones, the once-niche, far-right-wing digital broadcaster who, through his brand, Infowars, has perpetuated conspiracy theories, among them the claim that the 2012 Sandy Hook Elementary School shooting was staged with "crisis actors."
Earlier this week, social-media and content platforms including Apple, Facebook, YouTube and Spotify removed some content and accounts affiliated with Mr. Jones, for reasons including hate speech and the glorification of violence. Twitter, however, did not, which Mr. Dorsey said in a tweet was because he hadn't broken the platform's safety rules, which say that someone must "cross the line" into threatening violence to commit a violation. (A spokesperson for Twitter Canada declined to add further comment on Mr. Jones, citing Mr. Dorsey's tweets as the company's statement.)
The discrepancy between Twitter and other platforms' responses to Mr. Jones only further fosters the debate over the policing role platforms should take for problems such as hate speech. Tarleton Gillespie, principal researcher at Microsoft Research New England in Boston, studies social media and has published a new book exploring these questions, titled Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, via Yale University Press. He spoke with The Globe and Mail by email this week about Twitter and other platforms' decision-making.
Let's start with Alex Jones. He's known for pushing out fake news to the broadest degree, including awful conspiracy theories about the Sandy Hook shooting. How have social-media platforms enabled this kind of voice to propagate?
Social-media platforms have spent years convincing users, and themselves, that they're just hosts that provide an open space for users to speak. But this is a convenient fiction. Now that we're getting a clearer understanding of how platforms work, and of how people take advantage of them, it's increasingly clear that they invite and amplify certain kinds of really troubling speech.
Because platforms seem to presume that everyone is participating on genuine and fair terms, they often overlook those who tactically use the system to their advantage while appearing genuine. Alex Jones perfectly presses on a fault line that runs through social media today: He is willing to say things that are false and cruel, but dresses them up sometimes as legitimate political speech, at other times as mere theatre, and his readers like and forward it like the latest viral cat video.
He produces the commodity they want and pretends to be the contribution they swear to protect.
Twitter took a different step from Facebook, YouTube and Spotify, which removed some of Mr. Jones's accounts and content. Do you think there's anything fundamentally different about Twitter's approach to dealing with users who allegedly post hate speech or encourage violence?
For years, platforms have been making these moderation decisions on our behalf. Whether they make good decisions or bad ones, the fact that they do it for us may be the core problem, because they're decisions that belong to the public: Where's the line?
Public outrage is the closest we have right now to collectively considering these hard cases. It's not necessarily a bad thing, at least in principle, that Twitter came to a different conclusion. I don't agree with their decision, myself, but it's probably a positive thing that platforms take different approaches to content moderation based on their values. Twitter's decision makes a kind of sense given how their platform works and their past philosophy. We'll see how strong the public backlash is.
Twitter CEO Jack Dorsey tweeted that "critical journalists document, validate and refute such information directly so people can form their own opinions." Given what you've found in your research on moderation, is it fair for a platform such as this to pass off this responsibility?
The reality is, the major platforms lean on other experts and institutions in moderation all the time. Twitter has a "Trust and Safety Council"; Facebook brings in cultural and linguistic experts to help them moderate posts.
And they ask us to flag objectionable content.
But this kind of support is meant to help them make a sensible intervention, not to justify shifting the responsibility of discerning harm onto someone else. It's true that journalists should be reporting on the problematic aspects of Infowars. But whether they do, or how well they do it, shouldn't shape a platform's policy about whether Alex Jones gets to enjoy the benefits of that platform.
Mr. Jones's discrediting of true stories can be seen as an attempt, as well, to discredit the mainstream media - the very people Twitter expects to verify the controversial statements made by people such as Mr. Jones. Do you think this process gives misinformation and its sources an inherent advantage on a platform such as Twitter?
If the news media today were widely seen as unassailable truth-tellers, able to perfectly and fairly expose lies and call out those who try to defraud the public debate, maybe platforms could rest a bit easier about what speech they should and should not circulate. But, right or wrong, we currently do not have that; the news media are fighting to stay afloat in an environment of distrust and confusion, sown in part by people like Alex Jones.
We don't need platforms stepping back from a responsibility for the health of public speech right now - we need them stepping forward.
Mr. Dorsey also tweeted on Wednesday about moderating the discourse on its website: "Relying on algorithms alone will not work. ... We need to figure out how to help with economic incentives too. We're behind on that, but thinking deeply about it." In this context, what do you think could work?
I think public debate and political pressure is beginning to work, to a degree. If critics are calling the platforms to task publicly for content that they didn't already see as worth removing, and the counterarguments for keeping Alex Jones are being heard, that's some version of a public dialogue.
The reality is that content moderation is constantly happening, it impacts low-profile users as much as high-profile ones, and it is regularly being tested by new hard cases that are difficult to anticipate. So while public debate is good, something needs to gather these individual cases into a deeper and more coherent recognition of what speech like Alex Jones's represents: conspiratorial bluster dressed as legitimate speech but built to delegitimize other speakers using any means necessary. And someone, whether it's the platforms, the public or policy-makers on behalf of the public, needs to put forward a coherent approach to that kind of speech.
This interview has been edited and condensed.
Far-right conspiracies propagated on social media by Alex Jones, seen in Austin, Tex., in 2017, are fostering debate over the role social-media platforms should play in policing hate speech. ILANA PANICH-LINSMAN/NYT