The Al Capone Approach to Anti-vaxxers

Posted on September 23, 2021

At the end of August, Reddit users told the company’s leadership they had blood on their hands. As part of an organized protest, the moderators of dozens of large subreddits, or forums on the site, shared a letter condemning Reddit for failing to act on the “rampant” spread of COVID-19 misinformation and allowing conspiracy-minded anti-vaccine subreddits to proliferate. The letter emphasized that vaccines are safe, masks are effective, and social-distancing measures are useful. “Subreddits which exist solely to spread medical disinformation and undermine efforts to combat the global pandemic should be banned,” it said.

Reddit’s CEO, Steve Huffman, responded with his own open letter noting that “dissent is a part of Reddit and the foundation of democracy,” and that those who disagree with the CDC are not violating the site’s policies. Shortly after his post, many of the moderators who had shared the letter shut down their subreddits in outrage. (The blackout protest included the 3.3-million-member forum for Pokémon Go, which became its own news item.) In the end, Huffman did take action: On September 1, Reddit removed its most notorious subreddit for anti-vaccine conspiracy theories, called r/NoNewNormal, and “quarantined”—meaning, covered with a warning screen and removed from search results—54 others, including the subreddit dedicated to the antiparasitic drug ivermectin. “r/NoNewNormal has been banned!” read a post in the subreddit dedicated to on-site drama. “Discuss this dramatic happening here!” (Others celebrated with memes, obviously.)

But Reddit’s leadership hadn’t quite acceded to the protesters’ demands. The moderators’ letter had specifically asked for special attention, enforcement, or rules around COVID-19 misinformation. That’s not exactly what they got. The ban of r/NoNewNormal was not on account of its users’ habit of sharing health misinformation and disinformation, but rather for “brigading”—that is, their history of attacking other Reddit communities with spam or trolling. The 54 quarantined subreddits were also not cited for any specific mangling of true facts but rather for violating the first rule of Reddit’s content policy, which protects users from bullying, hate speech, and threats of violence. That rule, a spokesperson explained to me in an email, can be interpreted to prohibit the spread of “falsifiable health information that encourages or poses a significant risk of physical harm to the reader.”

In other words, when confronted with a problem that its policies did not cover—health misinformation—Reddit jerry-rigged a fix. It used tools developed for a prior crisis in content moderation, over hate speech and harassment, and adapted them to meet the present one. We should expect this drama to play out again and again in the months to come. Platforms, their users, and social-media experts are still in the early stages of debating what effective content moderation around health misinformation should look like, and many of them know that overbroad policies would create serious problems and dramatically limit reasonable debate. Until that tension is resolved, the platforms are relying—to a significant extent—on anti-vaccine activists’ and COVID-19 conspiracy theorists’ inclinations to break unrelated, existing rules. For the moment, it’s the best fix available.


Reddit is notorious for having spent a very, very long time deciding that it should deal meaningfully with harassment, hate speech, and other forms of abuse. The platform was a free-for-all for many years, rampant with misogyny and racism, and a good chunk of the user base had to be led gradually toward policies that could protect the speech of some by limiting the speech of others. (Also, a bunch of really hateful users had to be removed.) To many of its current users, the new moderation debate—the one over health misinformation—should be less complicated than what came before. “If people are discussing whether Bigfoot is real, okay. If people are arguing over political candidates, okay,” said a moderator of the popular advice forum r/AmITheAsshole named Frank, who—like the other moderators quoted in this story—asked to go by his first name out of concern about harassment. “I think that there’s a substantial difference when it comes to questioning basic science in the midst of a global pandemic. While [Reddit’s leaders] aren’t legally liable, certainly they’re ethically liable for permitting ideas that are dangerous to the health and lives of their user base.”

Some scientific facts are black-and-white, of course, but moderators may be underestimating the challenge of determining which facts are which in the context of a broad policy on health misinformation. “Harassment is a question of values—what is too mean or too harmful,” James Grimmelmann, a professor at Cornell University Law School who has written extensively on content moderation, told me. “Health information is about truth—what is the truth about how viruses spread, what drugs prevent them, what other measures prevent them?” Moderating for the truth is easier said than done, he told me, especially when scientific facts are in the process of evolving. At the beginning of the pandemic, for example, the “truth” was that regular people shouldn’t buy masks, because we needed to save them for health-care workers. “That understanding changed,” Grimmelmann said. “Platforms would have been doing harm if they had moderated that on a bright line.”

[Evelyn Douek: The lawless way to disable 8chan]

In an emailed statement, a Reddit spokesperson made a similar argument: “The term misinformation can be vague and difficult to define, given the pace of changes to our understanding of COVID-19, vaccines, and public health guidance over the past year,” they wrote.

Reddit’s moderators do seem to understand this. They recognize that asking social-media companies to adjudicate medical information is complicated and could produce sloppy results. Researchers and academics have also pointed this out. Robyn Caplan, a doctoral student at Rutgers University, called COVID-19 “a crisis of content mediation” last year, writing, “What we must ask now is whether we trust tech companies to play this role of reconciling the user-generated internet with hierarchies of knowledge production.” Still, the Redditors are asking for … something. When I spoke with a moderator of the Star Trek forum, who requested to go by his username, Corgana, I asked whether he thought Reddit should ban any community in which people were discussing the side effects or failings of vaccines. He insisted that the difference between intellectual conversation and active harm should be obvious. “NoNewNormal was not a ‘debate the efficacy of vaccines’ subreddit,” he said. “It was a subreddit for creating propaganda to spread to other parts of the internet.”

So maybe Reddit could just make a list of debunked vaccine rumors, like the ones NoNewNormal was dedicated to amplifying, and take down all of that content. “Reddit could adopt a policy about known misinformation, and then they just have to identify that misinformation and remove it,” Grimmelmann said. But that would be a lot of work, and it would be complicated because of the site’s reliance on volunteer moderators to keep their own communities in line. Reddit had a tool for manually reporting misinformation, which it seems primarily to have used for data analysis. A spokesperson said that most of the reports submitted through it “were not meaningfully actionable,” and that Reddit is now offering a tool to report community “interference” instead. (Another term, basically, for brigading.) More important, Grimmelmann told me, any such policy would be “culturally difficult” for Reddit to adopt, given the site’s history as a place for intellectual and “intellectual” debate. An explicit ban on vaccine misinformation would represent “a serious rethinking of what Reddit is and is for.”

The Reddit CEO’s initial response to his enraged users showed deference to that culture: He implored the community to have “a willingness to understand what others are going through, even when their viewpoint on the pandemic is different from yours.” Yet his post also hinted at the ways that different viewpoints—if they were sufficiently dangerous—could be moderated via other means. “Manipulating or cheating Reddit” to amplify an opinion is against the rules, he wrote, as is “fraud” and “encouraging harm.” As such, Reddit can indeed punish users or communities who advocate for fake vaccine cards or dangerous activities purported to be COVID-19 cures, such as drinking bleach. The company’s subsequent response to the protest, written by a member of its Safety team, clarified that its rules against impersonation, fraud, and manipulation also cover sharing faked World Health Organization or CDC advice or fabricated medical data. (A spokesperson emphasized to me that these comments did not represent changes to any Reddit policies, or to how those policies are enforced.)

Reddit has executed similar maneuvers in the past to make tricky moderation decisions. In June of 2020, the site banned a subreddit called r/The_Donald, which promoted violent and racist political rhetoric. Because its rules against hate speech were toothless and inadequate at the time, the company leaned on the same justification it used this month to remove r/NoNewNormal: r/The_Donald, it said, had engaged in brigading. Similarly, when Reddit removed major QAnon subreddits in 2018, it was not explicitly on account of their wild conspiracy theories and fantasies of mass murder, or other problematic behaviors that upset other users. Rather, they were removed for doxing—publishing a person’s identifying information against their will—which is subject to a strong sitewide norm of disapproval. (Reddit’s content policy was significantly overhauled last summer, and now prohibits a wider range of abusive behaviors.)

[Read: The blue checkmark’s evil cousin]

Social platforms find it useful to moderate content on the basis of those kinds of technicalities, Grimmelmann told me, because they can more easily defend any given enforcement decision as the simple application of a basic rule. A substantial part of Facebook’s efforts to corral anti-vaccine rhetoric, for example, has involved acting against “coordinated inauthentic behavior,” which refers to networks of fake accounts, and “coordinated social harm,” which refers to real accounts that have a history of working together to evade punishment for rule breaking. But this approach depends on waiting for bad actors to commit actionable offenses, even as they engage in other harmful behavior. Users of these sites say it gives groups time to grow and recruit members to backup websites or even offline organizations before they’re banned.

Participants in the Reddit protest were hoping for something that requires a little less hairsplitting. They want a clear commitment of some kind that the platform will protect its users from ideas that could kill them. If they don’t really know what that would look like in practice, they do know what it doesn’t look like. Many of the moderators were not satisfied with the banning of one egregious forum because of its other bad behavior. (Some even wanted to keep the blackout protest going until the 54 quarantined subreddits were banned as well, but they were voted down.) “A lot of people are disappointed,” Ben, a teacher from Rhode Island who moderates the anti-disinformation subreddit r/ParlerWatch, told me. “That’s kind of just a way for Reddit to de-escalate the situation and stall the protest but also avoid addressing the root issue.”
