Friday, March 26, 2021

The Christchurch Report Points to Better Avenues for Internet Reform

YouTube's website. (https://www.piqsels.com/en/public-domain-photo-jbvfe/, Creative Commons)

Two years ago last week, a white supremacist walked into two mosques in Christchurch, New Zealand, and began spraying bullets upon worshippers, killing 51 people. The attack brought into public view a particular brand of 21st century extremism, making it tough to ignore the enduring strength of white extremist ideas and the ineffectiveness of Western governments in stamping them out. And the shooter—whose manifesto overflowed with internet jokes, and who livestreamed the shooting “as if it were a first-person shooter video game”—forced the general public to come face-to-face with an idea long established among extremism researchers: There’s a whole lot of dangerous stuff online, and it will seep out of the far corners of the internet and into the real world.

The most comprehensive look at the shooter, the attack and the bureaucratic failures that preceded it comes from a 792-page report released by the New Zealand government in December 2020. The document charts the shooter’s brushes with New Zealand and Australian authorities, pieces together his financial situation, and chronicles the attack itself. But its most valuable contribution is perhaps that it stitches together, in great detail and for public viewing, the shooter’s online activities in the months and years that preceded the shooting.

The report paints a picture of a man enthralled by extremist corners of the internet. He posted in far-right Facebook groups. He inhaled white supremacist YouTube videos. He donated money to his favorite racist internet celebrities. He read manifestos. He wrote one of his own.

A lot has changed in two years. YouTube has become relatively more aggressive in trying to prevent a growing collection of racist creators from making money directly on the platform and now claims it removes five times as many “hate videos” as it did at the time of the attack. Facebook has more rules. The New York Times made a whole podcast series about the YouTube “Rabbit Hole”—people start with videos about manliness or political correctness and end up on a treadmill of videos about how white people are being replaced by minorities.

But reading through the Christchurch report, what’s most striking is just how many of the problems it documents continue to vex governments and platforms. While some right-wing influencers have been demonetized on YouTube or kicked off altogether, many others remain, and the presence of such content remains a large problem. Facebook has made a fuss about its Oversight Board and some new rules for platform content, but Facebook groups remain a breeding ground for hate-filled vitriol, extremism and even celebration of violence. Above all, critics’ focus on the big players leaves far less scrutiny for the smaller internet companies and the ones that don’t deal exclusively with user-posted speech. Extremists and their ideas migrate from platform to platform, implicating not just the social media platforms in content moderation but file-sharing websites and payment processors as well.

Facebook Groups and Beyond

The report retreads familiar ground for people who have tracked the content moderation space: Facebook, specifically Facebook’s Groups feature, played an important role in incubating the individual’s extreme ideas.
The individual’s foray into extremist internet communities began well before his use of Facebook; according to the report, “In 2017, he told his mother that he had started using the 4chan internet message board”—a site notorious for hosting hate speech—“when he was 14 years old.” He would also play video games and “openly express racist and far right views” in chats during those sessions.

Yet Facebook was a key part of the individual’s online activity, and the report’s account of his Facebook activity charts a trail of overt racism and Islamophobia. He was one of more than 120,000 followers of the Facebook page for the United Patriots Front, a far-right Australian group, and commented about thirty times on the page between April 2016 and early 2017. He praised the United Patriots Front’s leader on the organization’s page, as well as the leader of the True Blue Crew, another far-right group in Australia. He used Facebook to direct-message a threat to an Australian critic of the former (a threat allegedly reported to authorities, with seemingly no action taken). One message to this person read, “I hope one day you meet the rope.”

After Facebook removed the United Patriots Front group in May 2017, members of the United Patriots Front created another far-right group, The Lads Society, to which the individual was invited. Though he declined to join the actual real-world club, he joined and became an active member of The Lads Society’s Facebook group. The pattern is bleakly familiar: Facebook axes one group, and the group’s members simply create a new one.

Over the next several months, according to the report, the individual became “an active contributor [to the new group], posting on topics related to issues occurring in Europe, New Zealand and his own life, far right memes, media articles, YouTube links (many of which have since been removed for breaching YouTube’s content agreements), and posts about people who were either for or against his views.” He also encouraged donations to a far-right politician and cautioned that “[o]ur greatest threat is the non-violent, high fertility, high social cohesion immigrants...without violence I am not certain if there will be any victory possible at all.”

Later, in the months before the 2019 terrorist attack, the individual posted links to extremist content on Facebook and Twitter. He also made a Facebook album titled “Open in case of Saracens,” which included two videos with extreme right-wing views and calls for violence—and a digitally altered image that depicted Masjid an-Nur, one of the eventual targets of the terrorist attack, on fire.

Facebook Groups in many cases remain cesspools of hate and extremism, and even of outright incitement and plotting of violence. The Jan. 6 terror attack on the United States Capitol was in part planned in Facebook Groups; immediately following the November 2020 election, Facebook removed a several-hundred-thousand-member-strong “Stop the Steal” group only to let hundreds of others grow in the following weeks. Content on the platform’s News Feed itself, as well as in user photo albums and on users’ “walls,” likewise remains a problem vis-à-vis hate and extremism. Facebook, meanwhile, continues issuing vaguely worded rules about the acceptable parameters of group behavior.

Online Fandom

Backlash after the release of the report centered on one internet platform in particular: YouTube.
The report itself made clear that the video-sharing platform played a central role in motivating the shooter: “The individual claimed that he was not a frequent commenter on extreme right-wing sites and that YouTube was, for him, a far more significant source of information and inspiration.” And after the report’s release, extremism researchers and reporters took YouTube to task on familiar grounds: its lax content rules, its aversion to transparency and, of course, its algorithm. YouTube has taken some steps in the right direction on these matters, but it is nowhere near patching up all the problems.

One of the report’s biggest contributions to the public record about extremism on YouTube and far-right online culture mostly flew under the radar in the weeks that followed the report’s release. The report details that the shooter didn’t just passively watch a lot of bad YouTube videos; he attached himself to a collection of far-right internet sensations.

Prominent racists never limited themselves to YouTube alone, but the platform has provided particularly fertile ground for white nationalist “microcelebrities” to lap up devoted fans. Stanford researcher Becca Lewis wrote a 2018 Data & Society report documenting the phenomenon. She argues that YouTube is an ideal fit for “niche celebrities who are well-known within specific communities.” This helps knitting YouTubers or ankle-rehabilitation YouTubers gain loyal followings, but it also boosts George Soros conspiracy YouTubers or “scientific racism” YouTubers. The platform offers certain accounts direct monetary support, and the (often very long) video format lets high-profile creators “develop highly intimate and transparent relationships with their audiences.” YouTube influencers who make money from makeup or branded exercise equipment can leverage user-to-poster relationships to sell cosmetics or jump ropes; white nationalist influencers can similarly leverage those relationships to spread their ideology and solicit support.
