When Pokémon GO swept the world in 2016, no one expected it to become a platform for Nazi youth recruitment. Nonetheless, an alt-right advocate created a targeted youth campaign, with characters bearing swastikas and distributing anti-Semitic fliers in the online universe.
Alt-right blogger Andrew Anglin promoted the campaign, commenting, “I have long thought that we needed to get pre-teens involved in the movement … Anyone who accepts Nazism at the age of 10 or 11 is going to be a Nazi for life.” Targeted youth recruitment by the alt-right goes beyond Pokémon GO. In August, the movement published a children’s book preaching predominantly anti-Islam sentiments, and its websites promote a “handbook for right-wing youth.” Both demonstrate the savvy online marketing strategy used to draw vulnerable children into the alt-right.
The alt-right’s rise has been accompanied by racial violence, Nazi salutes, and open KKK demonstrations—all under the guise of free speech. The alt-right has used the internet to promote these events, such as the Charlottesville rally that drew supporters from across the country.
In response to the alt-right’s use of privately owned websites to organize and spread its ideology, sites like PayPal and Reddit have taken steps to limit the group’s organizing power on the internet. This has drawn new attention to the role of private actors in restricting online expression. Though the alt-right’s internet presence poses unprecedented challenges to free speech, allowing private actors to become the gatekeepers of acceptable speech online places the fundamental idea of free expression at risk.
What separates the alt-right from Nazism and the KKK is its effective use of internet organization. Historically, radical right-wing movements were limited by geographic borders. Ubiquitous access to the internet has removed those barriers for the alt-right, enabling it to organize across countries and spread propaganda to curated online audiences. These curated targets are crucial to the alt-right movement, given mass public disapproval of its ideological basis. The movement’s predecessors were largely kept at bay by an increasing fear of public backlash as the 20th century progressed towards a more egalitarian society. The relative anonymity of online communities provides a shield from this backlash, permitting the growth of the modern alt-right.
The alt-right capitalized on this reality by speaking the language of the internet and preying upon a vulnerable audience: the youth. In an interview with Newsweek, director of the Anti-Defamation League’s Center on Extremism Oren Segal commented, “Younger people have access to much more forms of information, including propaganda, than ever before in human history—that’s social media … They are able to exist in an online sphere, find like-minded sympathizers, communicate, creating these online subcultures where anonymity is involved and where people can be whatever they want.”
The alt-right’s vehement invocation of free speech doctrine distinguishes it from other internet propagandists. This creates a complex ethical and political environment around speech rights. Private and public actors in this climate find themselves questioning the limits of free expression on the internet. Questions surrounding the ethics of limiting the speech of particular groups have yet to receive a definitive answer: does allowing such speech to continue incite violence, or would censorship garner sympathy and support for radical organizations? Is it the role of the private sector to determine what can and cannot be said on the internet?
Managing director of Harvard Law School’s Cyberlaw Clinic Christopher Bavitz said that the internet complicates the issue of free speech. “Most internet speech is now mitigated by private actors, and there are a whole range of private actors,” Bavitz told the HPR. He explained that new attention has been placed on internet service providers potentially blocking offensive content in the wake of Charlottesville, something that he believes “people would be a lot more uncomfortable with” than a platform like Facebook regulating content. As far as the dangers of online regulation, Bavitz noted that it may be best to keep alt-right content in the light, explaining that when censorship occurs, “these beliefs don’t necessarily go away. They just go to deeper, darker corners of the web.”
Though the First Amendment prevents the public sector from limiting most speech, private actors have begun to take action against alt-right activity. Discord, a gaming chat site, recently shut down numerous alt-right chat rooms that had become invaluable organizational tools for the movement over the past year. But the company’s statement against the alt-right went further. Discord board director Josh Elman told the New York Times, “I believe every communication channel, public or private, has a responsibility to investigate and take action on any reports of misuse including harassment, inciting violence or hate, and other abuse.”
Other private platforms have also ostracized the alt-right. Twitter has partnered with social justice organizations to filter potentially racist or otherwise offensive content, sparking outrage over content curation by specific ideological groups. The alt-right has turned these actions into a recruiting strategy, citing them as evidence of the persecution of its members and of violations of their First Amendment rights.
The Case for Free Speech
In an online world that seems increasingly regulated by the day, Americans are left with the timeless question of the limitations of the social contract: how much expression should we be willing to give up in exchange for a more secure society?
The social contract mandates the forfeiture of certain rights to government actors in order to promote a safe, functional society. Today, that contract is formed between citizens and private actors like Facebook, Twitter, and internet service providers. By utilizing private internet platforms to organize and spread dangerous ideologies, the alt-right pushes citizens to contemplate whether those private actors should be able to regulate speech. However, the risks of private regulation far outweigh its benefits.
By allowing this regulation, citizens give small groups of powerful individuals dominion over speech. This control is bound to be swayed by the personal political, religious, and economic interests of the private owners. Though censoring the hateful speech of the alt-right seems like an admirable goal, it raises the dangerous possibility of censoring any group with which powerful companies disagree. Twitter’s partnership with social justice groups reveals the beginnings of this threat. By permitting a group of people with similar ideologies to determine which content is suitable to be seen by the general public, Twitter threatens to silence some nonviolent political groups.
What Private Actors Can Do
To mitigate the dangers that accompany free speech, private institutions should look to the framework laid out in judicial interpretations, which permits limits on speech that incites “imminent lawless action.” Especially in times of danger, speech must be regulated not by emotion, but by law. Instead of arbitrarily restricting the speech rights of certain groups, private entities must operate in accordance with both U.S. law and the terms set out in their contracts in order to protect both free speech and the public.
The unprecedented circumstances of the alt-right pose a new challenge to public society and private actors alike, testing the value of free speech in the face of violence. It is the responsibility of private online content moderators to clearly and contractually establish the limits of speech on their platforms. Likewise, consumers bear the responsibility of weighing whether the terms of service justify the sacrifice of expression. Moving forward, open, public discourse between private actors and consumers is the key to protecting free speech on the internet.
Image Credit: Trey Ratcliff / Flickr