YouTube said Wednesday that the platform will begin banning videos promoting Nazi ideology as well as those denying “well-documented violent events” such as the Holocaust or the Sandy Hook massacre.
“Today, we’re taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status,” the company wrote in a blog post.
The company noted that “it will take time for our systems to fully ramp up.”
The video platform has been under pressure from experts and watchdog groups for years to address its role in hosting and spreading hateful content. Researchers have shown that its autoplay feature and recommendation system tended to pull users from mainstream videos toward extremist political content.
YouTube said its new policy would aim to “prevent our platform from being used to incite hatred, harassment, discrimination and violence.”
YouTube will also begin to change the videos it recommends alongside “borderline content” that does not violate the company’s policies.
“In January, we piloted an update of our systems in the U.S. to limit recommendations of borderline content and harmful misinformation, such as videos promoting a phony miracle cure for a serious illness, or claiming the earth is flat,” the company wrote. “We’re looking to bring this updated system to more countries by the end of 2019.”
YouTube will also include videos from “authoritative sources” alongside borderline videos.
The video-sharing platform has been criticized by researchers who argued that the company wasn’t simply hosting the videos but also leading users down “rabbit holes” of increasingly violent and extreme viewpoints.
Jonas Kaiser, an assistant researcher at Harvard University, and Adrian Rauchfleisch, an assistant professor at the National Taiwan University, outlined the phenomenon last year in a research paper titled “Unite the Right? How YouTube’s Recommendation Algorithm Connects The U.S. Far-Right.”
“In much the same way that the ‘Unite the Right’ rally in Charlottesville, where a white supremacist injured many and killed Heather Heyer by driving a car into a crowd of counterprotesters, sought to bring together many far-right influencers, so too does YouTube’s recommendation algorithm bring together far-right channels,” Kaiser and Rauchfleisch wrote.
YouTube said it would also work to hone its algorithm to stop directing users toward silos of extremist content.
“If a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the ‘watch next’ panel,” the company wrote.
YouTube also said it will strengthen its enforcement of its rules around which channels can monetize their videos.
“Channels that repeatedly brush up against our hate speech policies will be suspended from the YouTube Partner program, meaning they can’t run ads on their channel or use other monetization features like Super Chat,” YouTube wrote.
YouTube’s changes add to similar moves by other platforms that have begun to rewrite policies around hate speech. In March, Facebook announced a ban on “praise, support and representation of white nationalism and white separatism on Facebook and Instagram.”
Twitter does not have a similar policy against white nationalism, but it does ban targeted harassment based on race.
YouTube’s announcement comes just hours after the company said that it would not take action against far-right YouTube personality Steven Crowder, who has published numerous videos targeting Vox reporter Carlos Maza with anti-gay and anti-Mexican slurs over several years.
Despite rules on the service that prohibit “racial, ethnic, religious, or other slurs where the primary purpose is to promote hatred” and “stereotypes that incite or promote hatred,” YouTube declined to take action against Crowder, whose videos have received more than 833 million total views on the platform.
“Our teams spent the last few days conducting an in-depth review of the videos flagged to us, and while we found language that was clearly hurtful, the videos as posted don’t violate our policies,” a YouTube spokesperson tweeted to Maza on Tuesday night.
Becca Lewis, a researcher at Data &amp; Society, a nonprofit based in New York, and author of “Alternative Influence: Broadcasting the Reactionary Right on YouTube,” a report on networks of far-right influencers on the platform, called the move “a step in the right direction” toward eliminating hate speech on YouTube but remained skeptical.
“It remains to be seen exactly how it will be enforced. And it’s particularly strange to see this coming out right after such a public case of them interpreting their policies in favor of keeping harassment up on their platform,” Lewis said.
Both Lewis and Megan Squire, an Elon University computer science professor who studies the far right, said they see the new policies as an acknowledgment of years of academic research showing ties between YouTube’s algorithm and domestic extremism.
“This ban is certainly an admission that they have had and continue to have extremist and hateful content on their platform,” Squire said.