YouTube's New War Against 'Offensive' Content Could Target Free Speech

Bryan Michalek | August 2, 2017

In a recent update on the site's official blog, YouTube released a statement on its commitment to fight terror content online. The post outlined the video platform's steps to ensure better regulation of content it deems hateful or conducive to the spread of terrorism or radical fringe ideologies.

YouTube's parent company, Google, now claims it's targeting dangerous or offensive content in four ways: "[B]etter detection and faster removal driven by machine learning, more experts to alert us to content that needs review, tougher standards for videos that are controversial but do not violate our policies, and more work in the counter-terrorism space."

These moves come amidst reports of several major brands being attached to videos that were considered hateful or linked to terrorist propaganda. In an effort to distance themselves from the content, hundreds of brands staged a boycott of the site, leaving YouTube with the responsibility of fixing the problem.

But while fighting terrorists' use of social media platforms is an admirable goal, YouTube is casting an awfully wide net, one sure to ensnare more than just ISIS. According to its statement, the site plans to place "controversial religious or supremacist" videos in a "limited state." This approach spares YouTube from outright banning and removing such videos, but allows it to hide them from the larger audience.

In the blog post, YouTube says:

"We’ll soon be applying tougher treatment to videos that aren’t illegal but have been flagged by users as potential violations of our policies on hate speech and violent extremism. If we find that these videos don’t violate our policies but contain controversial religious or supremacist content, they will be placed in a limited state. The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes."

The announcement caused an uproar among many in the content-creator community, who are concerned that videos subjectively flagged as offensive will be placed in a purgatorial state, blocked from promotion or monetization.

The service's gatekeepers will include human content reviewers from the "Trusted Flagger" volunteer program, which works with around 15 institutions to look for "hateful" content, alongside the site's largely automated review system.

The "Trusted Flagger" program evolved into what is now known as "YouTube Heroes" which called for the public to moderate content. This program was largely criticized for allowing SJWs the ability to flag content they disagreed as "hate speech," thus eliminating the ability for debate and diversity of opinion.

In addition, YouTube's partnership with the No Hate Speech Movement has drawn concern about what should actually be deemed "hate speech." If the fate of a user's content lies in the hands of someone who disagrees with him, the likelihood of that video succeeding on the platform is severely limited.

As YouTube enters the dangerous realm of censorship, many eyes will be fixed on its ability to separate genuinely hateful content from videos that promote diverse ideas and discussions.
