Google is looking to beef up its crackdown on hateful, offensive, and violent content on YouTube by hiring more than 10,000 people to monitor content on the site.
In a blog post on Monday, YouTube CEO Susan Wojcicki wrote that YouTube has been exploited by “bad actors” to “mislead, manipulate, harass or even harm.”
Wojcicki wrote that while YouTube has been successful in protecting against “violent or extremist content” with its new computer algorithms and staff, it is now moving toward getting rid of other “problematic content” on the site.
One of Wojcicki’s proposals to try to make YouTube “safer” is to hire over 10,000 people with the purpose of policing content:
We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
Wojcicki also wrote that YouTube will employ “aggressive” monitoring of the comments sections of videos that are deemed out of hand:
We are also taking aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether.
Wojcicki promises that the crackdowns on speech will be transparent, with YouTube releasing reports about what content ultimately gets removed or censored:
We understand that people want a clearer view of how we’re tackling problematic content. Our Community Guidelines give users notice about what we do not allow on our platforms and we want to share more information about how these are enforced. That’s why in 2018 we will be creating a regular report where we will provide more aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies. We are looking into developing additional tools to help bring even more transparency around flagged content.
According to the YouTube policies website, things that violate policies include nudity, violence, spam, “harassment and cyberbullying,” and “incitement to hate content.”
YouTube defines “incitement to hate” content as any content whose “main purpose is to incite hatred or violence against people or groups based on certain attributes,” including race, religion, gender, sexuality, and veteran status.
In August, YouTube announced it would be suppressing videos with “offensive viewpoints” by showing a “warning” before allowing viewers to see the content. The videos also would not qualify for advertising and would be difficult to find using the site’s search feature.
It will be interesting to see which videos YouTube ultimately decides to enforce its policies against—whether videos that promote dangerous ideologies, or merely videos that many people claim promote dangerous ideologies.