As the world’s largest video platform, YouTube has a responsibility to police its network to ensure that it doesn’t host videos that violate its community guidelines – meaning no sexually explicit, hateful, or gratuitously violent content.
It’s made plenty of mistakes in its effort to do so, including removing legitimate videos, monetizing channels that promoted pedophilia and Nazi ideology, accidentally blocking numerous alt-right channels, and allowing its search engine to autosuggest some disturbing queries. Now, it’s revealed some interesting numbers that illustrate just how much crap it has to deal with.
In its first-ever quarterly Community Guidelines enforcement report, the company noted that it removed some 8.3 million videos that violated its community guidelines between October and December 2017 – of which some 6.7 million were first flagged by its automated systems, and 75 percent of those were removed before they racked up a single view.
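For a rough sense of scale, here’s a quick back-of-the-envelope calculation based on the report’s figures (the variable names below are ours, not YouTube’s):

```python
# Derived figures from YouTube's Q4 2017 enforcement report.
total_removed = 8_300_000    # videos removed, October–December 2017
auto_flagged = 6_700_000     # removals first flagged by automated systems
zero_view_share = 0.75       # share of auto-flagged removals with no views

# Roughly 81 percent of all removals started with a machine flag...
print(f"Auto-flagged share of removals: {auto_flagged / total_removed:.0%}")

# ...and about 5 million videos were gone before anyone watched them.
print(f"Removed before a single view: {zero_view_share * auto_flagged:,.0f}")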
That’s heartening to know: as numerous stories dating back to 2012 indicate, content moderation jobs at companies like YouTube and Facebook – which require staffers to watch scores of flagged videos, many containing horrific content – can crush employees’ souls. The more of that work we can entrust to machines, the better.