The Logan Paul Incident Illustrates Why YouTube Will Never Be 100% Brand-Safe

January 15, 2019

Just a couple of days into the new year, YouTube already has another brand safety crisis on its hands.

Logan Paul, a YouTube star with 15 million followers who has inked marketing deals with Walmart and Dunkin' Donuts, uploaded a video over New Year's weekend in which he visited Japan's Aokigahara, a forest known as a destination for people who take their own lives.

In the video, Paul, 22, stumbled upon a dead body and appeared to make light of the discovery. After complaints, Paul took the video down himself and issued an apology.

For YouTube, the incident came after a year in which advertisers reckoned with the fact that the video network has no reliable way to police its content. Although YouTube has made some strides toward becoming more brand-safe, this latest lapse shows that the site will never get to 100 percent. Here's why:

More human moderators probably wouldn't have prevented this: Even though YouTube announced in December that it was hiring 10,000 human moderators, that's no guarantee of protection, since a human moderator approved the Logan Paul video, according to a report in The Telegraph. (YouTube has declined to comment on whether a human moderator approved it.) According to that report, even though the video had been flagged for objectionable content, the moderator who reviewed it decided to let it stay up. A YouTube "Trusted Flagger" also said on Twitter that YouTube's human moderators decided to keep the video up. YouTube's instructions to moderators are often confusing, and moderators focus on relevance to search results rather than on objectionable content, according to a report by BuzzFeed.

Logan Paul's video was reported and YouTube manually reviewed it; they decided to leave it up without even an age restriction... people who have re-uploaded it since have received strikes for graphic content. Ridiculous.

YouTube's algorithms are hit and miss: Bad actors have figured out ways to game YouTube's filtering algorithms, or the algorithms have missed their targets. That doesn't bode well for YouTube's artificial-intelligence-driven approach. Nor does the fact that YouTube's AI system uses data from human moderators whose analysis of content is often lacking. In November, YouTube's success rate for screening extremist-related content using AI was only 75 percent.

There's way too much content to police: Users upload some 400 hours of video to YouTube every minute. To screen all of that content, YouTube would need to hire millions of people. A flawed algorithm won't do the job.

Rather than aim for 100 percent brand safety, YouTube might consider something else: transparency.

Since there is no way for YouTube to prevent similar incidents in the future, the best approach is for the company to be more transparent and offer public edit logs, à la Wikipedia. That way, the company can at least show when a human screener failed or when its own rules weren't stringent enough.
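To make the edit-log idea concrete, here is a minimal sketch, in Python, of what a public, append-only moderation log could look like. The ModerationLog class, its fields and the sample entries are hypothetical illustrations of the concept, not an actual YouTube or Wikipedia API.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class ModerationEntry:
    """One publicly visible decision about a flagged video (hypothetical schema)."""
    video_id: str
    action: str      # e.g. "kept_up", "age_restricted", "removed"
    reviewer: str    # e.g. "human" or "automated"
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class ModerationLog:
    """Append-only log: each entry is chained to a hash of the previous one,
    so quiet after-the-fact edits to the record are detectable."""

    def __init__(self):
        self._entries = []
        self._hashes = ["0" * 64]  # genesis hash

    def append(self, entry: ModerationEntry) -> str:
        payload = json.dumps(asdict(entry), sort_keys=True) + self._hashes[-1]
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self._entries.append(entry)
        self._hashes.append(digest)
        return digest

    def history(self, video_id: str):
        """Return the full decision trail for one video."""
        return [e for e in self._entries if e.video_id == video_id]


# Example: the kind of decision trail advertisers and users could inspect.
log = ModerationLog()
log.append(ModerationEntry("abc123", "kept_up", "human",
                           "reviewed flag; judged within policy"))
log.append(ModerationEntry("abc123", "removed", "human",
                           "uploader deleted video after complaints"))
for entry in log.history("abc123"):
    print(entry.timestamp, entry.action, "-", entry.reason)
```

Chaining each entry to the previous one is what would give such a log its value: it lets outsiders see not just the final outcome but who decided what, and when, without the platform being able to quietly rewrite the record later.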

Marketers, meanwhile, should factor a certain amount of brand risk into every YouTube buy and accept that when a young person with an inflated ego is handed a camera, they are apt to do or say something stupid.