Over the past few weeks, major multinational brands, including AT&T and Johnson & Johnson, have pulled millions of ad dollars from YouTube after discovering that its algorithms had placed their ads next to racist and otherwise objectionable videos.

On YouTube, ad revenue is shared with content creators, and algorithms place highly targeted ads on videos that appeal to specific audiences. Over the past few months, these algorithms have reportedly placed Verizon ads on videos by a potentially criminal Egyptian cleric and by an individual who preaches violence in Pakistan. Similarly, ads from Coca-Cola, P&G and Walmart were placed next to racist and anti-Semitic content, and the creators of that content were paid through YouTube’s revenue-sharing model.

The media has covered why this is a moral problem. Let’s discuss why it is also a technology problem. It would be impossible for Google to manually review the more than 300 hours of video uploaded to YouTube every minute, in hundreds of languages. So to keep ad dollars from funding hate speech, Google will need algorithms that can figure out which content might be offensive and either filter that content out or withhold ads from it.
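To make the demonetization idea concrete, here is a minimal sketch that frames the problem as text classification over video titles and descriptions. Everything in it is hypothetical: the training examples, the labels, the monetizable helper and its max_risk threshold. A production system would use far richer signals (audio, frames, comments) at vastly larger scale.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled video metadata: 1 = offensive, 0 = acceptable.
texts = [
    "sermon preaching violence against unbelievers",
    "racist rant blaming immigrants for everything",
    "cute cat compilation funny moments",
    "guitar tutorial for beginners lesson one",
]
labels = [1, 1, 0, 0]

# TF-IDF features over unigrams and bigrams, scored by logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def monetizable(description: str, max_risk: float = 0.5) -> bool:
    """Withhold ads when the predicted offense probability is too high.
    In practice the threshold would be tuned to favor advertiser safety."""
    risk = model.predict_proba([description])[0, 1]
    return risk < max_risk

print(monetizable("relaxing acoustic guitar playlist"))
print(monetizable("cleric calls for violence in new sermon"))
```

Even this toy version shows the core trade-off: lowering max_risk protects advertisers but pulls revenue from more legitimate creators.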

This is a daunting task. While image-processing algorithms could quickly identify a gun, it’s much more difficult to distinguish a gun in a music video or a movie from one in a terrorist video. And natural language processing algorithms would need to detect nuanced context to make the same distinction between art and hate speech…
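To illustrate the context problem, here is a runnable toy in the same spirit. It assumes an upstream vision model has already produced object labels for each video (they are supplied by hand here), and it stands in for an NLP model with a crude violent-vocabulary score; every name and threshold is invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    transcript: str
    object_labels: set = field(default_factory=set)  # from an upstream vision model

# Crude stand-in for an NLP model: the density of violent vocabulary.
VIOLENT_TERMS = {"kill", "attack", "destroy", "enemies", "infidels"}

def context_score(video: Video) -> float:
    """Fraction of transcript words drawn from violent vocabulary (0.0 to 1.0)."""
    words = [w.strip(".,!?") for w in video.transcript.lower().split()]
    return sum(w in VIOLENT_TERMS for w in words) / len(words) if words else 0.0

def ad_safe(video: Video, threshold: float = 0.05) -> bool:
    if "gun" not in video.object_labels:
        return True
    # A gun alone is ambiguous: a music video and a terrorist video can carry
    # the same object labels. Language context is what breaks the tie.
    return context_score(video) < threshold

music = Video("Desert Road (Official Video)",
              "riding through the night with my guitar and my crew",
              {"gun", "car", "guitar"})
threat = Video("Message to the West",
               "we will attack and kill the enemies wherever they hide",
               {"gun", "flag"})

print(ad_safe(music))   # gun present, benign language -> True
print(ad_safe(threat))  # gun plus violent language -> False
```

The point is structural: the object label "gun" is identical for both videos, and only the language context separates them.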

This is an excerpt. Read the full article here.
