YouTube is making major changes after being exposed for distributing disturbing content targeted at kids on the video platform. It was first reported in October that weird, creepy and inappropriate videos were slipping through the filters on YouTube Kids; because minors tend to watch a series of algorithmically suggested videos, the worst of this content was being surfaced to them.
Creators who target kids or feature them in their videos will have a harder time getting views and monetizing them. YouTube will also disable comments on videos featuring minors.
The Google-owned company published a blog post on Wednesday addressing these problems and laying out its updated rules for kid-related content.
- Application of guidelines and enforcement through technology: YouTube is focusing on eliminating content that is inappropriate for kids. In the last week alone, the video-sharing platform terminated over 50 channels and removed thousands of videos under these guidelines.
The platform will also apply machine learning and automated tools to find violating content and escalate it for human review.
- Removal of inappropriate ads: YouTube updated its advertiser-friendly guidelines, making it clear that it will remove ads from any content depicting characters engaged in violent and/or offensive actions, even if done for comedic purposes.
According to YouTube, ads have already been removed from 3 million videos under this policy, with ads pulled from another 500,000 violative videos.
- Blocking inappropriate comments: YouTube will disable comments on videos featuring minors. The company revealed that it will use a combination of automation and manual human flagging and review to remove inappropriate comments on videos featuring minors.
- Guidelines for creating family-friendly content: YouTube will release comprehensive guidelines on how creators can make quality content for the YouTube Kids app.
- Engaging and learning from experts: YouTube will also consult outside experts to help shape and refine these policies.

The changes may reassure parents worried about the content their kids see on a user-generated platform like YouTube, but it appears that the new policy will still rely heavily on algorithms, and on someone spotting the problem content first.