Ask an average person on the street if they know about content moderation and you’ll probably get a lot of people scratching their heads. For something so integral to the online experience, content moderation is a low-key operation that has a great impact on the average consumer.

Content moderation for images, video, or text is the process of screening user-generated content for inappropriate material. Spam, hate speech, violence, pornography, illegal content, and other unwanted material can be pre-moderated or post-moderated, depending on a company’s budget and needs.

Pre-moderation is the act of reviewing content before it is posted on a platform. Post-moderation is the review of content after it goes live. Both forms of moderation can be used to clean up the user experience on a site or app.

It’s important to note that moderation is not censorship of negative views about your company – censoring criticism can actually work against your own interests. We’ll discuss this in more detail when we go over community management in a future blog post.

If you have a platform with user-generated content, you need to moderate this content.

Why Do You Need Moderation?

If you build it, they will come.

In a perfect world, moderation would be unnecessary. Everyone would adhere to (and read) a company’s Terms of Service and community guidelines.  We wouldn’t need to worry about malicious or mischievous behavior disrupting otherwise harmonious communities.

Any platform, regardless of its audience, will attract unsavory submissions, so it’s important to keep an eye out for shocking content – it will inevitably appear. On the bright side, content moderation is why we can have nice things.

Who exactly are the actors that submit bad content? While the list below is not exhaustive, it contains a few of the archetypes that every company should be aware of in order to protect their users:

  • Trolls: Some users like to shock other people for laughs. These trolls submit shocking content that is pornographic, violent, or illicit. Usually, trolls are individuals with too much time on their hands, but there is an increasing trend of state-sponsored actors deployed to sow discord in online communities by stoking racial tension, deepening political division, and generally causing chaos with hateful and misleading content.
  • Cyber-Bullies: Sometimes intentional, sometimes unintentional, cyber-bullying can occur on any platform – even between adults.
  • Predators: Some predators look to receive or share provocative photos with minors. In other cases, predators actively manipulate minors. This manipulation, or grooming, maneuvers victims into a position where they are more likely to trust the predator and feel isolated from those around them.
  • Scammers: Scammers try to trick other people into handing over valuable information such as account access or financial details. In some cases, they may even distribute malware or viruses to obtain this information.

In some cases, none of the above archetypes are responsible for inappropriate content. Sometimes it’s simply people who haven’t read a platform’s rules about quality standards or what qualifies as inappropriate content.

Image, Video, and Text Moderation

Who Needs Moderation?

Hint: Nearly Any Online Platform

Any website, blog, app, or platform that has user-generated content (UGC) needs to have a system for moderation in place. These systems can vary depending on the medium. While some platforms need to have a close eye on the content that is submitted, others can get by with a retroactive “light touch” approach to content that is flagged by their users.

Platforms that are built around a constant stream of UGC – where users upload images and videos in addition to comments – usually have a closer eye on submissions. Companies that create most of their own content but allow their users to create profiles or add comments may not need to implement very resource-intensive moderation practices.

Below are some platforms that will likely need good teams in place to moderate content:

  • Children/Teen Platforms: Platforms that are frequented by children and teenagers will experience all of the issues listed above. Younger users are the most vulnerable and can also be the most volatile if left unchecked. Companies really need to closely monitor what is going on to protect their users and preserve a clean user experience and company brand. Moderators can be deployed to review image, video, and text submissions on these platforms and even monitor for more nuanced communication to protect younger users.
  • Dating Apps: These platforms naturally attract more sexualized content, and dating apps often walk a fine line between what is considered acceptable and what is considered pornographic. Since almost all dating apps have an age restriction, moderation is also imperative for identifying underage users and avoiding app-store bans or even legal repercussions.
  • Communities: Disparate groups from around the world can form niche communities on websites or apps. These online communities can act as support networks for people that are geographically isolated or otherwise unable to find like-minded friends. Since these communities are focused on communication, moderation is key to ensuring users have a safe and comfortable space to interact and be social.
  • Marketing Companies: Smart marketing companies find unique ways to engage their audience through brand campaigns, social media pushes, competitions and other marketing projects. While interacting with targeted demographics can yield high returns on investments, there is also a higher risk involved. Inappropriate content like pornography, hate speech, and spam can seep into campaigns and detract from their overall purpose. Off-brand submissions (images, videos, or text that includes a competitor’s branding) can also be counterproductive.
  • Aggregators: Websites and apps that focus on aggregated content such as news articles, job postings, or local advertisements are subject to a lot of risk. Inappropriate content, spam, and links to illegal content such as live streams of sports events or torrents can leave a company exposed to undue risk. Community interactions on these sites also need to be monitored. In some cases, community moderators can do the job well, but a professional moderation team is also needed to investigate trends and ensure that the content on these sites meets community guidelines and quality requirements.
(Read on to learn more about how to approach moderation.)

Outsource Your Content Moderation

Do You Need to Create Your Own Moderation Tool?

This really depends on what you need. You might want to ask yourself a couple of important questions:

Do you have the engineering resources to create and maintain your own tool?

  • If you have the engineering resources, you’ll have more control over how the tool functions and what you’ll be able to do with it. If you do not have the resources, you can always integrate with the API of a vendor like Process Ninjas to outsource your moderation work (see the sketch below).
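
For illustration, here is a minimal sketch of what integrating with a vendor’s moderation API could look like. The endpoint, field names, and response shape are hypothetical placeholders, not Process Ninjas’ actual API – your vendor’s documentation defines the real interface.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint and API key -- replace with your vendor's real values.
MODERATION_ENDPOINT = "https://api.example-moderation-vendor.com/v1/review"
API_KEY = "your-api-key"

def submit_for_moderation(content_url: str, content_type: str = "image") -> dict:
    """Send a single piece of user-generated content to the vendor for review."""
    response = requests.post(
        MODERATION_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": content_url, "type": content_type},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"decision": "approve" | "reject", "reason": "..."}
    return response.json()

if __name__ == "__main__":
    result = submit_for_moderation("https://cdn.example.com/uploads/photo123.jpg")
    print(result.get("decision"), result.get("reason"))
```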

How complicated are the moderation workflows?

  • In some cases, you may want your team to look deeper into the context of submissions and a user’s history. If you need deep integration with multiple aspects of your platform, you may want to create your own tool. In other cases, you might simply send content over to your outsourcing partner to have it moderated, as sketched below.
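
As a rough sketch of that kind of context-aware routing – with made-up rules and a made-up UserHistory structure, purely for illustration – a tool with deeper platform integration might look something like this:

```python
from dataclasses import dataclass

@dataclass
class UserHistory:
    """Minimal stand-in for whatever user context your platform can provide."""
    prior_rejections: int = 0
    account_age_days: int = 0

def route_submission(content_id: str, history: UserHistory) -> str:
    """Decide whether a submission needs a deeper, in-context review."""
    # Hypothetical rules: new accounts or repeat offenders get a closer look.
    if history.prior_rejections >= 3 or history.account_age_days < 7:
        return "deep_review"       # handled in-house with full platform context
    return "standard_review"       # can be sent to the outsourcing partner's queue

print(route_submission("post-001", UserHistory(prior_rejections=4, account_age_days=200)))
print(route_submission("post-002", UserHistory(prior_rejections=0, account_age_days=365)))
```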

How Should You Moderate Your Content?

Location: In-House or Vendor?

Some companies choose to deploy in-house teams to moderate their user-generated content – this may make sense if volumes are low or if the process is extremely complicated. Individual members of a team may be able to moderate content while they handle other tasks at the office. The downside is that turnaround times may not be as quick as you need, and monitoring of the work may be incomplete due to resource constraints. Many companies instead partner with outsourcing companies and establish clear SLAs to adhere to. This provides an opportunity to leverage the expertise of an outsourcing partner to scale a team, and it can also keep costs down.

Method: Automated, Human, or Hybrid?

In some cases, AI can serve as a useful tool for filtering content, but it still comes with a margin of error that may be unacceptable as the only solution. Almost all companies that deploy an AI tool will still use human moderators.
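
A common hybrid pattern is to let the automated filter act only when it is confident and route everything else to people. The thresholds and function below are an illustrative sketch, not a prescription:

```python
def triage(ai_score: float, reject_above: float = 0.95, approve_below: float = 0.05) -> str:
    """Route content based on a classifier's 'inappropriate' probability.

    Thresholds are illustrative; each platform tunes them to its own
    tolerance for false positives and false negatives.
    """
    if ai_score >= reject_above:
        return "auto_reject"     # model is confident enough to act alone
    if ai_score <= approve_below:
        return "auto_approve"    # clearly benign, no human time needed
    return "human_review"        # the uncertain middle goes to moderators

for score in (0.99, 0.50, 0.01):
    print(score, "->", triage(score))
```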

Time: Before or After Content Goes Live?

In some cases, you may need to invest in pre-moderation so you can be sure that users are never exposed to negative content. Companies will usually need to deploy agents 24/7 to ensure content is posted on their platforms within a reasonable amount of time. Users who have to wait 24 hours for their photos and videos to go live will not be happy about the delay. Round-the-clock operations with quick response times – from thirty seconds to five minutes – ensure that content is reviewed efficiently and goes live quickly.

In other cases, post-moderation can serve a company’s needs well enough. For companies that have backlogs they want to review and clean up, the work can be done during daytime operating hours with more relaxed response times. Some platforms with high-volume submissions of images, videos, or text deploy a team for post-moderation only after content is flagged by other users. In these cases, users will inevitably see negative submissions, but quick responses to user complaints show that their voices have been heard.
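
The difference between the two approaches can be summed up in a small sketch – the queues and function names below are hypothetical, purely to show where the review step sits relative to publication:

```python
import queue

# Hypothetical in-memory queues; a production system would use a real job queue.
pre_moderation_queue = queue.Queue()    # content held back until a moderator approves it
post_moderation_queue = queue.Queue()   # already-live content flagged by other users

def submit(content_id: str, pre_moderated: bool) -> str:
    """Pre-moderation holds content; post-moderation publishes first, reviews later."""
    if pre_moderated:
        pre_moderation_queue.put(content_id)
        return "pending_review"
    return "live"

def flag(content_id: str) -> None:
    """User reports on a post-moderated platform feed the review queue."""
    post_moderation_queue.put(content_id)

print(submit("photo-1", pre_moderated=True))     # -> pending_review
print(submit("comment-9", pre_moderated=False))  # -> live
flag("comment-9")                                # later reviewed by the team
```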

Where Do You Begin?

Guidelines act as the foundation of the moderation process. While the public community guidelines that users are asked to follow are a good starting point, the internal moderation guidelines serve as the bible for the agents tasked with reviewing content. The more objective the process is, the better the workflow.
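
One way to keep the process objective is to express the guidelines as explicit category-to-action rules rather than leaving each call to individual judgment. The categories and actions below are invented examples, not a recommended policy:

```python
# Illustrative only: map each guideline category to a single unambiguous action.
GUIDELINES = {
    "spam": "reject",
    "hate_speech": "reject_and_report",
    "nudity": "reject",
    "off_brand": "reject",
    "borderline": "escalate",   # anything a moderator is unsure about
}

def decide(category: str) -> str:
    """Look up the prescribed action; unknown categories are escalated, not guessed."""
    return GUIDELINES.get(category, "escalate")

print(decide("spam"))     # reject
print(decide("satire"))   # escalate -- not covered, so a lead reviews it
```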

You can adapt general guidelines from a vendor like Process Ninjas to expedite the development of your guidelines.

Once you have your guidelines established, you can create a tagging sheet that contains samples of content that moderators may encounter. On the tagging sheet, moderators (either in-house or from the vendor) mark their decisions and add notes for review. The teams can then calibrate by checking whether these decisions are correct.
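
A tagging sheet can be as simple as a spreadsheet or CSV. The columns, sample rows, and scoring function below are a hypothetical format for recording decisions and measuring calibration, not a standard template:

```python
import csv
import io

# Hypothetical tagging-sheet rows: sample content, the expected decision from the
# guidelines, the moderator's decision, and free-form notes for calibration calls.
TAGGING_SHEET = """sample_id,description,expected,moderator_decision,notes
001,profile photo with partial nudity,reject,reject,clear guideline match
002,comment with a competitor's logo,reject,approve,moderator missed off-brand rule
003,sarcastic but non-hateful joke,approve,approve,
"""

def calibration_score(sheet_text: str) -> float:
    """Share of rows where the moderator's decision matched the expected decision."""
    rows = list(csv.DictReader(io.StringIO(sheet_text)))
    matches = sum(1 for r in rows if r["moderator_decision"] == r["expected"])
    return matches / len(rows)

print(f"calibration accuracy: {calibration_score(TAGGING_SHEET):.0%}")  # 67%
```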

Once moderators are aligned with the moderation guidelines, operations can begin.

Is there anything we missed? Shoot us a message if you have any questions or comments about moderating your content.
