Formulating and building tools to make our lives easier is part of what makes us who we are. We’ve always found ways to take shortcuts and streamline the way we do things. Whether we adjust processes to maximize productivity or create tools that do the work more effectively, this kind of progress is expected in almost every industry.

 

Automation has been changing the landscape of industries around the world for a long time now. Early technological innovations allowed people (and companies) to devote less manpower to many tasks. Today, new software tools are making it easier for companies to get more work done with fewer people. We’re able to hand over far more complex tasks to AI and machine learning tools, and in many cases these solutions are quicker and less costly than assigning them to teams of human agents.

 

Since computers are getting better at a lot of tasks like image recognition and sentiment analysis, do humans still have a role to play when it comes to content moderation and social media management?

 

Eyes of the Machines

Get it? We like puns around here.

 

First off, let’s take a look at image moderation. Image recognition tools are getting much better at identifying a lot of different things – particularly things that easily stand out. Many of the big-name players with the resources to develop these technologies have built their own image recognition solutions. These systems are trained on massive volumes of data that are curated by humans. If you haven’t noticed yet, the CAPTCHAs that ask you to identify stoplights, bicycles, crosswalks, and other items are designed to crowdsource the training of machines. It’s a bit funny to know that a lot of the items you’re asked to identify will probably be easy tasks for AIs in the future.

 

These AI tools can now identify a growing list of items, and they are getting quite precise at their tasks. They are pretty great once they have been well trained, but they are not infallible. For demonstrative purposes, we’re going to use a very simple example below to show you what we mean.

 

Take a look at the photos below:

For a human, the above images are all quite easy to identify if we are looking for cats. Whether they are real or animated, we know what a cat looks like. In many cases, an AI can also do this task for up to 98% of the images it encounters. 98% is a great level of accuracy compared to where machine learning tools were even five years ago. However, on a platform that processes hundreds of thousands to millions of images, a two percent margin for error in moderation can still let a lot of submissions through that might not be acceptable to have on your platform.
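To put that two percent into perspective, here’s a quick back-of-the-envelope sketch. The monthly upload volume is a hypothetical example we picked for illustration, not a figure from any real platform.

```python
# Back-of-the-envelope sketch: how a small error rate adds up at scale.
# The monthly volume below is an assumed example, not a real platform figure.

monthly_images = 1_000_000   # hypothetical upload volume per month
accuracy = 0.98              # the roughly 98% accuracy discussed above

misclassified = monthly_images * (1 - accuracy)
print(f"Images the model gets wrong each month: {misclassified:,.0f}")
# -> Images the model gets wrong each month: 20,000
```

Twenty thousand questionable calls a month is not a rounding error – it’s a queue.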

 

Let’s go back to the cats – and in this case, think of these cats as a very child-friendly stand-in for content that you wouldn’t want submitted to your platform. If you take a look at the pictures above, you can easily see that there are cats in every image. In some cases, AI may also be able to identify the cats. In other cases, it makes mistakes – identifying only the moon, or losing the cat in a pattern that doesn’t fit what it was trained to recognize. Considering the very standard qualities of a cat, these should be relatively easy to identify. Now what about negative content like gore, nudity, racism, hateful posts, or spam?

 

Right now, machine learning tools are getting better at identifying a wider range of photos, and they are becoming great additions to your image moderation arsenal. However, if you’re working on a platform that ingests millions of images every month, even a 1-2% band where the tool can’t confidently make a decision means you’ll still need another layer of moderation to ensure that your platform is protected.
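In practice, that extra layer usually takes the form of a human-in-the-loop rule: anything the model isn’t sure about goes to a person. Here’s a minimal sketch of that idea – the labels, thresholds, and function names are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal human-in-the-loop routing sketch. Labels and thresholds are
# hypothetical placeholders, not a specific moderation vendor's API.

def route_image(label: str, confidence: float) -> str:
    """Decide what to do with one image based on the model's output."""
    if label == "unacceptable" and confidence >= 0.98:
        return "reject"                # clearly violating content
    if label == "acceptable" and confidence >= 0.98:
        return "approve"               # clearly fine content
    return "send_to_human_review"      # the uncertain 1-2% gray zone

# A low-confidence call ends up in the human review queue.
print(route_image("unacceptable", 0.62))  # -> send_to_human_review
```

The machines handle the obvious cases at volume; the humans handle everything the machines can’t commit to.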

 

Now, if you run a platform that wants to focus on the quality and curation of the images that get posted – you’re still out of luck with automated moderation. When it comes to judging quality, this is still something that can only be done by humans.

 

But are images and (to an extent) videos the only content where you still need to deploy human reviewers? Surely text is something that can be completely automated.

Do Androids Read of Electric Sheep?

Until the robots can out-pun us, we still reign supreme.

 

Surely text moderation is something that humans don’t need to do anymore. All you need is a filter for bad words and you’re all set, right? The thing with text-based moderation is that it can actually be more of a challenge than image moderation. Blocklists that contain keywords are useful, but they aren’t foolproof.

 

Moderation of text is also an important process for protecting your users and your brand. Automated moderation is great at catching a good number of expletives and some spam, but it works within a very narrow range when reviewing content. More nuanced review of content like harassment, bullying, hate speech, and calculated manipulation is still out of the reach of automated tools.

In many cases, automated text-based moderation is a great first step to maintain civil discussion and reduce foul language and noise from bots.

 

It’s important to understand that automated blocklists are far from perfect. People who intentionally spam, troll, or manipulate others are skilled at getting around filters by using irregular language, spacing, and spelling. The blocklists provided on the top social media channels only offer a limited level of automated moderation. This becomes even more complicated when submissions are posted in multiple languages, since the same word can carry different meanings in different languages.
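To make the point concrete, here’s a toy example of how easily spacing and character swaps slip past a plain keyword filter, and how even a simple normalization step changes the picture. “badword” stands in for an actual blocked term, and the normalization rules are a simplified illustration, not a production-grade approach.

```python
# Toy illustration of why a plain keyword blocklist is easy to dodge.
# "badword" stands in for a real blocked term; the normalization rules
# are a simplified example, not a production-grade filter.

BLOCKLIST = {"badword"}

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"})

def naive_filter(text: str) -> bool:
    """Returns True if the raw text contains a blocked word."""
    return any(word in text.lower() for word in BLOCKLIST)

def normalized_filter(text: str) -> bool:
    """Strips spacing and punctuation and undoes common letter swaps first."""
    collapsed = "".join(ch for ch in text.lower().translate(LEET_MAP) if ch.isalnum())
    return any(word in collapsed for word in BLOCKLIST)

print(naive_filter("b a d w o r d"))       # False - spacing slips past the filter
print(normalized_filter("b @ d w 0 r d"))  # True - normalization catches it
```

And even a smarter filter like this still has no idea whether a perfectly clean sentence is actually a targeted insult, sarcasm, or coordinated harassment – that’s where the humans come in.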

 

You can deploy people on the big-name platforms like Facebook, LinkedIn, YouTube, and other social media sites, or on any of your own properties like a forum, comments section, or app. When you work with an internal team or an outsourcing vendor to deploy human moderators to review text, they’ll be able to take a more nuanced view of all of the submissions posted on your pages. These human moderators will have a deeper understanding of your community guidelines and policies, so they can take a closer look and understand the full context of interactions between users.

More importantly, you can deploy human moderators to actually respond to your community – whether they are customers, prospects, fans, or otherwise. These agents won’t just moderate the content; they’ll be able to respond to any inquiries, alleviate tension, and escalate issues that arise.

This is an important point – automated text-based moderation is only a rudimentary filter. If you have a good team of moderators helping with your community management and social media moderation, you can also deploy them to facilitate conversations and stimulate your audience. Your audience will be more engaged and take more interest in you and your brand. If you’re able to maintain that interest, it will greatly improve the value of your social media channels and your brand name.

 

There’s one more thing that’s important to mention regarding human moderation of text. Destructive practices like hate speech, radicalization, and misinformation require a deft human touch to review. If you genuinely want to protect your audience from all angles, you should consider deploying human moderators to review submissions with an understanding of context and of the wider impact that certain discussions have on your audience, your brand, and the world.

 

Another added value of deploying capable moderators comes in if your platform allows the posting or sharing of videos. These moderators can not only review comments on the videos, but also pick up on what posters are saying in the audio to make sure they aren’t promoting misinformation, encouraging harmful behavior, or just generally spouting expletives left and right.

 

Just like text-based moderation, audio moderation is still something best left to humans. Your platform will be in better hands with capable human moderators keeping an eye on it.

Okay Computer, Let’s Work Together

There’s still a long road ahead before dey take our jobs

 

In terms of our technological development of artificial intelligence, we are making great strides. At this point, we are in the age of “artificial narrow intelligence”, “narrow AI”, or “weak AI”. These terms describe tools that are capable of doing singular tasks within a very limited context. Right now, many moderation tools can do a semi-decent job if you are just looking to detect basic things. Go deeper into moderating content, and you’ll have difficulties detecting fraud or catching skillfully crafted content that was developed to dupe AI moderators.

 

For now, we’ll still need to work in conjunction with narrow AI to properly moderate and review content. Judging by the headaches that the big tech giants are facing right now, the use of automated tools on a lot of platforms may be the cause of many of our current woes.

 

Looking at manipulation from domestic and foreign governments, abuse of copyright claims by media companies, and algorithmically generated echo chambers (e.g. “next video you might like” or “groups you should join” suggestions that funnel people toward anti-vaxxers and flat-earthers), it seems we are still a long way off from striking the right balance between automated technologies and human-based solutions.

 

We’re also still a way off from “Artificial General Intelligence”, aka “human-level AI” or “strong AI”. This is, you guessed it, artificial intelligence that would be capable of understanding and reasoning in the same way a human would. At that point we’ll be entering territory that will be wholly foreign to pretty much everyone. It’s difficult to say how long it will take humanity to get there, or what will happen once we do.

 

Stephen Hawking thought that creating strong AI might spell the end of humanity. We’ve seen plenty of movies, TV shows, and books that have played out doomsday scenarios in which the robots come to take not only our (moderation) jobs, but our lives. Some people, on the other hand, have a brighter view of our future artificial companions.

 

Hopefully, artificial intelligence will turn out to be a beneficial companion to our lives. We might get to the point where we won’t need to work at all – or where our new artificial intelligence workforce starts forming its own worker unions. We don’t really need to get into the philosophy of all this in an article about moderation, but the ultimate takeaway is that our current tools, while great, are not a substitute for human moderators. It’s important to avoid getting swept up in the rush to deploy technical solutions before asking whether you should really be bringing in humans to help with your tasks.

 

For now, you can either deploy your own talent to moderate your platforms or partner with an outsourcing vendor to help you moderate your platform, protect your brand, and make sure the user-generated content that gets posted is something you would be proud to have there.

 

If you are interested in looking for an outsourcing vendor, you can always reach out to the team here at Process Ninjas to see how we can help you out. We can assist with a lot of different tasks, and we’re more than happy to lend our advice even if you don’t choose us as a vendor.
