
How Does Content Moderation Using AI Work?


With the advent of the internet, it has never been easier to publish content. Anyone with a smart device and a web connection can publish on a multitude of platforms built for that purpose. In recent years, a global debate has emerged around the risks faced by internet users, with a particular focus on protecting them from harmful content. A key element of this debate has centered on the role of content moderation in shielding users from potentially harmful material.

What is content moderation?

Content moderation is a common practice across online platforms that depend on user-generated content to drive their growth. These include social media platforms, e-commerce marketplaces, online forums, media aggregators, and platforms with user communities.

Publishing unmoderated user-generated content comes with risks. This content could contain irrelevant, obscene, illegal, fraudulent, or otherwise inappropriate material that is unsuitable for public viewing. Scalable content moderation processes are required to manage and publish the high volumes of user-generated content on these platforms, and content moderation is necessary to protect both their users and their brand reputation.

Manual content moderation

Tech giants such as Google, YouTube, and Facebook outsource their content moderation work to companies that employ large numbers of human moderators for this job. These moderators screen content based on platform-specific rules and guidelines. According to a report by NYU Stern, Facebook content moderators review flagged content 3 million times a day. This is in itself a difficult task, and the numbers increase every day. CEO Mark Zuckerberg has also admitted that the moderators “make the wrong call in more than one out of every 10 cases” while reviewing flagged content. Content moderation can also cause post-traumatic stress disorder in moderators who review “violent extremist” content.

Content moderation using AI

Artificial Intelligence is making great strides in digital content moderation, bringing a level of objectivity and consistency that manual content moderation cannot match. Driven by machine learning, it optimizes the moderation process, using algorithms that learn from existing data to make sophisticated review decisions about user-generated content.

AI can be used to optimize the common online content moderation workflows used by many organizations (a simple sketch of both follows the list):

  1. Pre-moderation - when the uploaded content is moderated before publication, typically using automated systems
  2. Post or reactive moderation - when content is moderated after it has been published, either because it has been flagged by other users or automated processes as potentially harmful, or because it was removed previously and requires a second review upon appeal
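
As a simple illustration, the Python sketch below shows how these two workflows might be wired up. The classify() function is a stand-in for any trained moderation model, and the 0.8 threshold is purely illustrative.

```python
# Minimal sketch of pre- and post-moderation routing.
# classify() is a placeholder for a real moderation model.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    text: str
    published: bool = False

def classify(text: str) -> float:
    """Placeholder: a real model would return a harmfulness score in [0, 1]."""
    return 0.0

def pre_moderate(post: Post, threshold: float = 0.8) -> None:
    """Screen content before it goes live."""
    if classify(post.text) < threshold:
        post.published = True   # low risk: publish immediately
    # otherwise hold the post for human review before publication

def post_moderate(post: Post, flagged_by_users: bool, threshold: float = 0.8) -> None:
    """Re-check content that is already live and has been flagged or appealed."""
    if flagged_by_users and classify(post.text) >= threshold:
        post.published = False  # take the post down pending review
```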

Content-based moderation

This involves analyzing the material that has been posted on its own merits without considering the wider context. For example, this may include looking for entities within an image or a piece of text.

Text Moderation

Interpreting and understanding language within text content is challenging.

Natural language processing (NLP) is the umbrella term for techniques that interpret text content, and AI is contributing to advances in this field. In the moderation of online content, NLP is used to process written text and can also be applied to speech by first using speech-to-text techniques.

There are many NLP techniques that can be used in the moderation of text-based online content. One of them is named entity recognition (NER), which automatically identifies the people, organizations, places, and other entities mentioned in a piece of text. These signals can help flag harmful content such as terrorist propaganda, harassment, bullying, ‘fake news’, and hate speech.
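
As an illustration, the sketch below uses spaCy's pretrained English pipeline to pull out named entities that a moderation rule set might watch for. The entity types in WATCHED_LABELS are illustrative, and the example assumes the small English model (en_core_web_sm) has already been downloaded.

```python
# Sketch: surface named entities that downstream moderation rules might act on.
import spacy

nlp = spacy.load("en_core_web_sm")          # small pretrained English pipeline
WATCHED_LABELS = {"PERSON", "ORG", "GPE"}   # illustrative entity types to surface

def extract_entities(text: str):
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents if ent.label_ in WATCHED_LABELS]

print(extract_entities("Example post mentioning Acme Corp and John Doe in London."))
# e.g. [('Acme Corp', 'ORG'), ('John Doe', 'PERSON'), ('London', 'GPE')]
```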

Sentiment analysis is another technique used to classify portions of text, from simple positive or negative labels through to more subtle labels that capture the type and level of emotion.
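
A minimal sketch of this idea, using the Hugging Face transformers sentiment pipeline, is shown below; the NEGATIVE-label check and confidence threshold are assumptions about how a platform might route text for review, not a recommended policy.

```python
# Sketch: flag strongly negative text for human review.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default pretrained model

def needs_review(text: str, threshold: float = 0.95) -> bool:
    result = sentiment(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    return result["label"] == "NEGATIVE" and result["score"] >= threshold

print(needs_review("This is the worst thing I have ever seen."))
```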

Image and Video Moderation

Object detection and semantic segmentation are computer vision technologies that enable machines to detect and identify harmful objects and their location within an image. These technologies use image processing techniques to identify regions of an image or video and associate this with a predefined class. Additionally, optical character recognition (OCR) can be used to identify and transcribe text within images.

These techniques are capable of inferring object information from images. They can be used to identify weapons, body parts, faces, and text within images.
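
For illustration, the sketch below runs a pretrained torchvision detector over an image and returns the detected classes, confidence scores, and bounding boxes. The pretrained COCO label set does not cover moderation-specific categories such as weapons, so a real system would fine-tune a detector on its own labels; the weights argument assumes a recent torchvision release.

```python
# Sketch: list objects detected in an image with a pretrained detector.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # COCO-pretrained

def detect(path: str, min_score: float = 0.7):
    image = transforms.ToTensor()(Image.open(path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    return [
        (int(label), float(score), box.tolist())
        for label, score, box in zip(output["labels"], output["scores"], output["boxes"])
        if score >= min_score
    ]
```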

In the case of videos, computer vision allows each frame to be understood relative to the frames that precede it, which is important for true video understanding. This sequential understanding is useful for any time-series data where the order of the sequence matters, in addition to its individual elements.
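
A minimal sketch of frame-by-frame video scanning with OpenCV is shown below. The score_frame() function stands in for any image moderation model, and a true video model would also use temporal features shared across frames rather than scoring each frame independently.

```python
# Sketch: sample video frames in order and record the ones that look harmful.
import cv2

def score_frame(frame) -> float:
    """Placeholder: an image-moderation model applied to a single frame."""
    return 0.0

def scan_video(path: str, every_n: int = 30, threshold: float = 0.8):
    capture = cv2.VideoCapture(path)
    flagged, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0 and score_frame(frame) >= threshold:
            flagged.append(index)  # remember which frames exceeded the threshold
        index += 1
    capture.release()
    return flagged
```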

Context-based moderation

Context-based moderation uses the context surrounding content to analyze its harmfulness. The context can be used in addition to the content itself to indicate how harmful that content may be. In many cases, the context provides an essential element in determining the intent of the content.

Context can cover a broad range of factors: it may require historical or geographical information (country-specific, or specific to even smaller areas), and it can depend on gender, sexuality, age, religion, race, and language. Context can also change over time, for example referencing the latest news stories, the latest slang, or even the most recent post by another user.
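
One simple way to picture this is a score that folds a few context signals into the content score, as sketched below. The particular signals and weights are purely illustrative, not a recommended policy.

```python
# Sketch: combine a content score with simple context signals.
def contextual_score(content_score: float,
                     prior_strikes: int,
                     recent_reports_on_topic: int) -> float:
    score = content_score
    score += 0.05 * min(prior_strikes, 5)             # repeat offenders weighted up
    score += 0.02 * min(recent_reports_on_topic, 10)  # topic is currently being reported
    return min(score, 1.0)

print(contextual_score(0.6, prior_strikes=3, recent_reports_on_topic=4))  # 0.83
```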

Advantages of content moderation using AI

Improve the effectiveness of human moderation

Once the AI system analyzes and assesses a piece of content, it can provide ‘scores’ that indicate its level of confidence that the content falls into a particular category. If the confidence level is sufficiently high, the content may be removed automatically or sent to a human for manual review. This improves the effectiveness of the human moderation team by allowing them to prioritize their workflow. Prioritizing content for manual review ensures that the most damaging content is reviewed most urgently, limiting the exposure of internet users to harmful content before it is reviewed and taken down.
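
A minimal sketch of this kind of score-based triage is shown below; the two thresholds are illustrative and would be tuned against each platform's policies and tolerance for error.

```python
# Sketch: route content based on the model's confidence score (0 to 1).
def triage(model_score: float) -> str:
    if model_score >= 0.95:
        return "auto_remove"      # high confidence: take down immediately
    if model_score >= 0.60:
        return "priority_review"  # uncertain: send to the top of the human queue
    return "publish"              # low risk: publish (optionally sample for QA)

for score in (0.97, 0.72, 0.10):
    print(score, triage(score))
```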

Reduce the exposure of harmful content to moderators

AI could use this scoring system to allocate content to moderators based on what they have recently been exposed to. Lower scores reflect the AI's uncertainty and indicate content that is less obviously harmful but may still require moderation. AI techniques can also be used to protect content moderators during manual review by blurring the most harmful areas of flagged content. If further information is required, the harmful areas are gradually revealed until sufficient evidence is visible to determine whether the content should be removed.
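
As a rough sketch, the snippet below blurs flagged regions of an image with Pillow before a moderator sees it; the bounding boxes are assumed to come from an upstream detection model.

```python
# Sketch: blur flagged regions of an image before showing it to a moderator.
from PIL import Image, ImageFilter

def blur_regions(path: str, boxes, radius: int = 25) -> Image.Image:
    image = Image.open(path).convert("RGB")
    for box in boxes:  # each box is (left, top, right, bottom) in pixels
        region = image.crop(box)
        image.paste(region.filter(ImageFilter.GaussianBlur(radius)), box)
    return image

# e.g. blur_regions("flagged.jpg", [(40, 60, 200, 220)]).save("safe_preview.jpg")
```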

What is the Right Content Moderation Strategy?

An organization's content moderation strategy depends on its goals, its audience, and the type of content associated with its platform.

Skyl.ai provides expert consultation on content moderation implementation and strategy across various verticals. Check out the content moderation solutions that can be built using the Skyl.ai platform here. 

