Content moderation is classically defined as the process of screening user-generated content to ensure it upholds a platform’s rules and guidelines and is suitable for publishing. This can include anything from images, ads, and text-based content to forums, videos, social media pages, websites, and online communities. Overall, the goal of content moderation is to uphold your brand’s reputation and, with it, your credibility with consumers and business partners. In today’s social media-driven world, one scathing review can easily go viral. Conversely, feel-good feedback can persuade other potential customers to give your product a chance. With so much on the line, it is imperative that your business has a dedicated content moderation team.
Different Channels of Content Moderation
Broadly speaking, content moderation is about helping businesses improve their user experience (UX) by monitoring a brand’s feedback and reputation. Depending on what sector your business falls into, there may be varying standards and requirements for achieving this. That said, there are several steps a business can take to ensure content moderation is taken seriously. Let’s break down these types of moderation:
1. Pre-Moderation
Pre-moderation is exactly what it sounds like: the screening of content before it goes live. This is when several members of your team should act as a system of “checks and balances,” screening submissions to make sure controversial content never reaches the site. It is also the stage at which language should be checked for alignment with the overall brand strategy and standards.
2. Post-Moderation
Post-moderation refers to the process of continually reviewing content once it is live on the site. This should be done weekly, and as news cycles evolve (as does society), your standards should be constantly updated and “refreshed” to stay relevant and appropriate.
3. Reactive Moderation
Every major social media platform offers a report option, giving users the freedom to flag any content they feel is inappropriate and to select from different criteria explaining why they are reporting it. Reactive moderation is moderation that depends on these user reports: your company’s moderators take appropriate action only after receiving this external feedback.
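The report-driven flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a production system; the class name, the report reasons, and the escalation threshold are all hypothetical choices made for the example.

```python
from collections import defaultdict

# Hypothetical report categories a platform might offer its users.
REPORT_REASONS = {"spam", "harassment", "hate_speech", "misinformation", "other"}

class ReportQueue:
    """Collects user reports and escalates content to human moderators."""

    def __init__(self, review_threshold=3):
        # Number of distinct reporters before content is escalated.
        self.review_threshold = review_threshold
        self.reports = defaultdict(set)  # content_id -> {(user_id, reason), ...}

    def report(self, content_id, user_id, reason):
        """Record one user report; return True if the content now needs review."""
        if reason not in REPORT_REASONS:
            reason = "other"
        self.reports[content_id].add((user_id, reason))
        return self.needs_review(content_id)

    def needs_review(self, content_id):
        # Count distinct reporters, not raw reports, to blunt mass-reporting.
        reporters = {user for user, _ in self.reports[content_id]}
        return len(reporters) >= self.review_threshold

queue = ReportQueue(review_threshold=2)
queue.report("post-42", "alice", "spam")          # first report: no action yet
print(queue.report("post-42", "bob", "spam"))      # True: escalate to moderators
```

Counting distinct reporters rather than raw report volume is one common design choice: it keeps a single angry user from forcing content into the review queue on their own.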
4. Distributed Moderation
Distributed moderation can be thought of as the “voting system” of content moderation. Decisions are made by a consensus of community members who cast votes on specific content, using a rating system to mark whether a piece of content adheres to guidelines. This method is seldom used because it poses significant legal challenges for brands.
5. Automated Moderation
Automated moderation is when a business relies strictly on Artificial Intelligence (AI) for its content moderation. There are applications that can filter offensive words, but the technology has limits. Take the case of Microsoft’s Tay, a Twitter bot that the company described as an experiment in “conversational understanding.” Within a day of its launch, users had taught Tay to post deeply disturbing and even racist things, and Microsoft shut it down. This is an example of why automated moderation should always be guided by the human touch.
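One way to keep “the human touch” in an automated pipeline is to let the machine decide only the unambiguous cases and route everything borderline to a person. The sketch below assumes two hypothetical word lists (the placeholder tokens stand in for real terms); a production system would use trained classifiers, not keyword matching.

```python
import re

# Illustrative word lists -- placeholders, not a real moderation policy.
BLOCKLIST = {"slur1", "slur2"}          # unambiguous violations, auto-rejected
WATCHLIST = {"kill", "attack", "hate"}  # context-dependent, sent to humans

def automated_screen(text):
    """Return 'reject', 'human_review', or 'allow' for a piece of text."""
    words = set(re.findall(r"[\w']+", text.lower()))
    if words & BLOCKLIST:
        return "reject"        # machine decides: clear-cut violation
    if words & WATCHLIST:
        return "human_review"  # ambiguous: a moderator makes the call
    return "allow"

print(automated_screen("I will attack this problem tomorrow"))  # human_review
print(automated_screen("Great product, love it"))               # allow
```

Note the first example: “attack this problem” is perfectly benign, which is exactly why context-dependent words go to a human reviewer instead of being auto-removed.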
6. No Moderation
This is to be avoided. Many companies shockingly have no real system for content moderation beyond the bare minimum, and in some cases none at all. If this is an area your company chooses to ignore, beware of the consequences: you will be rolling the dice on your brand’s integrity and global reputation.
In short, content moderation is downright essential to your brand’s reputation and existence. Think of it as the gatekeeper at the door of your business, or its “bodyguard”: it is there to spot problematic content that could put your brand in jeopardy, from the low-level hateful comment to serious threats. Having a team, like Horatio, that can skillfully run this side of your business is essential to maintaining a healthy relationship with your consumers and the world at large.