1. Create a project

Visit the dashboard and create a new project. You can either create a project using one of the templates or create a blank project.

You can add multiple models to a project. Each model detects a different type of content. For example, you can create a project with a toxicity model and a profanity model to prevent inappropriate content from being posted on your platform.

[Screenshot: content moderation project]
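To illustrate, an analysis result from a project with these two models would carry one result per model plus an overall flag. The shape below is a hypothetical sketch for illustration only, not the actual response schema; see the API reference for the real field names.

```typescript
// Hypothetical shape of an analysis result for a project that combines
// a toxicity model and a profanity model. Field names are illustrative only.
interface AnalysisResult {
  flagged: boolean;               // true if any model in the project flagged the content
  toxicity?: { score: number };   // 0..1 score from the toxicity model
  profanity?: { score: number };  // 0..1 score from the profanity model
}
```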

2. Submit content for analysis

After configuring your project, you can submit content to it for analysis in three ways:

  • Using the dashboard

  • Using the API

  • Using an integration

The easiest way to submit content for analysis is through the dashboard: click the Send API request button on the project page.

This is mainly useful for testing and debugging purposes while configuring your project.
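For a real integration, submitting text is a single authenticated HTTP request using your project's API key. The sketch below is a minimal illustration: the endpoint path, request fields, and response shape are assumptions, so consult the API reference for the exact contract.

```typescript
// Minimal sketch of submitting text for analysis over HTTP (Node 18+,
// which provides a global fetch). Endpoint path, payload fields, and
// response shape are assumptions -- check the API reference.
const API_KEY = process.env.MODERATION_API_KEY!; // project API key from the dashboard

async function analyzeText(text: string): Promise<boolean> {
  const res = await fetch("https://moderationapi.com/api/v1/moderate/text", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ value: text }),
  });
  if (!res.ok) throw new Error(`Moderation request failed: ${res.status}`);
  const result = await res.json();
  return result.flagged === true; // true when any model in the project flags the content
}
```

You would typically call this from your submission handler before publishing anything, as sketched in step 3 below.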

3. Review flagged content (optional)

Content that is flagged by your models will be sent to the Review Queue. This is useful if you want to combine automated content moderation with human moderation, or if you simply want to review what content is being flagged.

[Screenshot: review queue for reviewing and improving automated moderation]

Some of the things you can do with review queues:

  • Review content before it’s published
  • Perform a manual review of content that was automatically rejected
  • Review content that was flagged by users
  • Ban users that are submitting unwanted content
  • Improve your models by correcting mistakes
  • Get insights into the content that is being submitted
  • Invite moderators to help you review content
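If you pair the API with the review queue, your publish path only needs to branch on the analysis result: flagged items stay unpublished while they wait for human review, and everything else goes straight through. Here is a minimal sketch reusing the hypothetical analyzeText helper from step 2; publishPost and holdForReview are stand-ins for your own application logic.

```typescript
// Sketch of gating publication on the moderation result. analyzeText is the
// helper sketched in step 2; publishPost and holdForReview are hypothetical
// stand-ins for your own application logic.
async function publishPost(text: string): Promise<void> {
  /* your publish logic */
}

async function holdForReview(text: string): Promise<void> {
  /* keep the content unpublished until a moderator approves it */
}

async function handleSubmission(text: string): Promise<void> {
  if (await analyzeText(text)) {
    await holdForReview(text); // flagged content also appears in the review queue
  } else {
    await publishPost(text);
  }
}
```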

All done!

Congrats! You’ve now automated your content moderation! Need support or want to give some feedback? You can drop us an email at support@moderationapi.com.
