1. Create a project

Visit the dashboard and create a new project. You can either start from one of the templates or create a blank project.

You can add multiple models to a project, and each model detects a different type of content. For example, you can create a project with a toxicity model and a profanity model to prevent inappropriate content from being posted on your platform.

2. Submit content for analysis

After configuring your project you can submit content to it for analysis.

To submit content using the API you’ll need your project’s API key, which you can find on the project page under the Settings tab. Authenticate your requests by adding the key to the Authorization header.

Now you can submit content for analysis by sending a POST request to the /api/v1/moderate/text endpoint.
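
For example, using fetch in Node.js (the base URL, the Bearer scheme, and the request/response field names here are assumptions for illustration; check the API reference for the exact shapes):

    // Submit text for analysis. The base URL and field names are assumptions;
    // consult the API reference for the exact request shape.
    const response = await fetch("https://moderationapi.com/api/v1/moderate/text", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.MODERATION_API_KEY}`, // your project API key
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ value: "Some user-generated text" }),
    });

    const result = await response.json();
    console.log(result.flagged); // true when the content should be blocked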

Or use one of our server SDKs to make integrating with the API even easier. In this example we’ll use the Node.js SDK.

Install the SDK
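
For example, with npm (the package name is an assumption; use the one shown in your dashboard’s integration instructions):

    npm install @moderation-api/sdk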

Submit content using the SDK
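
A minimal sketch, assuming the SDK exposes a client constructor and a moderate.text method (check the SDK’s README for the exact names):

    import ModerationApi from "@moderation-api/sdk";

    // Initialize the client with your project API key.
    const moderationApi = new ModerationApi({
      key: process.env.MODERATION_API_KEY,
    });

    // Submit a piece of text to the project for analysis.
    const analysis = await moderationApi.moderate.text({
      value: "Some user-generated text",
    });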

Use the flagged field to decide how to handle the content. For example, if flagged is true, block the content; if it’s false, the content is safe to post.
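
A minimal sketch of that branch, reusing the analysis result from the SDK example above:

    if (analysis.flagged) {
      // Block the content, e.g. reject the submission and notify the user.
    } else {
      // Safe to publish.
    }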

Dry-run on your production data

You can submit content for analysis without actually moderating it. This is useful for testing and debugging while configuring your project, or simply to see how the moderation models perform on your data.

In dry-run mode the API always returns flagged: false, but the content is still analyzed and will show up as flagged in the dashboard.

Enable dry-run mode per project in the Settings tab.

3. Review flagged content (optional)

Content that is flagged by the model will be sent to the Review Queue. This is useful if you want to combine automated content moderation with human moderation, or if you simply want to review what content is being flagged.

Some of the things you can do with review queues:

  • Review content before it’s published
  • Perform a manual review of content that was automatically rejected
  • Review content that was flagged by users
  • Ban users that are submitting unwanted content
  • Improve your models by correcting mistakes
  • Get insights into the content that is being submitted
  • Invite moderators to help you review content

All done!

Congrats! You’ve now automated your content moderation! Need support or want to give some feedback? You can drop us an email at support@moderationapi.com.
