1. Create a project

Visit the dashboard and create a new project. You can either create a project using one of the templates or create a blank project.

You can add multiple models to a project. Each model detects a different type of content. For example, you can create a project with a toxicity model and a profanity model to prevent inappropriate content from being posted on your platform.

Content moderation project

2. Submit content for analysis

After configuring your project, you can submit content to it for analysis.

To submit content using the API you’ll need your project’s API key. You can find it on the project page under the Settings tab. Authenticate your requests by adding the API key to the Authorization header.

Now you can submit content for analysis by sending a POST request to the /api/v1/moderate/text endpoint.
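For example, a raw request with Node’s built-in fetch might look like the sketch below. The base URL, the Bearer auth scheme, and the request body shape are assumptions here, mirroring the SDK example further down; check the API reference for the exact schema.

// Minimal sketch of a raw API call, assuming the base URL,
// Bearer auth, and body fields shown — verify against the API reference.
const response = await fetch("https://moderationapi.com/api/v1/moderate/text", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    value: "Hello world!",
  }),
});

const analysis = await response.json();

if (analysis.flagged) {
  // Return error to user etc.
}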

Alternatively, use one of our server SDKs to make integration even easier. In this example we’ll use the Node.js SDK.

Install the SDK

npm install @moderation-api/sdk

Submit content using the SDK

import ModerationApi from "@moderation-api/sdk";

const moderationApi = new ModerationApi({
  key: process.env.API_KEY,
});

// Text moderation
const textAnalysis = await moderationApi.moderate.text({
  value: "Hello world!",
  // Optional content data
  authorId: "123",
  contextId: "456",
  metadata: {
    customField: "value",
  },
});

if (textAnalysis.flagged) {
  // Return error to user etc.
} else {
  // Add to database etc.
}

// Image moderation
const imageAnalysis = await moderationApi.moderate.image({
  url: "https://example.com/image.jpg",
  // Optional content data
  authorId: "123",
  contextId: "456",
  metadata: {
    customField: "value",
  },
});

if (imageAnalysis.flagged) {
  // Return error to user etc.
} else {
  // Add to database etc.
}

Use the flagged field to decide how to handle the content. For example, if flagged is true, block the content; if it’s false, the content is safe to post.
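In practice, the moderation check usually sits in front of your content-creation endpoint. The sketch below uses Express purely as an example framework; the route name and database call are hypothetical placeholders, while the SDK calls are the same ones shown above.

// Minimal sketch of gating user-submitted text behind moderation.
// Express, the /comments route, and the db call are illustrative only.
import express from "express";
import ModerationApi from "@moderation-api/sdk";

const app = express();
app.use(express.json());

const moderationApi = new ModerationApi({
  key: process.env.API_KEY,
});

app.post("/comments", async (req, res) => {
  const analysis = await moderationApi.moderate.text({
    value: req.body.text,
    authorId: req.body.authorId,
  });

  if (analysis.flagged) {
    // Block the content and tell the user why.
    return res.status(400).json({ error: "Content rejected by moderation." });
  }

  // Safe to store, e.g. await db.comments.create(req.body) — hypothetical.
  return res.status(201).json({ ok: true });
});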

3. Review flagged content (optional)

Content that is flagged by the model will be sent to the Review Queue. This is useful if you want to combine automated content moderation with human moderation, or if you simply want to review what content is being flagged.

Review queue for reviewing and improving automated moderation

Some of the things you can do with review queues:

  • Review content before it’s published
  • Perform a manual review of content that was automatically rejected
  • Review content that was flagged by users
  • Ban users that are submitting unwanted content
  • Improve your models by correcting mistakes
  • Get insights into the content that is being submitted
  • Invite moderators to help you review content

All done!

Congrats! You’ve now automated your content moderation! Need support or want to give some feedback? You can drop us an email at support@moderationapi.com.
