To analyze content, send a POST request to the /moderate endpoint. The API accepts multiple content types (text, image, object, video, and audio) and returns moderation results directly in the response.
import ModerationAPI from "@moderation-api/sdk";

// Configure with environment variable MODAPI_SECRET_KEY
const moderationApi = new ModerationAPI();

// Text moderation
const textResult = await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Hello world!",
  },
  // Optional content data
  contentId: "text-1",
  authorId: "user-123",
  conversationId: "room-456",
  metadata: {
    customField: "value",
  },
});

// Use the API's recommendation
if (textResult.recommendation.action === "reject") {
  // Block the content
} else if (textResult.recommendation.action === "review") {
  // Send to moderation queue
} else {
  // Content approved - add to database
}

// Image moderation
const imageResult = await moderationApi.content.submit({
  content: {
    type: "image",
    url: "https://example.com/image.jpg",
  },
  // Optional content data
  contentId: "image-1",
  authorId: "user-123",
  metadata: {
    customField: "value",
  },
});

// Simple flagged check
if (imageResult.evaluation.flagged) {
  // Return error to user etc.
} else {
  // Add to database etc.
}

Content metadata

You can add metadata to the content you send for moderation. Some fields are used by the moderation pipeline to improve accuracy, while others enhance the dashboard experience.

contentId

Specify a contentId to associate the request with specific content. This is typically the content’s unique identifier from your database. If you don’t specify a contentId, the API generates a random ID for the content. When you include a contentId, submitting the same ID again updates the existing content. This is useful when using review queues - you can update content in the queue without creating duplicate items. The contentId can also be used to execute actions in the review queue programmatically. For example, you can allow users to report content on your platform and then add it to a review queue.
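
For example, submitting twice with the same contentId updates the item rather than creating a duplicate. A minimal sketch (the contentId value is illustrative):
// First submission when the comment is created
await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Check out my new gadget!",
  },
  contentId: "comment-789", // your database ID
});

// Resubmitting with the same contentId updates the existing
// item in place, e.g. after the user edits the comment
await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Check out my new gadget! (now on sale)",
  },
  contentId: "comment-789",
});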

conversationId

Use conversationId to group related content together, such as messages in a chatroom or comments on a post. If you’re using review queues, you can filter a queue by conversationId to show content from a specific conversation. With context awareness enabled, the conversationId also improves moderation accuracy by giving models access to earlier messages in the conversation.

authorId

Use authorId to identify the user who created the content. This enables user-level moderation in review queues and lets you filter by specific users. With context awareness enabled, the authorId also improves moderation accuracy by giving models access to the author’s previous messages.

metaType

Use metaType to specify what kind of content you’re moderating. This helps the API apply appropriate analysis:
Value      Use case
message    Chat messages, direct messages
post       Forum posts, social media posts
comment    Comments on posts or articles
review     Product or service reviews
profile    User profile information
product    Product listings
event      Event descriptions
other      Any other content type
const result = await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Great product, highly recommend!",
  },
  metaType: "review",
  authorId: "user-123",
});

channel

Use channel to route content to a specific channel configuration. If not provided, the project’s default channel is used.
const result = await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Hello world!",
  },
  channel: "high-risk-content",
});

metadata

Use metadata to attach any additional information to the request. This object can contain custom key-value pairs. Metadata is displayed in review queues and included in webhooks.
If you add a link in metadata, it will be clickable from the review queue. This is useful for linking back to the original content in your application.
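For example, a sketch with illustrative metadata fields (any key-value pairs work):
const result = await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Hello world!",
  },
  metadata: {
    source: "mobile-app", // illustrative custom field
    // Links are clickable from the review queue
    originalUrl: "https://example.com/posts/123",
  },
});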

Context awareness

Enable Context awareness in your channel settings, then include authorId and/or conversationId in your API requests. When context awareness is enabled, the API retrieves recent messages with the same conversationId or authorId and provides them to the model alongside the current message, allowing it to understand the full context before making a decision. LLM-based policies use the conversationId to see previous messages in the same conversation, and the authorId to see previous messages from the same author. This can catch unwanted content spread across multiple messages:
msg 1 -> f
msg 2 -> u
msg 3 -> c
msg 4 -> k [FLAGGED with context awareness]
It also helps understand messages in the context of a conversation:
user 1 -> What's the worst thing you know?
user 2 -> European people [FLAGGED with context awareness]
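In code, context awareness requires no extra parameters beyond conversationId and/or authorId; submit each message as it arrives and the API assembles the history. A minimal sketch of the first scenario above:
// Each message is submitted individually as it arrives
for (const text of ["f", "u", "c", "k"]) {
  const result = await moderationApi.content.submit({
    content: { type: "text", text },
    conversationId: "room-456",
    authorId: "user-123",
  });
  // With context awareness enabled, the final message can be
  // flagged even though each letter is harmless on its own
  console.log(result.evaluation.flagged);
}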

Content types

The /moderate endpoint accepts different content types through the content object:

Text

Text moderation is the most common type. Use it for:
  • Chat messages
  • Forum posts
  • Comments
  • Reviews
  • Product descriptions
  • Profile bios
const result = await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Hello world!",
  },
});
If you’re analyzing chat messages or thread-based content, enable context awareness for better accuracy.

Image

Image moderation analyzes visual content to detect inappropriate or harmful images, including nudity, violence, or other objectionable content.
const result = await moderationApi.content.submit({
  content: {
    type: "image",
    url: "https://example.com/image.jpg",
  },
});

Object

Object moderation analyzes multiple fields at once, useful for moderating entire entities like user profiles or product listings.
const result = await moderationApi.content.submit({
  content: {
    type: "object",
    data: {
      title: { type: "text", text: "Product name" },
      description: { type: "text", text: "Product description" },
      image: { type: "image", url: "https://example.com/product.jpg" },
    },
  },
  metaType: "product",
});
The response includes flagged_fields in each policy result, showing which specific fields triggered the flag.
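A hedged sketch of inspecting those per-policy results (the policies array name and shape here are assumptions; check the API reference for the exact response structure):
// Assumption: each policy result carries an id and a
// flagged_fields array listing the offending fields
for (const policy of result.evaluation?.policies ?? []) {
  if (policy.flagged_fields?.length) {
    console.log(policy.id, "flagged fields:", policy.flagged_fields);
  }
}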

Audio (enterprise)

Audio moderation analyzes audio content to detect inappropriate language or sounds. Useful for podcasts, voice messages, or other audio content.
const result = await moderationApi.content.submit({
  content: {
    type: "audio",
    url: "https://example.com/audio.mp3",
  },
});

Video (enterprise)

Video moderation analyzes video content to detect inappropriate scenes or actions, including violence, nudity, or other objectionable content.
const result = await moderationApi.content.submit({
  content: {
    type: "video",
    url: "https://example.com/video.mp4",
  },
});

Opt out of content store

Set doNotStore to true to prevent the content from being stored. The content will still be analyzed but won’t appear in the dashboard or review queues.
const result = await moderationApi.content.submit({
  content: {
    type: "text",
    text: "Hello world!",
  },
  doNotStore: true,
});
Setting doNotStore to true will make parts of the moderation dashboard less useful, as content won’t be available for review or analysis.
Do not disable content storage if you want to train or optimize models based on your data.