Introduction

Enabling user management in Moderation API solves some of the most common moderation challenges for community platforms. This is done by introducing the concept of authors (users) in the moderation process. Each author has a trust level that is automatically adjusted based on their behavior and moderation history. User management lets you:
  • Tailor and improve moderation accuracy based on trust levels
  • Skip moderation for trusted users
  • Block or suspend users
  • Detect fraud using behavioral analysis

Get started

Just add an authorId when submitting content to a moderation endpoint. We recommend using the user ID from your own system, but any unique identifier, such as an email address or username, will work. Your users will then appear in the user dashboard.
Note that there may be a delay between submission and user creation or trust level calculation, because users are processed in batches to optimize performance.

Typical workflow

Here’s a common workflow for implementing user management in your moderation system:

1. Submit content with author ID

Start by including an authorId when submitting content for moderation:
// Submit content for moderation, attributing it to the author "user_123"
const response = await fetch("/api/v1/moderate/text", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    content: "User's message content",
    authorId: "user_123",
  }),
});
Authors are automatically created on first submission.

2. Sync author details

Add additional author information to improve trust scoring and enable fraud detection:
await fetch("/api/v1/authors/user_123", {
  method: "PUT",
  headers: { Authorization: "Bearer YOUR_API_KEY" },
  body: JSON.stringify({
    name: "John Doe",
    email: "john@example.com",
    metadata: {
      email_verified: true,
      is_paying_customer: true,
    },
  }),
});

3. Monitor trust levels

Check the user dashboard or use the API to monitor trust levels (an API-based sketch follows this list):
  • New users (Level 0): Extra scrutiny for first-time posters
  • Established users (Level 2+): Reduced moderation overhead
  • Problematic users (Level -1): Automatic flagging for review
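For example, you can branch on an author's trust level to decide how strictly to moderate. A minimal sketch, assuming the author object returned by the authors endpoint (used in step 5 below) exposes a trustLevel field, and that queueForReview and moderateThenPublish are your own helpers; the exact field name may differ, so check the API reference:

// Fetch the author and adjust handling based on trust level
const res = await fetch(`/api/v1/authors/${userId}`, {
  headers: { Authorization: "Bearer YOUR_API_KEY" },
});
const author = await res.json();

// NOTE: "trustLevel" is an assumed field name for illustration
if (author.trustLevel >= 2) {
  // Established user: publish immediately, moderate asynchronously
  await publishContent(content);
} else if (author.trustLevel === -1) {
  // Problematic user: hold content for manual review
  await queueForReview(content);
} else {
  // New user: wait for the moderation verdict before publishing
  await moderateThenPublish(content);
}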

4. Take moderation actions

[User actions dashboard]
When you identify problematic users, take action from the user dashboard, for example by blocking or suspending them. The user's status will be updated, and your application can respond accordingly.

5. Check user status in your app

Before allowing users to post, check their status and respond appropriately:
// Look up the author's current status before accepting a new post
const response = await fetch(`/api/v1/authors/${userId}`, {
  headers: { Authorization: "Bearer YOUR_API_KEY" },
});
const userData = await response.json();

if (userData.status !== "enabled") {
  if (userData.status === "suspended") {
    const suspendedUntil = new Date(userData.block.until);
    throw new Error(
      `Account suspended until ${suspendedUntil.toLocaleDateString()}`
    );
  } else if (userData.status === "blocked") {
    throw new Error("Account has been permanently blocked");
  }
}

// User is enabled - allow posting
await publishContent(content);
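If you use Express, this check fits naturally into middleware so every posting route is guarded consistently. A minimal sketch, using the same authors endpoint as above; requireEnabledAuthor is a hypothetical helper name, not part of the API:

// Hypothetical Express middleware rejecting posts from non-enabled users
async function requireEnabledAuthor(req, res, next) {
  const { userId } = req.body;
  const apiRes = await fetch(`/api/v1/authors/${userId}`, {
    headers: { Authorization: "Bearer YOUR_API_KEY" },
  });
  const userData = await apiRes.json();

  if (userData.status !== "enabled") {
    // Return the status so the client can show an appropriate message
    return res
      .status(403)
      .json({ error: "posting_disabled", status: userData.status });
  }
  next();
}

// Usage: app.post("/posts", requireEnabledAuthor, createPostHandler);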

6. Handle webhook notifications

Set up webhooks to automatically notify users of moderation actions:
// Webhook handler for user actions.
// Assumes an Express app with JSON body parsing enabled:
// app.use(express.json());
app.post("/webhook/user-actions", (req, res) => {
  const { action, userId, reason, duration } = req.body;

  if (action === "AUTHOR_BLOCK_TEMP") {
    // Notify the suspended user via your own email helper.
    // "duration" is assumed to be in milliseconds here.
    sendEmail(userId, {
      subject: "Account Temporarily Suspended",
      body: `Your account has been suspended. Reason: ${reason}`,
      unsuspendDate: new Date(Date.now() + duration),
    });
  }

  // Acknowledge receipt so the webhook isn't retried
  res.status(200).send("OK");
});