Moderation actions give you direct controls for managing problematic users in your community. Common scenarios for moderation actions:
  • Block persistent violators identified through trust levels or fraud detection
  • Remove spam accounts posting promotional content repeatedly
  • Suspend users to send clear signals about acceptable behavior
Actions can be executed manually from the user dashboard or programmatically via the Author API. They complement content moderation by addressing the user behind the content, helping you stop persistent abuse and enforce community standards.
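If you execute actions via the API, a call might look roughly like the sketch below; the endpoint path, payload fields, and auth header are placeholders rather than the documented Author API surface, so substitute the routes from your API reference:
// Hypothetical sketch of blocking a user programmatically.
// The endpoint, payload, and auth header are assumptions — not the
// documented Author API routes.
async function blockUser(userId: string, reason: string): Promise<void> {
  const res = await fetch(`https://api.example.com/authors/${userId}/block`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MODERATION_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ reason }),
  });
  if (!res.ok) {
    throw new Error(`Block request failed: ${res.status}`);
  }
}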

Built-in actions

The system provides three built-in moderation actions for account management:
  • Block: restrict a user's access, either permanently or until a set timestamp
  • Suspend: temporarily restrict a user for a selected duration or a custom end time
  • Enable: restore a blocked or suspended user to normal status

Action customization

You can customize how actions appear and behave.

Value customization:
  • Predefined values: Add custom dropdown options like “Spam”, “Harassment”, “Fraud”, “Policy Violation”
  • Free text option: Enable “Other” option that shows a text input field for custom reasons
  • Combined approach: Mix predefined categories with free text for flexibility

Adding custom actions

Beyond the built-in actions, you can create custom author-level actions for specialized workflows. Use cases for custom actions:
  • Warning systems: Formal warnings that don’t restrict access
  • Verification requirements: Require additional account verification
  • Feature restrictions: Limit specific platform features
  • Escalation triggers: Automatically escalate to senior moderators
To create custom actions, choose “Author level action” when creating a new action in the dashboard. These appear alongside built-in actions on user detail pages.

Executing actions

  1. Navigate to the user detail page in the user dashboard
  2. Select the appropriate action (Block, Suspend, or Enable)
  3. Choose from predefined reason options or select “Other” for free text
  4. For suspensions: select duration from dropdown or set custom end time
  5. Confirm the action
Reason field importance:
  • Required for documentation and potential appeals
  • Visible to moderators in user timeline
  • Returned in API responses for application logic
  • Helps identify patterns in violation types
Timeline
User timeline showing content submissions and moderation actions
All moderation actions are automatically tracked in the user’s timeline in the dashboard:
  • Action history: Complete chronological record of all actions taken
  • Moderator attribution: Which team member executed each action
  • Reason logging: Full context for every moderation decision
  • Audit trail: Comprehensive record for compliance and appeals
This timeline helps moderators understand user history and make informed decisions about future actions.

Technical implementation

Moderation actions modify several fields returned in API responses, allowing seamless integration with your application logic.

Status fields

Monitor user status through API responses:
{
  "id": "user123",
  "status": "suspended", // "enabled", "blocked", "suspended"
  "block": {
    "until": 1705320000000, // timestamp in ms, null for permanent blocks
    "reason": "Harassment of other users"
  }
}
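For example, your application can read these fields to decide whether to accept activity and to explain restrictions to the user. A minimal TypeScript sketch, using only the fields shown in the response above:
interface UserStatus {
  id: string;
  status: "enabled" | "blocked" | "suspended";
  block?: {
    until: number | null; // timestamp in ms, null for permanent blocks
    reason: string;
  } | null;
}

// Returns a human-readable explanation for a restricted account,
// or null if the user is enabled.
function describeRestriction(user: UserStatus): string | null {
  if (user.status === "enabled") return null;
  const reason = user.block?.reason ?? "No reason provided";
  const until = user.block?.until;
  return until
    ? `Account ${user.status} until ${new Date(until).toISOString()}: ${reason}`
    : `Account ${user.status} permanently: ${reason}`;
}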

Content submission integration

The system automatically handles blocked/suspended users when content is submitted to moderation endpoints, but you can also check status proactively in your application by querying the user in the Author API.
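For example, a pre-submission check might look like the sketch below; the endpoint path and auth header are placeholders rather than the actual Author API route:
// Sketch of a proactive status check before accepting a submission.
// Replace the placeholder endpoint with the real Author API route.
async function assertUserCanPost(userId: string): Promise<void> {
  const res = await fetch(`https://api.example.com/authors/${userId}`, {
    headers: { Authorization: `Bearer ${process.env.MODERATION_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Status lookup failed: ${res.status}`);

  const user = (await res.json()) as {
    status: "enabled" | "blocked" | "suspended";
    block?: { until: number | null; reason: string } | null;
  };

  if (user.status !== "enabled") {
    const reason = user.block?.reason ? ` (${user.block.reason})` : "";
    throw new Error(`Submission rejected: user is ${user.status}${reason}`);
  }
}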

Webhooks

Receive real-time notifications when moderation actions are executed through webhook integration.

Common webhook use cases:
  • Sync with external systems like customer support platforms
  • Trigger email notifications to affected users
  • Update internal user databases with moderation status
  • Log actions for compliance and audit requirements
  • Alert security teams about fraud-related blocks
Configure webhooks to receive instant notifications whenever blocks, suspensions, or other moderation actions are executed.
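A minimal receiver might look like the following sketch (Express is used for brevity; the event payload shape is an assumption, so check the webhook documentation for the actual event names and fields, and verify webhook signatures before trusting requests):
import express from "express";

const app = express();
app.use(express.json());

// Sketch of a webhook receiver for moderation-action events.
// The payload fields below are illustrative, not the documented schema.
app.post("/webhooks/moderation", (req, res) => {
  const event = req.body as {
    type?: string;
    user?: { id: string; status: string };
    reason?: string;
  };

  if (event.user?.status === "blocked" || event.user?.status === "suspended") {
    // Example follow-ups: sync a support platform, email the user,
    // update your own user table, or alert the security team.
    console.log(`User ${event.user.id} is now ${event.user.status}: ${event.reason ?? "no reason"}`);
  }

  res.sendStatus(200); // acknowledge quickly; do heavy work asynchronously
});

app.listen(3000);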

FAQ