Prerequisites

You'll need an API key to use the Moderation API. To get one, you'll need to create a project.


1. Install SDK

We currently don’t have an official SDK for Go, but you can use the OpenAPI Generator to generate a Go client or simply call the API directly using Go’s standard net/http package.

2. Submit content

Grab the API key from your project and begin submitting text, images, or other media to your project for moderation.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Configure the client
    apiKey := "your-api-key" // Replace with your API key
    baseURL := "https://moderationapi.com/api/v1"

    // Prepare request body
    requestBody, err := json.Marshal(map[string]interface{}{
        "value":     "Hello world!",
        "authorId":  "123",
        "contextId": "456",
        "metadata": map[string]interface{}{
            "customField": "value",
        },
    })
    if err != nil {
        panic(err)
    }

    // Create request
    req, err := http.NewRequest("POST", baseURL+"/moderate/text", bytes.NewBuffer(requestBody))
    if err != nil {
        panic(err)
    }

    // Set headers
    req.Header.Set("Authorization", "Bearer "+apiKey)
    req.Header.Set("Content-Type", "application/json")

    // Send request
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // Read response
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }

    // Parse response
    var textAnalysis map[string]interface{}
    if err := json.Unmarshal(body, &textAnalysis); err != nil {
        panic(err)
    }

    if textAnalysis["flagged"].(bool) {
        fmt.Println("Text content flagged")
        // Block the content, show an error, etc...
    } else {
        fmt.Println("Text content is safe.")
        // Save to database or proceed...
    }
}

Dry-run mode: If you want to analyze production data but don't want to block content, enable "dry-run" in your project settings.

With dry-run enabled, the API still analyzes content but it always returns flagged: false - yet content still shows in the review queue. This way you can implement your moderation workflows and start testing your project configuration without actually blocking content.


3. Review flagged content (optional)

If the AI flags the content, it will appear in the Review Queue.

Head to the review queue to validate that the content is submitted correctly.

Review queue for reviewing and improving automated moderation

You can use review queues to implement moderation workflows or simply check how the AI is performing.


All Done!

Congratulations! You've run your first moderation checks. Here are a few next steps:

  • Continue tweaking your project settings and models to find the best moderation outcomes.
  • Create an AI agent and add your guidelines to it.
  • Explore advanced features like context-aware moderation.
  • If you have questions, reach out to our support team.

We love hearing from you—please share how you're using the service and let us know if you have suggestions or need help. Happy moderating!

Was this page helpful?