We offer three image moderation models:

  • An NSFW model that detects a range of common inappropriate content
  • A toxicity model that detects general toxic content
  • A text model that detects text inside images

Keep in mind you need to add each model to your project separately.

If you need a label or model that is not currently supported, please contact us at support@moderationapi.com.
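As a rough sketch of how a request might look in TypeScript (the endpoint path, Bearer auth header, and the `url` payload field are assumptions for illustration and may differ from your project's setup):

```typescript
// Hypothetical request sketch — the endpoint URL, auth scheme, and the
// `url` payload field are assumptions, not confirmed API details.
async function moderateImage(imageUrl: string): Promise<unknown> {
  const res = await fetch("https://moderationapi.com/api/v1/moderate/image", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MODERATION_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  if (!res.ok) {
    throw new Error(`Moderation request failed with status ${res.status}`);
  }
  return res.json();
}
```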

Response signature

flagged (boolean, required)
Whether the image was flagged by any of the models.

labels (array, required)
An array of label objects, each containing a label name and its score.

texts (string[], optional)
The text detected in the image when the text model is used.

Image Moderation Response Example:
{
  "flagged": true,
  "labels": [
    {
      "label": "toxicity",
      "score": 0.996117
    },
    {
      "label": "nudity",
      "score": 0.996117
    },
    {
      "label": "gore",
      "score": 0.034441
    },
    {
      "label": "suggestive",
      "score": 0.004936
    },
    {
      "label": "violence",
      "score": 0.00036
    },
    {
      "label": "weapon",
      "score": 0.000079
    },
    {
      "label": "drugs",
      "score": 0.000034
    },
    {
      "label": "hate",
      "score": 0.000032
    },
    {
      "label": "smoking",
      "score": 0.000018
    },
    {
      "label": "alcohol",
      "score": 0.000005
    },
    {
      "label": "text",
      "score": 0.000005
    }
  ]
}
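A minimal sketch of consuming this response, with types mirroring the signature documented above (the 0.5 cutoff is an application-level choice, not part of the API):

```typescript
// Types mirroring the documented response signature.
interface LabelScore {
  label: string;
  score: number;
}

interface ImageModerationResponse {
  flagged: boolean;
  labels: LabelScore[];
  texts?: string[]; // only present when the text model is added to the project
}

// Example of reading the response. The 0.5 cutoff is an arbitrary
// application-level choice, not an API default.
function summarize(response: ImageModerationResponse): void {
  if (response.flagged) {
    const triggered = response.labels
      .filter((l) => l.score >= 0.5)
      .map((l) => l.label);
    console.log(`Image flagged for: ${triggered.join(", ")}`);
  } else {
    console.log("Image passed moderation.");
  }

  if (response.texts?.length) {
    console.log("Detected text:", response.texts.join(" "));
  }
}
```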

Labels

Label        Description
toxicity     Harmful content such as violence, offensive memes, hate, etc.
nudity       Exposed male or female genitalia, female nipples, sexual acts.
suggestive   Partial nudity, kissing.
gore         Blood, wounds, death.
violence     Graphic violence, causing harm, weapons, self-harm.
weapon       Weapons either in use or displayed.
drugs        Drugs such as pills.
hate         Symbols related to nazi, terrorist groups, white supremacy and more.
smoking      Smoking or smoking-related content.
alcohol      Alcohol or alcohol-related content.
text         Text inside the picture.
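If your application needs different sensitivity per label, the scores can be compared against a per-label threshold map. A sketch using the `LabelScore` type from the earlier example (the threshold values and the blocking decision are illustrative assumptions, not API defaults):

```typescript
// Per-label cutoffs — illustrative values only, not API defaults.
const blockAbove: Record<string, number> = {
  nudity: 0.8,
  gore: 0.8,
  hate: 0.7,
  violence: 0.7,
  suggestive: 0.95,
};

// Block the image if any configured label meets or exceeds its cutoff.
function shouldBlock(labels: LabelScore[]): boolean {
  return labels.some((l) => l.score >= (blockAbove[l.label] ?? Infinity));
}
```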
