We offer two image moderation models:

  • An NSFW model that detects a range of common inappropriate content.
  • A text model that detects text inside images.

If you need a label or model that is not currently supported, please contact us at support@moderationapi.com.

Response signature

flagged (boolean, required)

labels (array, required)
An array of objects containing the score for each label.

texts (string[])
The text detected in the image when the text model is used.
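
The response can be modeled as a small TypeScript type. This is an illustrative sketch based on the signature above, not an official SDK definition:

// Illustrative type for the image moderation response described above.
// Field names follow the response signature; this is not an official SDK type.
interface ImageModerationResponse {
  flagged: boolean;                            // true when the image is flagged
  labels: { label: string; score: number }[];  // one score per label, between 0 and 1
  texts?: string[];                            // present when the text model is used
}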

Image Moderation Response Example:
{
  "flagged": true,
  "labels": [
    {
      "label": "nudity",
      "score": 0.996117
    },
    {
      "label": "gore",
      "score": 0.034441
    },
    {
      "label": "suggestive",
      "score": 0.004936
    },
    {
      "label": "violence",
      "score": 0.00036
    },
    {
      "label": "weapon",
      "score": 0.000079
    },
    {
      "label": "drugs",
      "score": 0.000034
    },
    {
      "label": "hate",
      "score": 0.000032
    },
    {
      "label": "smoking",
      "score": 0.000018
    },
    {
      "label": "alcohol",
      "score": 0.000005
    },
    {
      "label": "text",
      "score": 0.000005
    }
  ]
}
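
As a rough sketch, a response like the one above could be fetched and parsed as follows. The endpoint URL, request body, and Authorization header below are placeholders, not the documented API surface; check the API reference for the actual request format:

// Sketch only: the endpoint path, request body, and auth scheme are placeholders.
const API_URL = "https://example.com/moderate/image"; // placeholder endpoint
const API_KEY = "YOUR_API_KEY";                       // placeholder credential

async function moderateImage(imageUrl: string): Promise<ImageModerationResponse> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,     // placeholder auth scheme
    },
    body: JSON.stringify({ url: imageUrl }),  // placeholder request body
  });
  if (!res.ok) throw new Error(`Moderation request failed: ${res.status}`);
  return (await res.json()) as ImageModerationResponse;
}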

Labels

Label         Description
nudity        Exposed male or female genitalia, female nipples, sexual acts.
suggestive    Partial nudity, kissing.
gore          Blood, wounds, death.
violence      Graphic violence, causing harm, weapons, self-harm.
weapon        Weapons either in use or displayed.
drugs         Drugs such as pills.
hate          Symbols related to Nazi groups, terrorist groups, white supremacy, and more.
smoking       Smoking or smoking-related content.
alcohol       Alcohol or alcohol-related content.
text          Text inside the picture.
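
The label scores can be combined with your own per-label thresholds to build a custom policy. A minimal sketch, where the threshold values are arbitrary examples rather than recommendations:

// Example per-label thresholds; the values are arbitrary, tune them to your policy.
const THRESHOLDS: Record<string, number> = {
  nudity: 0.8,
  gore: 0.8,
  violence: 0.9,
  weapon: 0.9,
};

// Returns the labels from the response that meet or exceed their configured threshold.
function violatedLabels(response: ImageModerationResponse): string[] {
  return response.labels
    .filter(({ label, score }) => score >= (THRESHOLDS[label] ?? 1))
    .map(({ label }) => label);
}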
