Ready-made Models
Image moderation
Image moderation models are used to detect inappropriate content in images.
We offer two image moderation models:
- An NSFW model that detects a range of common inappropriate content.
- A text model that detects text inside images.
If you need a label or model that is not currently supported, please contact us at support@moderationapi.com.
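To illustrate how an image might be submitted, here is a minimal sketch in TypeScript. The endpoint path, authentication header, and request body fields are assumptions for illustration only; refer to the API reference for the exact values.

```typescript
// Minimal sketch of submitting an image for moderation.
// NOTE: the endpoint URL, auth header, and body fields below are
// illustrative assumptions, not the documented API surface.
const API_KEY = process.env.MODERATION_API_KEY;

async function moderateImage(imageUrl: string) {
  const response = await fetch("https://moderationapi.com/api/v1/moderate/image", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    // Hypothetical body: a publicly reachable image URL to analyze.
    body: JSON.stringify({ url: imageUrl }),
  });
  return response.json();
}
```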
Response signature
Field | Type | Description |
---|---|---|
flagged | boolean (required) | Whether the image was flagged by any of the models. |
labels | array (required) | An array containing all the label scores. |
texts | string[] | The text detected in the image when the text model is used. |
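For reference, the response shape can be expressed as a TypeScript type. This is a sketch derived from the fields documented above, not an official SDK type.

```typescript
// Shape of the image moderation response, based on the documented fields.
interface LabelScore {
  label: string;  // e.g. "nudity", "gore", "text"
  score: number;  // confidence score between 0 and 1
}

interface ImageModerationResponse {
  flagged: boolean;      // true if any model flagged the image
  labels: LabelScore[];  // scores for every supported label
  texts?: string[];      // text detected in the image (text model only)
}
```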
Image Moderation Response Example:
```json
{
  "flagged": true,
  "labels": [
    { "label": "nudity", "score": 0.996117 },
    { "label": "gore", "score": 0.034441 },
    { "label": "suggestive", "score": 0.004936 },
    { "label": "violence", "score": 0.00036 },
    { "label": "weapon", "score": 0.000079 },
    { "label": "drugs", "score": 0.000034 },
    { "label": "hate", "score": 0.000032 },
    { "label": "smoking", "score": 0.000018 },
    { "label": "alcohol", "score": 0.000005 },
    { "label": "text", "score": 0.000005 }
  ]
}
```
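A common way to consume this response is to reject the image when `flagged` is true, or when any label you care about scores above a threshold of your choosing. The label set and threshold below are application-side choices, not API defaults.

```typescript
// Sketch of acting on the response: block the image when the API flags it,
// or when any selected label exceeds a custom threshold.
const BLOCKED_LABELS = new Set(["nudity", "gore", "violence"]);
const THRESHOLD = 0.8; // application-specific choice

function isImageAllowed(result: ImageModerationResponse): boolean {
  if (result.flagged) return false;
  return !result.labels.some(
    ({ label, score }) => BLOCKED_LABELS.has(label) && score >= THRESHOLD
  );
}
```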
Labels
Label | Description |
---|---|
nudity | Exposed male or female genitalia, female nipples, sexual acts. |
suggestive | Partial nudity, kissing. |
gore | Blood, wounds, death. |
violence | Graphic violence, causing harm, weapons, self-harm. |
weapon | Weapons either in use or displayed. |
drugs | Drugs such as pills. |
hate | Symbols related to Nazism, terrorist groups, white supremacy, and more. |
smoking | Smoking or smoking related content. |
alcohol | Alcohol or alcohol related content. |
text | Text inside the picture. |