Pre-built Models
Image moderation
Image moderation models are used to detect inappropriate content in images.
We offer two image moderation models:
- An NSFW model that detects a range of common inappropriate content.
- A text model that detects text inside images.
If you need a label or model that is not currently supported, please contact us at support@moderationapi.com.
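For context, a minimal sketch of submitting an image for analysis is shown below. The endpoint path, request body shape, and authorization header format are assumptions for illustration and may differ from the actual API; check your project settings for the real values.

```typescript
// Minimal sketch only: the endpoint path, body shape, and auth header
// below are assumptions for illustration, not the documented API surface.
const API_KEY = "YOUR_API_KEY"; // placeholder

async function moderateImage(imageUrl: string): Promise<unknown> {
  const response = await fetch("https://moderationapi.com/api/v1/moderate/image", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ url: imageUrl }),
  });

  if (!response.ok) {
    throw new Error(`Image moderation request failed: ${response.status}`);
  }

  // Expected response shape (see "Response signature" below):
  // { flagged: boolean, labels: [...], texts?: string[] }
  return response.json();
}
```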
Response signature
- flagged (boolean, required)
- labels (array, required): An object containing all the label scores.
- texts (string[]): The text which was detected in the image if the text model is used.
Image Moderation Response Example:
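Below is an illustrative sketch of such a response. All values are placeholders, and the { label, score } shape of each entry in labels is an assumption based on the fields described above.

```typescript
// Illustrative only: placeholder values; the { label, score } entry shape
// is an assumption based on the response signature above.
const exampleResponse = {
  flagged: true,
  labels: [
    { label: "nudity", score: 0.97 },
    { label: "suggestive", score: 0.62 },
    { label: "gore", score: 0.01 },
  ],
  texts: ["TEXT FOUND IN THE IMAGE"],
};
```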
Labels
| Label | Description |
|---|---|
| nudity | Exposed male or female genitalia, female nipples, sexual acts. |
| suggestive | Partial nudity, kissing. |
| gore | Blood, wounds, death. |
| violence | Graphic violence, causing harm, weapons, self-harm. |
| weapon | Weapons either in use or displayed. |
| drugs | Drugs such as pills. |
| hate | Symbols related to Nazi groups, terrorist groups, white supremacy, and more. |
| smoking | Smoking or smoking-related content. |
| alcohol | Alcohol or alcohol-related content. |
| text | Text inside the image. |
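As a usage sketch, one way to act on these label scores is to block an image when any sensitive label exceeds a chosen threshold. The 0.8 threshold, the label selection, and the { label, score } entry shape are assumptions for illustration, not part of the documented response.

```typescript
// Sketch: block an image when any sensitive label scores above a threshold.
// The threshold, label selection, and { label, score } shape are assumptions.
type LabelScore = { label: string; score: number };

const SENSITIVE_LABELS = new Set(["nudity", "gore", "violence", "hate"]);

function shouldBlock(labels: LabelScore[], threshold = 0.8): boolean {
  return labels.some(
    ({ label, score }) => SENSITIVE_LABELS.has(label) && score >= threshold,
  );
}
```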