Pre-built Models
Image moderation
Image moderation models are used to detect inappropriate content in images.
We offer three image moderation models:
- An NSFW model that detects a range of common inappropriate content
- A toxicity model that detects general toxic content
- A text model that detects text inside images
Keep in mind you need to add each model to your project separately.
If you need a label or model that is not currently supported, please contact us at support@moderationapi.com.
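For reference, the sketch below shows one way an image could be submitted for moderation over HTTP. The endpoint path, request body fields, and authentication header are illustrative assumptions, not the documented API surface; check your project settings for the actual values.

```typescript
// Minimal sketch of submitting an image for moderation.
// NOTE: the endpoint path, request body fields, and auth header below are
// hypothetical placeholders, not the documented API surface.
const API_KEY = process.env.MODERATION_API_KEY ?? "";

async function moderateImage(imageUrl: string): Promise<unknown> {
  const res = await fetch("https://moderationapi.com/api/v1/moderate/image", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    // Hypothetical payload: a publicly reachable image URL.
    body: JSON.stringify({ url: imageUrl }),
  });

  if (!res.ok) {
    throw new Error(`Moderation request failed: ${res.status}`);
  }
  return res.json();
}
```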
Response signature
The response includes:
- Whether the image was flagged by any of the models
- An object containing all the label scores
- The text detected in the image, if the text model is used
Image Moderation Response Example:
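The original example payload is not reproduced here; the sketch below only illustrates the shape described above. The field names (`flagged`, `labels`, `text`) are assumptions for illustration and may differ from the actual response signature.

```typescript
// Illustrative response shape only; field names are assumptions based on the
// descriptions above, not the documented signature.
interface ImageModerationResponse {
  flagged: boolean;                // whether any model flagged the image
  labels: Record<string, number>;  // score per label, e.g. { nudity: 0.02, ... }
  text?: string;                   // detected text, when the text model is enabled
}

const example: ImageModerationResponse = {
  flagged: true,
  labels: { toxicity: 0.12, nudity: 0.91, suggestive: 0.33, text: 0.05 },
  text: "SALE 50% OFF",
};
```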
Labels
| Label | Description |
| --- | --- |
| toxicity | Harmful content such as violence, offensive memes, hate, etc. |
| nudity | Exposed male or female genitalia, female nipples, sexual acts. |
| suggestive | Partial nudity, kissing. |
| gore | Blood, wounds, death. |
| violence | Graphic violence, causing harm, weapons, self-harm. |
| weapon | Weapons either in use or displayed. |
| drugs | Drugs such as pills. |
| hate | Symbols related to Nazi groups, terrorist organizations, white supremacy, and more. |
| smoking | Smoking or smoking-related content. |
| alcohol | Alcohol or alcohol-related content. |
| text | Text inside the picture. |
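In practice, label scores are usually compared against thresholds chosen per project. The snippet below is a sketch of that pattern, reusing the hypothetical response shape from the example above; the threshold values are arbitrary.

```typescript
// Sketch: decide which labels exceed project-specific thresholds.
// Both the label names and the threshold values are illustrative assumptions.
const thresholds: Record<string, number> = {
  nudity: 0.8,
  gore: 0.7,
  hate: 0.5,
};

function exceededLabels(labels: Record<string, number>): string[] {
  return Object.entries(labels)
    .filter(([label, score]) => score >= (thresholds[label] ?? 1))
    .map(([label]) => label);
}

// Example: exceededLabels({ nudity: 0.91, gore: 0.1 }) -> ["nudity"]
```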