Model key: violence

Response example:

  {
    "label": "VIOLENCE",
    "score": 0.99194,
    "label_scores": {
      "VIOLENCE": 0.99194,
      "NEUTRAL": 0.00806
    }
  }

Labels

  VIOLENCE: The content contains violent intent, or depicts violence.
  NEUTRAL: The content does not contain violence.
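The response above can be consumed directly as JSON. Here is a minimal sketch of reading the model's output and applying a moderation threshold; the `is_violent` helper and the 0.5 default threshold are illustrative, not part of the API:

```python
import json

# A response from the violence model, matching the shape shown above.
raw = """
{
  "label": "VIOLENCE",
  "score": 0.99194,
  "label_scores": {
    "VIOLENCE": 0.99194,
    "NEUTRAL": 0.00806
  }
}
"""

def is_violent(response: dict, threshold: float = 0.5) -> bool:
    # Flag content when the VIOLENCE score meets the chosen threshold.
    # 0.5 is a placeholder; tune it to your moderation policy.
    return response["label_scores"]["VIOLENCE"] >= threshold

response = json.loads(raw)
print(response["label"], is_violent(response))  # VIOLENCE True
```

Note that `score` always equals the score of the top `label`, so thresholding on `label_scores["VIOLENCE"]` works regardless of which label wins.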

Supported languages

This model works in the following languages:

  • English en
  • Spanish es
  • French fr
  • German de
  • Italian it
  • Portuguese pt
  • Russian ru
  • Japanese ja
  • Indonesian id
  • Chinese zh
  • Dutch nl
  • Polish pl
  • Swedish sv

The model might also work in languages we haven't tested. Feel free to try it on languages not listed above and send us feedback.


This model does not have any API limitations.
