Model key: profanity

This model detects profanity and swear words. To catch toxic language that contains no swear words, such as “You are a monkey”, use the toxicity classifier instead.

Response example
{
  "found": true,
  "mode": "NORMAL",
  "matches": ["bollocks"]
}
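
To show the response in context, here is a minimal sketch of calling the model from TypeScript and acting on the result. The endpoint URL, auth header, and request body shape are assumptions for illustration only; the response fields (found, mode, matches) come from the example above.

// Hypothetical sketch: the URL, header, and body shape are assumptions,
// not the documented API. Only the response shape matches the docs.
interface ProfanityResponse {
  found: boolean;
  mode: "NORMAL" | "SUSPICIOUS" | "PARANOID";
  matches: string[];
}

async function checkProfanity(text: string): Promise<ProfanityResponse> {
  const res = await fetch("https://example.com/api/v1/moderate/text", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_API_KEY", // placeholder key
    },
    body: JSON.stringify({ value: text }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as ProfanityResponse;
}

// Usage: block the message if any profanity was matched.
checkProfanity("what the bollocks")
  .then((r) => {
    if (r.found) console.log(`Blocked (${r.mode}):`, r.matches.join(", "));
  })
  .catch(console.error);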

Examples

Value          Detected by                    Matches
“fuck”         NORMAL, SUSPICIOUS, PARANOID   fuck
“ffuuccckkk”   SUSPICIOUS, PARANOID           ffuuccckkk
“kcuf”         PARANOID                       kcuf
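
To illustrate why the stricter modes catch obfuscated spellings like “ffuuccckkk” and “kcuf”, here is a toy TypeScript sketch of the repeated-letter and reversal ideas. This is a conceptual illustration of the table above, not the model's actual detection algorithm.

// Toy illustration only; not the model's real logic.
const collapseRepeats = (s: string) => s.replace(/(.)\1+/g, "$1");

const wordlist = new Set(["fuck"]);

// A NORMAL-style exact check misses the obfuscated spelling...
console.log(wordlist.has("ffuuccckkk")); // false
// ...while collapsing repeated letters (SUSPICIOUS-style) catches it.
console.log(wordlist.has(collapseRepeats("ffuuccckkk"))); // true
// Reversal handling (PARANOID-style) can also check the reversed string.
console.log(wordlist.has([..."kcuf"].reverse().join(""))); // true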

Wordlist support

This model supports wordlists, so you can allow or block specific words. You can manage these words in the wordlist editor.
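
As a rough illustration of what an allowlist does, the sketch below post-filters the model's matches on the client side. This is purely conceptual; the documented way to manage words is the wordlist editor, and the names here are made up.

// Illustrative only: filtering matches against a custom allowlist.
const allowlist = new Set(["bollocks"]); // words you choose to permit

function effectiveMatches(matches: string[]): string[] {
  return matches.filter((m) => !allowlist.has(m.toLowerCase()));
}

console.log(effectiveMatches(["bollocks", "fuck"])); // ["fuck"]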

Supported languages

  • English en

The model might work on other languages that we haven’t tested. Feel free to try it on languages not listed above and share your feedback with us.

Limitations

This model does not have any API limitations.
