Model key: profanity
To detect toxic language that doesn't contain swear words, such as “You are a monkey”, use the
toxicity classifier instead.
Response example
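The response body itself appears to be missing from this section; the sketch below is a hypothetical reconstruction inferred from the mode and match columns in the Examples table that follows. All field names are assumptions, not the documented schema.

```typescript
// Hypothetical response shape for the profanity model.
// The field names (label, mode, matches) are guesses inferred from
// the Examples table below, not the documented schema.
interface ProfanityResponse {
  label: "PROFANITY" | "NEUTRAL"; // assumed overall verdict
  mode: "NORMAL" | "SUSPICIOUS" | "PARANOID"; // assumed detection mode in effect
  matches: string[]; // words that triggered the detection
}

// What a response for the input "fuck" might look like:
const example: ProfanityResponse = {
  label: "PROFANITY",
  mode: "NORMAL",
  matches: ["fuck"],
};
```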
Examples
| Value | Detected by | Matches |
|---|---|---|
| "fuck" | NORMAL, SUSPICIOUS, PARANOID | fuck |
| "ffuuccckkk" | SUSPICIOUS, PARANOID | ffuuccckkk |
| "kcuf" | PARANOID | kcuf |
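To make the difference between the modes concrete, here is a small self-contained TypeScript sketch of how the three levels could progressively normalize input before matching. It only illustrates the table above; the real model's matching is more sophisticated, and the helpers here are invented for this example.

```typescript
// Illustrative only: a simplified take on how the three detection
// modes could escalate. Not the model's actual implementation.
const blocklist = ["fuck"];

// Collapse runs of repeated letters: "ffuuccckkk" -> "fuck"
const collapseRepeats = (s: string) => s.replace(/(.)\1+/g, "$1");

// Reverse the string: "kcuf" -> "fuck"
const reverse = (s: string) => [...s].reverse().join("");

type Mode = "NORMAL" | "SUSPICIOUS" | "PARANOID";

function detect(value: string, mode: Mode): string[] {
  const candidates = new Set([value.toLowerCase()]);
  if (mode === "SUSPICIOUS" || mode === "PARANOID") {
    candidates.add(collapseRepeats(value.toLowerCase()));
  }
  if (mode === "PARANOID") {
    candidates.add(reverse(value.toLowerCase()));
  }
  return blocklist.filter((word) => candidates.has(word));
}

console.log(detect("fuck", "NORMAL"));           // ["fuck"]
console.log(detect("ffuuccckkk", "NORMAL"));     // []
console.log(detect("ffuuccckkk", "SUSPICIOUS")); // ["fuck"]
console.log(detect("kcuf", "PARANOID"));         // ["fuck"]
```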
Wordlist support
This model supports wordlists if you want to allow or block specific words. You can use the wordlist editor to manage them; see the sketch below.
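As a rough illustration of allow/block semantics, this hypothetical sketch layers a custom wordlist on top of the model's matches. The Wordlist shape and applyWordlist helper are invented for illustration; in practice you manage wordlists through the wordlist editor.

```typescript
// Hypothetical sketch: layering an allow/block wordlist on top of
// the model's matches. Types and helper are invented for illustration.
interface Wordlist {
  allow: string[]; // never flagged, even if the model matches them
  block: string[]; // always flagged when they appear in the text
}

function applyWordlist(text: string, matches: string[], list: Wordlist): string[] {
  const allowed = new Set(list.allow.map((w) => w.toLowerCase()));
  const kept = matches.filter((m) => !allowed.has(m.toLowerCase()));
  const extra = list.block.filter((w) => text.toLowerCase().includes(w.toLowerCase()));
  return [...new Set([...kept, ...extra])];
}

// "darn" is flagged even though the model would not catch it:
console.log(applyWordlist("darn it", [], { allow: [], block: ["darn"] })); // ["darn"]
```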
Supported languages
- English (en)
The model might work on other languages we haven't tested. Feel free to try
it on languages that are not listed above and provide us with feedback.