# Entity matchers

## Profanity matching

Detect and hide swear words in text.

Model key: `profanity`
To detect toxic language that does not contain swear words, such as "You are a monkey", use the toxicity classifier instead.
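As a rough sketch, selecting this model in a moderation request might look like the following. The field names (`models`, `value`) and the helper function are illustrative assumptions, not the actual API schema; only the model key `profanity` comes from this page.

```python
import json


def build_request(text: str, model_key: str = "profanity") -> str:
    """Serialize a moderation request that selects a model by its key.

    The payload shape here is a hypothetical example, not the real schema.
    """
    return json.dumps({"models": [model_key], "value": text})


# Build a request that runs the profanity matcher on a user comment.
payload = build_request("some user comment")
```

Check your API reference for the real endpoint and request format before using this shape.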
## Examples
| Value | Detected by | Matches |
|---|---|---|
| `fuck` | NORMAL, SUSPICIOUS, PARANOID | fuck |
| `ffuuccckkk` | SUSPICIOUS, PARANOID | ffuuccckkk |
| `kcuf` | PARANOID | kcuf |
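The table above can be read as three escalating strictness levels. As a toy sketch (not the service's actual algorithm), SUSPICIOUS might additionally collapse repeated characters, and PARANOID might additionally check reversed words; the word list and heuristics below are assumptions for illustration only.

```python
import re

# Minimal illustrative word list; the real model covers far more.
PROFANITY = {"fuck"}


def collapse_repeats(text: str) -> str:
    """Collapse runs of a repeated character: 'ffuuccckkk' -> 'fuck'."""
    return re.sub(r"(.)\1+", r"\1", text)


def detect(word: str, level: str) -> bool:
    """Toy detector mirroring the escalation shown in the examples table."""
    w = word.lower()
    if level == "NORMAL":
        return w in PROFANITY
    if level == "SUSPICIOUS":
        return collapse_repeats(w) in PROFANITY
    if level == "PARANOID":
        collapsed = collapse_repeats(w)
        # Also try the word reversed, catching obfuscations like 'kcuf'.
        return collapsed in PROFANITY or collapsed[::-1] in PROFANITY
    raise ValueError(f"unknown level: {level}")
```

This reproduces the table: `"fuck"` is caught at every level, `"ffuuccckkk"` from SUSPICIOUS up, and `"kcuf"` only at PARANOID.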
## Wordlist support

This model supports wordlists, so you can allow or block specific words. You can manage them with the wordlist editor.
## Supported languages

- English (`en`)
The model might work on other languages we haven't tested. Feel free to try it on languages that are not listed above and share your feedback.
## Limitations
This model does not have any API limitations.