Model key: toxicity
The model works on the whole text to detect general features such as profanity, swearing, racism, and threats. Unlike our profanity filter, the toxicity analyzer also detects cases where the profanity is less pronounced (see the sketch after the table below).
| Label | Description |
|---|---|
| TOXICITY | The general toxicity. If any other label has a high score, this one is likely to score high as well. |
| PROFANITY | Contains swearing, curse words, and other obscene language. |
| DISCRIMINATION | Racism and other discrimination based on race, religion, gender, etc. |
| INSULT | Insulting, inflammatory, or negative language. |
| SEVERE_TOXICITY | Very toxic, containing severe profanity, racism, etc. |
| THREAT | Threatening, bullying, or aggressive language. |
| NEUTRAL | Nothing toxic was detected. |
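As a rough illustration, the sketch below posts a piece of text with the `toxicity` model enabled and prints the highest-scoring label. The endpoint URL, parameter names (`text`, `lang`, `models`, `api_key`), and response layout are assumptions made for the example, not the documented API; adapt them to the real request format.

```python
import requests

# Hypothetical endpoint -- replace with the real API URL before use.
API_URL = "https://api.example.com/1.0/text/check.json"

def analyze_toxicity(text: str, lang: str = "en") -> dict:
    """Send text to the toxicity model and return its label scores.

    Assumes the response contains a mapping such as
    {"toxicity": {"TOXICITY": 0.91, "PROFANITY": 0.72, ..., "NEUTRAL": 0.02}}.
    """
    payload = {
        "text": text,
        "lang": lang,
        "models": "toxicity",       # model key from this section
        "api_key": "YOUR_API_KEY",  # placeholder credential
    }
    response = requests.post(API_URL, data=payload, timeout=10)
    response.raise_for_status()
    return response.json()["toxicity"]

if __name__ == "__main__":
    scores = analyze_toxicity("You are a complete idiot.")
    # TOXICITY tends to score high whenever any other label does;
    # NEUTRAL means nothing toxic was detected.
    top_label = max(scores, key=scores.get)
    print(top_label, scores[top_label])
```

In practice you would compare each label's score against a threshold suited to your use case; for a simple keep/flag decision, the TOXICITY score alone is often sufficient.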
Supported languages: `en`, `es`, `fr`, `de`, `it`, `pt`, `ru`, `ja`, `id`, `zh`, `nl`, `pl`, `sv`
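Because the model only covers the languages listed above, a small guard like the following (again just a sketch) can skip unsupported text before a request is made.

```python
# ISO 639-1 codes supported by the toxicity model, per the list above.
SUPPORTED_LANGUAGES = {
    "en", "es", "fr", "de", "it", "pt", "ru",
    "ja", "id", "zh", "nl", "pl", "sv",
}

def is_supported(lang_code: str) -> bool:
    """Return True if the toxicity model covers the given language code."""
    return lang_code.lower() in SUPPORTED_LANGUAGES
```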