Classifiers overview
Classifiers assign a category label to a piece of text. For example, a sentiment classifier can label a text as positive or negative.
Response signature
Each classifier model returns an object with the detected label and the respective scores. This object is added to the API response under the model’s key.
label: The most probable label, which is always the label with the highest score. Returns null if the analyzer fails.
score: The score of the detected label, from 0 to 1, with 1 indicating high confidence that the label is correct.
scores: An object containing the scores for all labels.
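As a sketch of how a client might read this signature, the snippet below builds a hypothetical response for a sentiment model and extracts the three fields. The model key ("sentiment") and the label names are assumptions for illustration; check the actual keys your enabled models return.

```python
# Hypothetical API response; real key and label names may differ.
response = {
    "sentiment": {
        "label": "positive",   # most probable label (null/None on failure)
        "score": 0.92,         # confidence of that label, 0 to 1
        "scores": {            # scores for every label
            "positive": 0.92,
            "negative": 0.05,
            "neutral": 0.03,
        },
    }
}

result = response["sentiment"]
best = result["label"]

# The detected label is always the highest-scoring entry in "scores".
assert best == max(result["scores"], key=result["scores"].get)
print(best, result["score"])
```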
Pre-built text classifiers
Toxicity
Detects toxic content.
NSFW
Detects NSFW content.
Propriety
Detects inappropriate content.
Sentiment
Positive, negative, or neutral.
Spam
Detects spam.
Language
Detects the language.
Sexual
Detects sexual content.
Discrimination
Detects discriminatory content.
Self-harm
Detects content related to self-harm.
Violence
Detects content related to violence.
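Several of the classifiers above are natural building blocks for content moderation. The sketch below shows one way to flag a text from a single model's result object, following the response signature described earlier. The safe-label names and the threshold are assumptions, not part of the API; verify the actual label strings each model returns.

```python
# Assumed "negative" (safe) label names; verify against your models' output.
SAFE_LABELS = {"not toxic", "sfw", "not spam"}

def is_flagged(result, threshold=0.8):
    """Flag a single model's result object from the response signature.

    `result` has the shape {"label": ..., "score": ..., "scores": {...}}.
    Returns True when a non-safe label was detected with high confidence.
    """
    if result["label"] is None:  # analyzer failed; nothing to act on
        return False
    return result["label"] not in SAFE_LABELS and result["score"] >= threshold

toxic = {"label": "toxic", "score": 0.95, "scores": {"toxic": 0.95, "not toxic": 0.05}}
clean = {"label": "not toxic", "score": 0.90, "scores": {"toxic": 0.10, "not toxic": 0.90}}
print(is_flagged(toxic), is_flagged(clean))
```

A per-model threshold (stricter for self-harm or violence, looser for spam) is a common refinement of this pattern.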