Depending on your plan, you might be limited in the number of models you can add per project.

Depending on what you want to do with your project, you will need to add different models. For example, if you want to use the project for sentiment analysis, you will need to add a sentiment analysis model. If you want to detect swear words, there’s a profanity model for that.

Use multiple models to get a more complete analysis of your content. For example, you can use the toxicity model to detect toxic content and the NSFW model to detect sensitive content.

Each model extends the API response with a new field, keyed by the model's name and containing that model's result. For example, if you add the toxicity model, the API response will include a `toxicity` field with the result of the toxicity analysis.

```json
{
  // ...
  "flagged": false,
  "original": "I like ice cream",
  "toxicity": {
    "label": "NEUTRAL",
    "score": 0.977389501,
    "label_scores": {
      "TOXICITY": 0.022610499,
      "PROFANITY": 0.016821137,
      "INSULT": 0.008937885,
      "THREAT": 0.0084793,
      "DISCRIMINATION": 0.0048097214,
      "SEVERE_TOXICITY": 0.0018978119,
      "NEUTRAL": 0.977389501
    }
  }
}
```
Find the details about each available model in the models section of the documentation.
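Because every model reports its result under its own key, client code can iterate over the response to summarize all enabled models at once. Below is a minimal sketch in Python; the `summarize` helper and its treatment of `flagged` and `original` as non-model fields are assumptions for illustration, not part of the official API or SDK.

```python
# Sketch: collect the top label and score from each model key
# present in a moderation response. The response shape mirrors the
# toxicity example above; the helper itself is hypothetical.

def summarize(response: dict) -> dict:
    """Return {model_key: (label, score)} for every model result."""
    non_model_fields = {"flagged", "original"}  # assumed metadata keys
    summary = {}
    for key, value in response.items():
        # Model results are objects with a label and a score.
        if key in non_model_fields or not isinstance(value, dict):
            continue
        summary[key] = (value["label"], value["score"])
    return summary

response = {
    "flagged": False,
    "original": "I like ice cream",
    "toxicity": {
        "label": "NEUTRAL",
        "score": 0.977389501,
        "label_scores": {
            "TOXICITY": 0.022610499,
            "NEUTRAL": 0.977389501,
        },
    },
}

print(summarize(response))  # {'toxicity': ('NEUTRAL', 0.977389501)}
```

Iterating over keys rather than hard-coding model names means the same helper keeps working as you add or remove models from the project.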
