Introduction

In this guide we’ll enable users to report other profiles for a fictional dating application called Wizard Dating.

When users report a profile, we’ll add it to a review queue so that an admin can review it. If the admin decides to remove the profile, we’ll remove it from our application.

Goals

  • Get an overview of reported profiles
  • Remove reported profiles from our application
  • Analyze reported profiles to focus on the worst offenders

Setting up the dashboard

Let’s prepare everything in the dashboard first. We’ll need to create a project, a review queue, and a custom action.

Add a new project

We’ll start by creating a new project called Flagged Profiles and add a few models to it: the toxicity, NSFW, and propriety models.

We keep the flagging thresholds at their default values for now.

Add a new review queue

Next, let’s create a queue to review content submitted to our project. We’ll call it Reported Profiles.

To make sure that only reported profiles show up in this queue, we configure it to only show items submitted to the Flagged Profiles project.

We also make sure to configure the queue to show all items instead of just flagged items. This way we can see all the profiles that have been reported, even if they haven’t been flagged by the models in our project.

Lastly, we’ll configure the queue to only show items from the last month to keep it focused.

Add a removal action to the queue

Now that the dashboard is prepared to handle reported profiles, we also want to add an action to the queue so that we can manually remove profiles from our application.

We’ll call the action Remove profile and configure it to only show up in our newly created queue.

We also want the action to remove the profile from the queue once the action is executed, so we check “Action resolves items”.

Lastly, we configure it to call a webhook on our application servers.
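
When a moderator executes the Remove profile action, Moderation API will POST a payload to this webhook URL. Based on the fields we’ll read in our handler later, the payload looks roughly like the sketch below. Treat it as illustrative only: the real payload contains more fields, and any shape beyond type, action.name, and item.authorId is an assumption on our part, so check the webhook documentation.

Example webhook payload (illustrative)
// Illustrative only: shapes beyond type, action.name, and item.authorId
// are assumptions; see the webhook documentation for the full payload.
const examplePayload = {
  type: "QUEUE_ITEM_ACTION", // sent when a queue action is executed
  action: { name: "Remove profile" }, // the action that was clicked
  queue: { name: "Reported Profiles" }, // assumed shape
  item: {
    authorId: "wizard_123", // the profile id we'll submit from our app
    metadata: {
      // hypothetical values, echoing what we'll send along later
      url: "https://example.com/profiles/wizard_123",
      reportingUser: "witch_456",
    },
  },
};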

Application code

Next, we need to write some code in our application to handle user reports and profile removals.

Adding report functionality to our application

Back in our dating app, we’ll add a function in our code to handle user reports.

In this example we are using Moderation API’s Node SDK, but you can use the REST API with any language.
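
For reference, the raw HTTP request behind the SDK call we’re about to write looks roughly like the sketch below. The base URL is an assumption on our part, so check the REST API reference for the exact endpoint and payload.

Plain fetch (sketch)
// Rough sketch of calling the REST API directly, without the SDK.
// Assumption: the base URL below; verify it against the REST API reference.
const response = await fetch("https://moderationapi.com/api/v1/moderate/text", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${PROJECT_API_KEY}`, // your project API key
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    value: profile.bio,
    authorId: profile.id,
  }),
});

const analysis = await response.json();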

To use Moderation API in our application, we’ll need our project’s API key. You can find it in the project dashboard under Integrate in the sidebar.

We’ll call the moderate/text endpoint to analyze the reported profile and add it to the review queue.

Node.js
import ModerationApi from "@moderation-api/sdk";

const moderationApi = new ModerationApi({
  key: PROJECT_API_KEY,
});

const handleUserReport = async (profile, reportingUser) => {
  // Analyze the profile and add it to the review queue
  const analysis = await moderationApi.moderate.text({
    value: profile.bio,
    authorId: profile.id,
    metadata: {
      url: profile.url, // so we can quickly view the profile from the queue
      reportingUser: reportingUser.id, // just for our own reference
    },
  });

  // Checking if any of the models flagged the profile
  if (analysis.flagged) {
    // Here we could take immediate action like hiding the profile
  }
};
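
To wire this up, we can call handleUserReport from whatever endpoint our report button hits. Here’s a hypothetical Next.js API route as an example; the route path, the db lookup, and the getCurrentUser helper are placeholders for your own app, and handleUserReport is assumed to be exported from the file above.

Next.js (example report endpoint)
// pages/api/report.js (hypothetical path)
// db and getCurrentUser are placeholders for your own app's code.
import { handleUserReport } from "../../lib/moderation";

const handler = async (req, res) => {
  const { profileId } = req.body;

  // Look up the reported profile and the user submitting the report
  const profile = await db.profile.findUnique({ where: { id: profileId } });
  const reportingUser = await getCurrentUser(req);

  await handleUserReport(profile, reportingUser);

  return res.status(200).json({ reported: true });
};

export default handler;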

Handling the webhook

Next, we’ll add a webhook handler for the Remove profile action.

In this example we’re using Next.js and Moderation API’s Node SDK to verify the webhook signature. See the webhook documentation for more information.

Next.js webhook verification with Node SDK
import ModerationApi from "@moderation-api/sdk";
import { buffer } from "micro";

const moderationApi = new ModerationApi({
  key: PROJECT_API_KEY,
});

const handler = async (req, res) => {
  const webhookRawBody = await buffer(req);
  const webhookSignatureHeader = req.headers["modapi-signature"];

  // securely verify the webhook and get the payload
  const payload = await moderationApi.webhooks.constructEvent(
    webhookRawBody,
    webhookSignatureHeader,
    process.env.MODAPI_WEBHOOK_SECRET
  );

  const { type, queue, item, action } = payload;

  if (type === "QUEUE_ITEM_ACTION" && action.name === "Remove profile") {
    const { authorId } = item;

    // remove the user from our application
    await db.user.update({
      where: {
        userId: authorId,
      },
      data: {
        banned: true,
      },
    });
  }

  return res.status(200).send();
};

// disable body parser so we can access raw body
export const config = {
  api: {
    bodyParser: false,
  },
};

export default handler;

Using our review queue

Now that we have everything in place we can start using our review queue.

We’ve already received a few reports from users on our wizard dating app, so let’s go through and review each of them.

Upon opening the queue we can see that we have 3 items waiting to be reviewed. Using the chart, we can also see we have some unresolved items under TOXICITY and THREAT, so we might want to focus on those first.

Focusing on the worst offenders

To focus on profiles flagged by our model analysis, we can use the queue filter.

We can do this by opening the filter and selecting the labels we want to focus on. A faster way is to click the labels in the chart directly.

After setting the filter, we see 1 item with these serious labels.

Removing a profile

Upon opening the first item, we see the content:

“I hate wizards and I will curse you if you message me!”

We can see that the content has been correctly flagged with the labels THREAT and TOXICITY.

This profile is clearly not a good fit for our application, so we’ll remove it by clicking on the Remove profile action. This will immediately call our webhook and remove the item from the queue as we’ve configured.

Keeping a profile

Next, let’s open the filter and press Reset so we can see all the remaining items again.

Now we see that we have 2 items left to review.

Let’s open the first one:

“I’m actually a muggle but I’m looking for a something magical”

This item has not been flagged by any of the models, but it has been reported by a user.

Notice (in the screenshot above) how we can see all the metadata we’ve submitted with the item. The profile URL is clickable, so we can quickly open the profile and check it for any other issues.

Everything seems fine, so we just press the Resolve button to remove the item from the queue.

The next item automatically opens:

“I am a wizard looking for a witch to cast a spell on me ✨”

This item also seems fine, so we’ll resolve it as well.

All done!

Now the queue is empty and we’ve successfully reviewed all the reported profiles.

Here’s what we’ve accomplished:

  • Added a way for users to report profiles
  • Added a review queue to handle these reports
  • Added a webhook to remove profiles from our application

Next steps

  • Invite moderators to your queue
  • Use the data to train a model to recognize bad profiles
  • Analyze profiles on signup (see the sketch below)
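
Analyzing profiles on signup can reuse the same moderate.text call from before. Here’s a minimal sketch, where the handleSignup function and the hideProfile helper are hypothetical placeholders for your own app:

Node.js (sketch)
// Run a new bio through the same project at signup.
// handleSignup and hideProfile are hypothetical; adapt them to your own app.
const handleSignup = async (profile) => {
  const analysis = await moderationApi.moderate.text({
    value: profile.bio,
    authorId: profile.id,
  });

  if (analysis.flagged) {
    // e.g. keep the profile hidden until a moderator has reviewed it
    await hideProfile(profile.id);
  }
};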
