Moderation


Last updated 1 year ago


Content moderation is an essential part of maintaining a safe and healthy online environment. It helps to prevent the spread of hate speech, harassment, and other forms of harmful content that can negatively impact individuals and communities. Effective content moderation also helps to prevent the dissemination of fake news and misinformation, which can be particularly damaging during times of crisis.

There are several different approaches to content moderation, ranging from automated tools to human oversight. Automated tools use algorithms and machine learning to identify and flag potentially problematic content, while human moderators review flagged content and make decisions on whether to remove or allow it. Some platforms use a combination of both approaches for more effective content moderation.

Content moderation can be a challenging and complex task, particularly in large online communities with millions of users. It requires a deep understanding of community standards and policies, as well as the ability to make quick and informed decisions. Despite these challenges, content moderation remains an essential part of maintaining a safe and healthy online environment for all users.

The platform has built-in moderation: users can report posts, and the admin can review the reports in the backoffice and decide whether to approve or delete the posts.

By default, for any new community, all posts require the administrator's approval. You can disable this in the backoffice under Manage Features -> Manage Posts.
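The default approval gate can be sketched as follows. This is a minimal illustration, not the platform's actual code: the setting name `require_post_approval` and the `status` values are assumptions standing in for the Manage Features -> Manage Posts toggle described above.

```python
def publish_post(post: dict, settings: dict) -> dict:
    """Hypothetical sketch: new posts stay 'pending' until an admin
    approves them, unless approval has been disabled in the backoffice.
    'require_post_approval' is an assumed setting name, on by default."""
    if settings.get("require_post_approval", True):
        post["status"] = "pending"    # held for review in the backoffice
    else:
        post["status"] = "published"  # goes live immediately
    return post
```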

  1. The Google Vision API is used to filter adult content by blurring the post or deleting it.

  • Go to the Google Cloud console and log in using your Google credentials.

  • After this, click on the link to create a new project.

  • Enter a Project Name and click on the Create button.

  • Select the project from the dropdown menu beside Google Cloud Platform.

  • Click on APIs & Services, then click on +ENABLE APIS AND SERVICES.

  • Search for Cloud Vision API and enable it.

  • Once enabled, click on Credentials in the top nav-bar, click on +Create credentials, then click on API key.

  • Grab the key and click Close.

  • Go to Admin Panel -> Settings -> Post Settings -> Post Settings -> Adult Images Settings, and edit the following options:

    1. Enable Adult Images Filtration by clicking on the toggle.

    2. Vision API key - Enter the API key you created in the previous steps.
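Under the hood, the Vision API's SafeSearch annotation returns a likelihood rating for adult content, and the platform acts on it. The decision logic could look like the sketch below; the thresholds and the blur/delete mapping are assumptions, though the likelihood values (`VERY_UNLIKELY` through `VERY_LIKELY`) are the Vision API's real enum. The key from the previous steps would be sent to the real endpoint `https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY`.

```python
# Cloud Vision SafeSearch likelihood scale, in ascending order.
LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
               "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def moderate_image(safe_search: dict,
                   blur_at: str = "POSSIBLE",
                   delete_at: str = "VERY_LIKELY") -> str:
    """Hypothetical sketch: map a SafeSearch annotation to an action.
    Returns 'delete', 'blur', or 'allow'. Thresholds are assumptions."""
    adult = LIKELIHOODS.index(safe_search.get("adult", "UNKNOWN"))
    if adult >= LIKELIHOODS.index(delete_at):
        return "delete"
    if adult >= LIKELIHOODS.index(blur_at):
        return "blur"
    return "allow"
```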

  1. Approval system for blogs: once enabled, the admin needs to approve each blog before it is published by users. Additionally, the admin can limit how many times users can post in one hour to prevent spam.
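The posts-per-hour limit described above is essentially a sliding-window rate limiter. A minimal sketch, assuming an in-memory store (the class and method names are illustrative, not the platform's real API):

```python
import time
from collections import deque
from typing import Deque, Dict, Optional

class HourlyPostLimiter:
    """Hypothetical sketch: allow at most max_posts per user per hour."""

    def __init__(self, max_posts: int, window: float = 3600.0):
        self.max_posts = max_posts
        self.window = window
        self._history: Dict[str, Deque[float]] = {}

    def allow_post(self, user_id: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        q = self._history.setdefault(user_id, deque())
        # Drop timestamps that have fallen out of the one-hour window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_posts:
            return False  # over the limit: reject as likely spam
        q.append(now)
        return True
```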

  1. Reserved usernames: helps the platform reserve public personalities' usernames, which could easily be misused.

  1. Post (photo/video/article) DELETE: option for the admin to delete posts directly from the backend.

  2. REPORT post: option for users to report a post, which will be shown to administrators in the backend.

  3. Actions on users: ban a user, or delete a user permanently.

  4. Post/blog approval: admins can choose whether to approve each post/blog before it is published.
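The report-and-review workflow behind items 2 and 3 can be sketched as a simple queue that the backoffice reads from. Again, this is an assumed illustration rather than the platform's actual implementation:

```python
class ModerationQueue:
    """Hypothetical sketch of the report -> admin-review workflow."""

    def __init__(self):
        self.reports = []          # (post_id, reporter_id) pairs
        self.banned_users = set()

    def report_post(self, post_id: str, reporter_id: str) -> None:
        """A user flags a post; it surfaces in the admin backoffice."""
        self.reports.append((post_id, reporter_id))

    def reported_posts(self) -> dict:
        """What the admin sees: each reported post with its report count."""
        counts = {}
        for post_id, _ in self.reports:
            counts[post_id] = counts.get(post_id, 0) + 1
        return counts

    def ban_user(self, user_id: str) -> None:
        """Admin action on a user, as listed above."""
        self.banned_users.add(user_id)
```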
