Moderation Policy

Content Review Framework and Principles

Our content moderation approach balances the protection of user safety with respect for freedom of expression and diverse viewpoints. We recognize that content moderation involves complex judgments about context, intent, and cultural differences, and we strive to make decisions that are fair, consistent, and transparent. Our moderation framework is built on the principle that users should be able to express themselves authentically while being protected from content that could cause harm.

We employ a combination of automated systems and human reviewers to moderate content at scale while maintaining accuracy and nuance in our decisions. Our automated systems are designed to detect potentially violating content and prioritize it for human review, rather than making final moderation decisions without human oversight. Human moderators receive extensive training on our policies, cultural sensitivity, and the importance of considering context when making moderation decisions.

Automated Detection and Human Review Process

Our automated content moderation systems use advanced machine learning algorithms trained on millions of examples of policy-violating content. These systems can detect various types of harmful content, including hate speech, harassment, violent content, and spam, with increasing accuracy and speed. However, we recognize that automated systems have limitations, particularly in understanding context, sarcasm, and cultural nuances.

All automated moderation decisions are subject to human review, either proactively or upon user appeal. Our human moderators are trained specialists who understand the cultural and contextual factors that automated systems may miss. They work in teams that include native speakers of various languages and experts in different cultural contexts to ensure that moderation decisions are appropriate across our global user base.
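As a simplified illustration of this flag-and-route pattern, the sketch below shows how automatically flagged content could be queued for human review by policy severity rather than removed by the classifier itself. The class names, threshold, and policy areas are illustrative assumptions, not a description of our production systems.

```python
from dataclasses import dataclass, field
from enum import Enum
import heapq

class ReviewPriority(Enum):
    URGENT = 1    # e.g. child safety, credible threats of violence
    HIGH = 2      # e.g. harassment, hate speech
    ROUTINE = 3   # e.g. suspected spam

@dataclass(order=True)
class ReviewTask:
    priority: int
    item_id: str = field(compare=False)
    classifier_score: float = field(compare=False)
    policy_area: str = field(compare=False)

class TriageQueue:
    """Routes automatically flagged content to human reviewers;
    the classifier never removes content on its own (illustrative sketch)."""

    def __init__(self, flag_threshold: float = 0.7):  # threshold is hypothetical
        self.flag_threshold = flag_threshold
        self._queue: list[ReviewTask] = []

    def ingest(self, item_id: str, score: float, policy_area: str) -> None:
        # Below the threshold nothing happens; above it the item is
        # queued for a human decision, ordered by policy severity.
        if score < self.flag_threshold:
            return
        priority = {
            "child_safety": ReviewPriority.URGENT,
            "violent_threat": ReviewPriority.URGENT,
            "harassment": ReviewPriority.HIGH,
            "hate_speech": ReviewPriority.HIGH,
        }.get(policy_area, ReviewPriority.ROUTINE)
        heapq.heappush(
            self._queue,
            ReviewTask(priority.value, item_id, score, policy_area),
        )

    def next_for_review(self) -> ReviewTask | None:
        # Human reviewers pull the most urgent pending item first.
        return heapq.heappop(self._queue) if self._queue else None
```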

Enforcement Actions and Graduated Response

Our enforcement approach is designed to be educational and rehabilitative rather than purely punitive. For first-time or minor violations, we may issue warnings, provide educational resources, or temporarily limit certain account features while allowing the user to continue using our platform. This graduated approach recognizes that many policy violations result from misunderstanding rather than malicious intent.

For more serious violations or repeat offenses, we may implement temporary suspensions of varying lengths, depending on the severity of the violation and the user's history on our platform. Permanent bans are reserved for the most serious violations, such as content that promotes violence, exploits children, or represents coordinated attempts to manipulate our platform. We maintain detailed records of all enforcement actions to ensure consistency and to track patterns of behavior.
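A minimal sketch of this graduated escalation logic is shown below. The severity tiers, action names, and repeat-offense thresholds are hypothetical and chosen only to illustrate how enforcement scales with severity and account history.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Severity(Enum):
    MINOR = auto()     # e.g. mild spam, borderline language
    SERIOUS = auto()   # e.g. targeted harassment
    SEVERE = auto()    # e.g. promotion of violence, child exploitation

class Action(Enum):
    WARNING = auto()
    FEATURE_LIMIT = auto()
    TEMPORARY_SUSPENSION = auto()
    PERMANENT_BAN = auto()

@dataclass
class EnforcementRecord:
    prior_violations: int

def decide_action(severity: Severity, record: EnforcementRecord) -> Action:
    """Illustrative graduated response: escalate with severity and with
    the user's history, reserving permanent bans for the most severe cases."""
    if severity is Severity.SEVERE:
        return Action.PERMANENT_BAN
    if severity is Severity.SERIOUS:
        return (Action.TEMPORARY_SUSPENSION
                if record.prior_violations >= 1
                else Action.FEATURE_LIMIT)
    # Minor violations: educate first, escalate only on repetition.
    if record.prior_violations == 0:
        return Action.WARNING
    if record.prior_violations < 3:
        return Action.FEATURE_LIMIT
    return Action.TEMPORARY_SUSPENSION
```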

Appeals Process and Independent Review

We believe that users should have the right to challenge our moderation decisions, and we have established a comprehensive appeals process that includes multiple levels of review. Users can appeal any moderation action through our platform, and appeals are reviewed by moderators who were not involved in the original decision. For complex cases or significant enforcement actions, appeals may be reviewed by senior moderation specialists or subject matter experts.
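The routing rule for appeals can be thought of in the following terms; the function and field names below are hypothetical and only illustrate the separation between the original decision-maker and the appeal reviewer, and the escalation of complex cases to senior specialists.

```python
import random
from dataclasses import dataclass

@dataclass
class Appeal:
    case_id: str
    original_reviewer: str
    is_complex: bool  # e.g. a significant enforcement action

def assign_appeal_reviewer(appeal: Appeal,
                           moderators: set[str],
                           senior_specialists: set[str]) -> str:
    """Illustrative routing: an appeal never returns to the person who made
    the original decision, and complex cases go to senior specialists."""
    pool = senior_specialists if appeal.is_complex else moderators
    eligible = pool - {appeal.original_reviewer}
    if not eligible:
        raise RuntimeError("no eligible reviewer available")
    return random.choice(sorted(eligible))
```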

We are committed to establishing an independent oversight board that includes external experts in human rights, free expression, and digital governance to review certain types of moderation decisions and provide recommendations for policy improvements. This board will have the authority to overturn our moderation decisions in certain cases and will publish regular reports on content moderation trends and challenges.

Cultural Sensitivity and Global Considerations

Content moderation must account for cultural differences, local laws, and varying social norms across the diverse communities that use our platform. We employ moderators from different cultural backgrounds and provide specialized training on cultural sensitivity and local context. Our policies are designed to protect users from harm while respecting cultural differences in communication styles and social norms.

We recognize that certain content may be acceptable in some cultural contexts but harmful in others, and we strive to make moderation decisions that consider the intended audience and cultural context of the content. We also work with local experts and community leaders to understand cultural nuances and ensure that our moderation practices are appropriate for different regions and communities.

Transparency and Accountability Measures

We publish regular transparency reports that provide detailed statistics on our content moderation activities, including the volume and types of content removed, the number of accounts suspended or banned, and the outcomes of appeals. These reports also include information about government requests for content removal or user data, helping users understand how external pressures may affect content moderation decisions.

We are committed to ongoing dialogue with users, civil society organizations, and policy experts about our moderation practices and policies. We regularly solicit feedback through public consultations, academic partnerships, and community forums, and we use this feedback to improve our policies and practices. We also participate in industry initiatives to develop best practices for content moderation and platform governance.

Specialized Review Teams and Expertise

We maintain specialized review teams for different types of content and policy areas, including teams focused on harassment and abuse, violent extremism, misinformation, child safety, and intellectual property. These teams include experts with relevant backgrounds, such as former law enforcement officers, mental health professionals, academics, and civil rights advocates.

Our specialized teams receive ongoing training and support to help them deal with the psychological challenges of reviewing harmful content. We provide mental health resources, regular rotation opportunities, and other support services to ensure the wellbeing of our moderation teams while maintaining the quality and consistency of our content review processes.
