Community Guidelines

Creating a Safe and Inclusive Environment

Our community guidelines are designed to foster a safe, respectful, and inclusive environment where all users can express themselves authentically while treating others with dignity and respect. We believe in the power of diverse voices and perspectives, and we're committed to protecting users from harm while preserving their right to free expression. These guidelines apply to all content shared on our platform, including posts, comments, messages, profiles, and any other form of communication.

We recognize that context matters in communication, and our content moderation takes into account cultural differences, current events, and the intent behind content. However, certain types of harmful content are prohibited regardless of context, including content that promotes violence, harassment, or discrimination against individuals or groups based on protected characteristics such as race, ethnicity, religion, gender, sexual orientation, or disability.

Prohibited Content and Behavior

We prohibit content that promotes, encourages, or facilitates violence or harm against individuals or groups. This includes threats of violence, promotion of terrorism, glorification of violent events, and content that could lead to imminent harm. We also prohibit harassment and bullying, including targeted harassment campaigns, doxxing (sharing private information without consent), and persistent unwanted contact that is intended to intimidate or harm.

Hate speech that attacks or dehumanizes individuals or groups based on protected characteristics is not allowed on our platform. This includes slurs, derogatory stereotypes, and content that promotes discrimination or segregation. We recognize that humor, satire, and commentary on public issues are important forms of expression, but we draw the line at content that crosses into harassment or promotes hatred toward vulnerable groups.

Content that depicts or promotes self-harm, suicide, or eating disorders is prohibited, as such content can be harmful to vulnerable users. We provide resources for users who may be struggling with these issues and work with mental health organizations to ensure appropriate support is available. We also prohibit content that exploits minors, including any sexual or suggestive content involving anyone under 18.

Spam and Inauthentic Behavior

We prohibit spam, which includes repetitive, unsolicited, or irrelevant content that disrupts the user experience. This encompasses promotional content posted excessively, chain letters, and coordinated inauthentic behavior designed to manipulate our platform or mislead users. We use both automated systems and human review to detect and remove spam.

Fake accounts and impersonation are not allowed on our platform. Users must represent themselves authentically and may not impersonate others, including public figures, brands, or organizations. We verify the identity of public figures and high-profile accounts to help users distinguish authentic accounts from impersonators. Users who create multiple accounts to evade bans or manipulate our systems will have all associated accounts suspended.

Intellectual Property and Copyright

Users must respect the intellectual property rights of others and may not post content that infringes on copyrights, trademarks, or other intellectual property rights. We comply with the Digital Millennium Copyright Act (DMCA) and provide a process for copyright holders to report infringement. Users who repeatedly violate intellectual property rights may have their accounts suspended or terminated.

We encourage users to create original content and to properly attribute content from other sources. When sharing content created by others, users should obtain permission when required and provide appropriate credit. Fair use principles may apply in certain circumstances, but users are responsible for ensuring their use of copyrighted material is legally permitted.

Misinformation and Harmful False Information

We are committed to reducing the spread of misinformation that could cause harm to individuals or society. This includes false information about public health, elections, natural disasters, and other topics where misinformation could lead to real-world harm. We work with fact-checking organizations and other partners to identify and address misinformation.

When we identify potentially false information, we may reduce its distribution, add warning labels, or direct users to authoritative sources. We prioritize addressing misinformation that could lead to imminent harm, such as false medical advice or false information about emergencies. We also provide users with tools to report suspected misinformation and educate them about how to identify reliable sources.

Reporting and Enforcement

We provide multiple ways for users to report content that violates our community guidelines, including in-app reporting tools, email support, and specialized reporting channels for urgent safety issues. We take all reports seriously and review them as quickly as possible, with priority given to reports involving safety risks or illegal content.

Our enforcement actions are designed to be proportional to the severity of the violation and may include content removal, account warnings, temporary suspensions, or permanent bans. We consider factors such as the nature of the violation, the user's history on our platform, and the potential for harm when determining appropriate enforcement actions. Users have the right to appeal our moderation decisions, and we provide a clear appeals process with human review.

Supporting Vulnerable Users

We recognize that certain users may be more vulnerable to harm, including minors, victims of abuse, and members of marginalized communities. We provide additional protections for these users, including specialized reporting mechanisms, priority review of reports involving vulnerable users, and partnerships with organizations that provide support services.

We also provide resources and support for users who may be experiencing mental health issues, domestic violence, or other crises. Our platform includes links to helplines, support organizations, and educational resources, and we train our content moderation teams to recognize and appropriately respond to users in crisis.

Copyright © 2025 Digizenship.

Contact Information

Digizenship Ltd, 86-90 Paul Street, London, England, EC2A 4NE

support@digizenship.com

