Community Guidelines

The rules and expectations for participating in the Fluxer community. Help us keep Fluxer safe and welcoming.

Last updated March 10, 2026

Effective date: March 10, 2026

Our commitment to you

Fluxer exists to help people communicate, connect, and build communities. These guidelines describe the standards we hold ourselves and every user to. They form part of our Terms of Service, and violations may result in enforcement action as described in the Enforcement section below.

We've written these guidelines to be specific rather than vague. Vague rules lead to inconsistent enforcement and erode trust, so where a rule could be read more than one way, we've tried to resolve that here rather than leaving it to individual moderators. Where context matters, we explain how we weigh it.

These guidelines apply to every user, in every space on Fluxer, without exception. That includes direct messages, Community channels, voice and video chats, user profiles, statuses, custom emojis, usernames, bios, and any other area where users interact or share content.

Individual Communities may adopt additional rules that are stricter than these guidelines, but never more permissive. Where there's a conflict, these guidelines and our Terms of Service take precedence.

The golden rule

Treat others with respect and consideration. Behind every username is a real person who deserves basic dignity.

If you wouldn't want something done or said to you, don't do or say it to someone else. When in doubt, choose kindness.

Building a good community

We want Fluxer to be safe, welcoming, and constructive for everyone, including people from marginalised and underrepresented communities who are often made to feel unwelcome on other platforms. Here's how you can help.

Assume good intent. When something is unclear, ask for clarification before reacting. Misunderstandings happen, especially across languages and cultures.

Respect identity. Use the names, pronouns, and terms that people use for themselves. If you're unsure of someone's pronouns, it's always fine to ask respectfully or to use their username or display name.

Use content warnings and age markings. If you discuss potentially distressing, graphic, or adult topics, label them clearly and place them in age-appropriate spaces.

Set clear community rules. If you run a Community, make your rules clear, accessible, and easy to find. Make sure they align with these guidelines and local law, and enforce them fairly and consistently.

Help new members. New members may not know your Community's rules yet, so help them learn rather than dogpiling or retaliating when they get something wrong. If moderation is needed, use the moderation tools.

Protect privacy. Share only what's necessary and be cautious about exposing personal information, yours or anyone else's.

Disagree constructively. Challenge ideas, not people. Disagreement is healthy; personal attacks, harassment, and demeaning behaviour are not.

Report, don't retaliate. If you see behaviour that's dangerous, abusive, or clearly violates these guidelines, report it rather than amplify it. Don't engage in "vigilante justice."

Prohibited conduct

The following sections describe conduct that is strictly prohibited on Fluxer. Each section explains what is prohibited, gives concrete examples, identifies relevant exceptions, and describes how we assess borderline cases.

1. Harassment and bullying

You must not engage in harassment, bullying, or threatening behaviour towards any person or group.

1.1 Sustained or targeted harassment. Repeated hostile, degrading, or intimidating behaviour directed at a specific person or group, including following someone across Communities or channels to continue unwanted interactions.

1.2 Threats. Direct or implied threats of violence, harm, or other adverse action against any person, including conditional threats ("if you don't do X, I'll do Y").

1.3 Doxxing. Sharing or threatening to share someone's personal or identifying information without their explicit consent. This covers real names, home addresses, workplaces, phone numbers, email addresses, financial information, government-issued identification, and any information that could be used to locate, contact, or identify someone against their will. This applies regardless of whether the information is technically "public." Aggregating and weaponising public information is still doxxing.

1.4 Unwanted contact. Continuing to contact, message, or interact with someone after they've clearly asked you to stop, or after you've been blocked or removed.

1.5 Sexual harassment. Unwanted sexual comments, advances, innuendo, requests for sexual content, or sexually explicit messages directed at someone who hasn't consented to receive them.

1.6 Pile-ons and coordinated attacks. Organising, encouraging, or participating in coordinated attacks, pile-ons, or mass harassment against an individual or group, whether on Fluxer or by directing others to harass someone on another platform.

1.7 Encouraging harm. Encouraging, inciting, or instructing others to engage in harassment or harmful behaviour toward a specific person or group.

How we assess harassment cases. We consider the frequency and duration of the behaviour, the severity and nature of the conduct, the power dynamics between the parties (for example, a Community Owner targeting a new member), whether the target asked the person to stop, whether the behaviour forms part of a pattern, and the impact on the target's ability to use the platform safely.

2. Hate speech and discrimination

You must not attack, demean, dehumanise, or incite hatred or violence against people based on protected characteristics.

Protected characteristics. The following characteristics are explicitly protected on Fluxer: race, ethnicity, colour, national origin, or ancestry; immigration or citizenship status; caste; religion, faith, or lack thereof; sex; gender, gender identity, or gender expression; sexual orientation; sex characteristics, including intersex status; disability, chronic illness, or medical condition; neurodivergence; age or generational status; pregnancy or parental status; veteran or military status; socioeconomic status or housing status; and physical appearance, including body size.

This list is intentionally broad. We may also protect other characteristics where the context makes clear that someone is being targeted for who they are.

How we categorise hate speech. We sort hate speech into three tiers based on severity, and each tier carries a different enforcement response.

Tier 1 — Dehumanisation and incitement (most severe). This covers content that dehumanises people (comparing them to animals, insects, diseases, filth, subhuman entities, or objects), calls for violence, killing, or physical harm against a protected group, calls for exclusion, segregation, or denial of fundamental rights, or denies or celebrates well-documented atrocities or genocides targeting a protected group. Tier 1 content is removed immediately and typically results in account suspension or termination.

Tier 2 — Statements of inferiority, contempt, and stereotyping. This covers content that asserts members of a protected group are inherently inferior, less intelligent, morally deficient, or otherwise lesser; that generalises negative stereotypes as inherent traits of a group; that expresses contempt, disgust, or hatred toward a group as a whole; or that uses imagery, memes, or symbols historically associated with hatred of a protected group in a celebratory or affirming way. Tier 2 content is removed and typically results in a warning for first-time violations, escalating for repeat offences.

Tier 3 — Slurs, exclusion, and demeaning language. This covers content that uses slurs or derogatory terms targeting protected groups, calls for exclusion from Communities based on protected characteristics (unless the Community's purpose requires it, for example a women's support group may limit membership), or mocks, ridicules, or demeans someone specifically because of a protected characteristic. Tier 3 content is assessed contextually. Self-referential use of reclaimed language by members of the relevant group is generally permitted (see Exceptions below).

2.1 LGBTQ+ specific protections. Fluxer is committed to being a safe and affirming platform for lesbian, gay, bisexual, transgender, queer, intersex, asexual, and all other gender and sexual minority (LGBTQ+) users. The following are specifically prohibited as forms of hate speech.

Targeted misgendering and deadnaming. Deliberately and repeatedly referring to a person by a gender, name, or pronouns that don't align with their gender identity, after being informed of or having reasonable access to their correct name or pronouns. This includes using someone's birth name ("deadname") against their wishes to harass, demean, or invalidate their identity. To be clear: this rule is about deliberate, targeted behaviour intended to cause harm. Honest mistakes happen. If you accidentally use the wrong name or pronoun and correct yourself when informed, that's not a violation. The focus is on persistent, intentional refusal to respect someone's identity.

Denial of identity. Content that denies the existence or validity of transgender, nonbinary, intersex, or other gender identities, or that denies the existence or validity of sexual orientations, when directed at or about specific individuals or used to advocate for discrimination. This includes claims that being transgender is a mental illness, a delusion, or a choice that can be "cured."

Conversion therapy advocacy. Promoting, advertising, or providing instructions for conversion therapy or any programme, practice, or intervention that attempts to change a person's sexual orientation, gender identity, or gender expression. This prohibition extends to content that frames conversion practices as legitimate medical treatment, spiritual guidance, or parental responsibility.

Sexualisation and fetishisation. Reducing LGBTQ+ people to their sexual orientation or gender identity in a degrading or objectifying way, or treating LGBTQ+ identities as inherently sexual, deviant, or predatory.

Outing. Revealing or threatening to reveal someone's sexual orientation, gender identity, or intersex status without their explicit consent.

2.2 Protecting gender-affirming healthcare discussions. Fluxer recognises that access to gender-affirming healthcare matters greatly to transgender, nonbinary, and intersex people. We draw a clear line between protected discussion and prohibited content.

Content that is allowed includes sharing personal experiences with gender-affirming care (including hormone therapy, surgery, and other treatments), providing peer support, sharing resources, and discussing healthcare options. Also allowed are sharing medical information consistent with the consensus of major medical organisations (such as the World Health Organization, the American Medical Association, the Endocrine Society, and the World Professional Association for Transgender Health); advocating for healthcare access, insurance coverage, or policy changes; coming-out discussions and identity exploration; and discussing experiences of detransition in a personal, supportive, or informational context.

Content that is not allowed includes promoting conversion therapy or practices designed to change someone's sexual orientation or gender identity, deliberately spreading medical misinformation that contradicts established scientific and medical consensus in order to deny transgender identities or discourage evidence-based care (for example, falsely claiming that all gender-affirming care is experimental or has been "debunked"), using concern about healthcare as a pretext to deny, mock, or undermine transgender identities, and targeting individuals who have shared their healthcare experiences with harassment or ridicule.

How we assess borderline cases. Good-faith discussion of healthcare policy, medical research, individual experiences (including critical ones), and evolving scientific understanding is permitted. We distinguish between genuine engagement with complex topics and bad-faith efforts to delegitimise transgender people or deny them healthcare. When assessing content in this area, we look at whether the content engages with evidence and arguments in good faith, whether it targets specific individuals or communities with hostility, whether it uses medical or scientific framing as a pretext for harassment or identity denial, and the overall context, including the Community where it was posted, the user's history, and the discussion thread.

2.3 Exceptions to hate speech rules. The following are generally not considered violations.

Self-referential use of reclaimed language. Members of a group may use reclaimed terms (including slurs) to refer to themselves or within their community. We assess this contextually: use of reclaimed language in a space primarily composed of that community is treated differently from use directed at strangers.

Academic, educational, and documentary content. Discussion of hate speech, discrimination, and historical atrocities in academic, educational, journalistic, or documentary contexts is permitted when the purpose is to inform, educate, analyse, or condemn. The content must not itself promote hatred, and should include appropriate framing and context.

Counter-speech. Calling out, criticising, or arguing against hateful content or ideologies is protected speech. Quoting hateful content in order to condemn it is not itself a violation.

Satire and commentary. Clearly satirical content that critiques power structures, ideologies, or prejudice may be permitted. Satire that targets marginalised groups rather than critiquing prejudice against them is not protected by this exception.

3. Violence and graphic content

You must not share or promote any of the following: real-world graphic depictions of violence, gore, mutilation, or animal cruelty (including photographs, videos, or realistic recordings); content that promotes, encourages, glorifies, or provides instructions for self-harm, suicide, or harm to others; detailed instructions or encouragement for violence or illegal activities; or content that glorifies, celebrates, or promotes violence, violent extremism, or terrorism.

Scope. This restriction targets real-world media. Media presented as real-world, even if generated, edited, or manipulated, is treated the same as actual real-world footage. Fictional or artistic depictions of violence (for example, drawings, animation, game content, or horror) are permitted in age-gated spaces with clear content warnings, provided they aren't presented as real-world footage, aren't used to glorify real-world violence or target a specific person, and aren't so extreme as to have no purpose other than shock.

Contextual allowances. Non-graphic discussion of difficult topics is permitted in appropriate contexts (for example, news, educational content, or historical analysis). Such content must include clear content warnings, be restricted to age-gated spaces when likely to be distressing, and must not glorify or encourage the violence being discussed.

3a. Terrorism and violent extremism

Fluxer must not be used to promote, support, recruit for, or coordinate terrorism or violent extremism. This covers content that recruits for, incites, or provides material support to terrorist organisations or violent extremist movements; propaganda, manifestos, or instructional materials produced by or on behalf of designated terrorist organisations; glorification or celebration of terrorist attacks, mass violence, or their perpetrators; and coordination, planning, or operational activity related to acts of terrorism or violent extremism.

EU Terrorism Content Online Regulation. Where we receive a removal order from a competent authority under Regulation (EU) 2021/784, we'll remove or disable access to the identified content within one hour of receiving the order, as required. Content that falls under this section may also be reported to relevant law enforcement authorities where required or permitted by law. We preserve removed content for six months for law enforcement purposes as the regulation requires.
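To make those two clocks concrete, here is a minimal sketch in Python (an illustration only, using the python-dateutil package for calendar-month arithmetic; exactly how "six months" is counted is a legal question that code doesn't settle):

    from datetime import datetime, timedelta, timezone
    from dateutil.relativedelta import relativedelta

    # Hypothetical receipt time of a removal order, in UTC.
    received_at = datetime(2026, 3, 10, 9, 30, tzinfo=timezone.utc)

    removal_deadline = received_at + timedelta(hours=1)         # remove within one hour
    preservation_until = received_at + relativedelta(months=6)  # preserve for six months

    print(removal_deadline)    # 2026-03-10 10:30:00+00:00
    print(preservation_until)  # 2026-09-10 09:30:00+00:00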

Exceptions. Legitimate news reporting, academic research, counter-extremism education, historical analysis, and artistic expression are not prohibited, provided they don't themselves glorify or promote the acts described above.

4. Sexual content and protection of minors

Zero-tolerance framework. We have absolute zero tolerance for child sexual exploitation in any form. The rules in this section are among the most strictly enforced on the platform.

4.1 Child sexual abuse material (CSAM). CSAM — sexual or sexually suggestive imagery depicting real children — is strictly prohibited and will be immediately reported to law enforcement as required by law. We use automated tools to detect and prevent CSAM in uploaded media (see our Privacy Policy for details). This prohibition includes realistic AI-generated or digitally manipulated imagery that is indistinguishable from photographs of real children. Violations result in immediate and permanent account termination with no appeal, and reporting to the National Center for Missing & Exploited Children (NCMEC) and/or relevant authorities.

4.2 Sexualisation of real minors. No user may share, distribute, request, or create sexual or sexually suggestive content depicting a real, identified minor in any medium, including text, imagery, audio, or AI-generated content. This applies regardless of the relationship between the user and the minor.

4.3 Fictional depictions of minors. Sexual or sexually suggestive content featuring fictional characters who are explicitly described as minors, or who are unambiguously depicted as prepubescent, is prohibited in all spaces, without exception. This includes drawn, animated, AI-generated, and written content where the character is clearly a child. We assess fictional content based on the totality of context: stated age, narrative framing, visual presentation, and the setting in which the character appears. This rule doesn't apply to non-sexual coming-of-age narratives, survivor stories, educational content, or literary works that depict difficult subject matter without sexualising it.

4.4 Grooming. Using the platform to build a relationship with a minor for the purpose of sexual exploitation is strictly prohibited, regardless of whether explicit content is involved. Grooming behaviours include building inappropriate emotional intimacy with a minor; attempting to isolate a minor from trusted adults or support systems; gradually introducing sexual topics or content to a minor; requesting personal information, photos, or private communication from a minor in a sexualised context; and offering gifts, money, or special treatment to a minor in exchange for personal information or intimate interaction.

4.5 Users under 18. If you're under 18, you must not engage with, share, or distribute any sexual or sexually suggestive content on the platform, including in age-gated spaces.

4.6 Adult content. Sexual and explicit content involving adults is permitted only in clearly marked 18+ spaces. Communities must apply an age restriction to the Community as a whole, to individual channels, or both. We may restrict or remove Communities that fail to enforce these requirements. Community Owners are responsible for making sure age gating is properly applied.
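To illustrate how these gates combine, here is a minimal sketch (in Python, with hypothetical names rather than Fluxer's actual data model): a channel is treated as 18+ if either the channel or its parent Community carries the restriction, so the stricter setting always wins.

    from dataclasses import dataclass

    @dataclass
    class Community:
        name: str
        age_restricted: bool  # restriction applied to the Community as a whole

    @dataclass
    class Channel:
        name: str
        community: Community
        age_restricted: bool  # restriction applied to this channel only

    def requires_18_plus(channel: Channel) -> bool:
        # The stricter setting wins: a gate at either level applies.
        return channel.age_restricted or channel.community.age_restricted

    # Example: an 18+ channel inside an otherwise all-ages Community.
    art = Community("art-club", age_restricted=False)
    mature = Channel("mature-art", art, age_restricted=True)
    assert requires_18_plus(mature)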

4.7 Non-consensual intimate media. Sharing intimate images, videos, or recordings of any person without their explicit consent is strictly prohibited. This includes "deepfakes" and AI-generated or digitally manipulated content that depicts someone in an intimate context without their permission; "revenge porn" and sexually explicit content shared to shame, coerce, or harm someone; voyeuristic content captured without the subject's knowledge or consent; and threatening to share intimate content to coerce, blackmail, or intimidate.

4.8 Sexual exploitation. Using the platform to facilitate sexual exploitation of any person, including sex trafficking, coerced sexual labour, or commercial sexual exploitation of minors, is strictly prohibited and will be reported to law enforcement.

5. Illegal activities

You must not use Fluxer to facilitate, promote, or engage in illegal activities. This includes distribution or promotion of malware, viruses, or harmful software; fraud, scams, or deceptive practices (including phishing, impersonation, and financial scams); sale, distribution, or promotion of illegal goods, services, or controlled substances; copyright infringement or other intellectual property violations at scale or in a clearly abusive manner; hacking, unauthorised access, or cyberattacks; money laundering, terrorist financing, or similar financial crimes; evasion of lawful restrictions or sanctions; and any other activity that violates applicable law.

We may cooperate with law enforcement where required by law or where we believe it's necessary to protect individuals from serious harm.

6. Spam and platform abuse

You must not abuse or misuse the platform. This includes sending spam, bulk messages, or unsolicited commercial content; creating fake accounts or impersonating individuals or entities; artificially inflating Community member counts or reactions; buying, selling, renting, or trading Fluxer accounts or Communities; abusing our free tier as unlimited cloud storage rather than for legitimate communication; initiating fraudulent chargebacks or payment disputes; and using automation, bots, scrapers, or scripts to evade limits, scrape or harvest data, mass-create accounts, or disrupt normal user experiences.

Limited automation that complies with our policies and applicable law may be allowed where explicitly permitted by Fluxer. All other automated abuse is prohibited.

7. Harmful misinformation

You must not deliberately spread misinformation that is demonstrably false and likely to cause serious harm. This covers misinformation that could endanger public health or safety (for example, false medical "cures" that could lead someone to skip necessary treatment), interfere with democratic processes or civic participation (for example, fabricated election procedures intended to confuse or disenfranchise voters), cause direct physical harm to individuals or communities, or damage critical infrastructure or essential services.

How we assess misinformation. We look at the factual accuracy of the claim, the potential for real-world harm, the intent and context (for example, satire versus deliberate deception), the source credibility, and whether the content was shared in a context where others might reasonably act on it.

What this rule doesn't cover. Personal opinions, political commentary, good-faith debate, satire, speculation clearly labelled as such, and discussion of contested or emerging scientific topics are not prohibited. We don't police opinions or enforce a single viewpoint. This rule targets deliberate falsehoods with the potential for serious, concrete harm, not disagreement, dissent, or unpopular views.

Relationship to gender-affirming healthcare. To be explicit: sharing medical information about gender-affirming care that aligns with established medical consensus is not misinformation. Advocacy for access to gender-affirming healthcare is not misinformation. Personal accounts of gender-affirming care experiences are not misinformation. See Section 2.2 for our full policy on this topic.

8. Privacy violations

You must not violate the privacy rights of other users. This includes doxxing (addressed in Section 1.3); recording voice or video communications without consent where the law requires the consent of all participants; circumventing, bypassing, or trying to defeat privacy settings, user blocks, or safety features; engaging in stalking, surveillance, or invasive monitoring of users on or off the platform in connection with their use of Fluxer; and sharing screenshots or recordings of private conversations without the consent of all parties, where doing so could cause harm or was done with the intent to harass.

When in doubt about whether something violates someone's privacy, err on the side of caution and don't share it.

9. Deceptive AI-generated and manipulated content

You must not use AI-generated or digitally manipulated content to deceive, defraud, or harm others. This covers deepfakes or synthetic media depicting real people in situations they didn't participate in, without their consent; AI-generated content designed to impersonate a real person for fraud, harassment, or manipulation; synthetic or manipulated media presented as authentic evidence of events that didn't occur; and using AI-generated content to get around other rules in these guidelines (for example, generating hateful imagery, CSAM, or misinformation).

What is permitted. AI-generated creative, artistic, satirical, or clearly fictional content is permitted when it isn't used to deceive, harass, or target individuals, and doesn't violate other rules. Where AI-generated content could reasonably be mistaken for authentic material, we strongly encourage users to label it as AI-generated or manipulated. Unlabelled AI content that causes harm or confusion may be treated more seriously.

Reporting violations

If you see content or behaviour that appears to violate these guidelines or our Terms of Service, please report it. You can use the in-app reporting features available throughout the platform, or email our safety team at safety@fluxer.app.

When reporting, please include relevant screenshots or message excerpts; direct links to the content, message, or Community; user IDs or usernames; and a brief description of what's happening and why it concerns you.

Don't engage in vigilante justice. Don't harass, threaten, or doxx someone in response to their violations. Report the issue to us and let our moderation team handle it. Retaliatory harassment is itself a violation of these guidelines, even if directed at someone who broke the rules first.

We may not always be able to share the outcome of our review with you, but we review all reports in good faith. Reports involving imminent danger to life, child sexual exploitation, or credible threats of serious violence are treated as highest priority.

Trusted flaggers

We give priority to reports submitted by entities designated as trusted flaggers under Article 22 of the EU Digital Services Act, and we process and decide on those reports without undue delay. If you're a designated trusted flagger, please contact legal@fluxer.app.

Enforcement

What actions we may take

When we identify violations, we may take one or more of the following actions depending on severity, context, and risk: issuing informal or formal warnings; removing or restricting access to violating content; temporarily limiting or disabling specific features (for example, messaging or community creation); temporarily suspending account access; permanently banning accounts; restricting the ability to create, own, or manage Communities; deleting Communities that repeatedly or seriously violate these guidelines; restricting access to cosmetic items, premium services, or subscriptions; and reporting illegal content or serious threats to law enforcement or relevant organisations.

How we decide

When deciding on enforcement, we consider the severity of the violation and the actual or potential harm; the intent of the user (malicious, negligent, or accidental); the user's prior history of violations or warnings; the risk of future harm if no action is taken; whether the content affects minors or vulnerable individuals; whether applicable law requires us to act in a particular way; and any mitigating context (for example, a genuine misunderstanding, immediate self-correction, or cooperation).

As a general principle, we start with less severe measures (warnings, content removal, temporary restrictions) for minor or first-time violations. We escalate for repeat offences or failure to comply with warnings. We may act immediately and permanently for egregious violations, including child sexual exploitation or CSAM; credible threats of serious violence; large-scale or clearly malicious abuse, fraud, or hacking; and terrorism or violent extremism content.

Automated and human moderation. Automated tools may flag content or behaviour for review, but enforcement decisions are made by humans, with a few limited exceptions. Known CSAM hashes are automatically and immediately blocked during upload (content is rejected and never delivered). Automated spam and abuse defences may temporarily block actions pending review. Regional access restrictions based on IP geolocation operate automatically as described in our Privacy Policy.
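As a simplified illustration of what blocking at upload means in practice, here is a minimal sketch in Python. Everything in it is an assumption for illustration: real systems match perceptual hashes from vetted industry lists rather than plain SHA-256 digests, and this is not Fluxer's actual pipeline.

    import hashlib

    # Hypothetical set of known-bad digests, populated from a vetted hash list.
    KNOWN_BLOCKED_HASHES: set[str] = set()

    def handle_upload(data: bytes) -> bool:
        """Return True if the upload is accepted for delivery."""
        digest = hashlib.sha256(data).hexdigest()
        if digest in KNOWN_BLOCKED_HASHES:
            # A match is rejected before delivery and escalated for the
            # mandatory reporting described in Section 4.1.
            queue_for_reporting(digest)
            return False
        return True

    def queue_for_reporting(digest: str) -> None:
        # Placeholder for the reporting step; a real implementation would
        # file a report with the relevant authority.
        print("upload blocked; digest queued for reporting:", digest[:12])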

Statement of reasons

When we take enforcement action, we'll give you a clear statement of reasons that includes the specific guideline, term, or legal ground for the action; the facts and circumstances we relied on (including relevant content or identifiers where appropriate); whether the decision was made or assisted by automated means; and information about redress options, including how to appeal and, for users in the EU, the option to refer disputes to a certified out-of-court dispute settlement body.

This applies to all enforcement actions except where providing such information would compromise an investigation, endanger safety, or conflict with legal obligations.

Appeals

If you believe we got it wrong, you can appeal.

How to appeal. Send an email to appeals@fluxer.app from the email address associated with your Fluxer account. State clearly which enforcement action you're appealing (for example, "7-day suspension on [date]" or "Community deletion"), explain why you believe the decision was incorrect, incomplete, or disproportionate, and include any relevant context or evidence.

Process. We can only process appeals submitted from the email associated with the affected account. Please submit one appeal per enforcement action; multiple submissions about the same decision won't speed things up. Submit your appeal within 60 days of receiving the enforcement notice. Temporary enforcement actions generally stay in place during review. We aim to respond promptly, though response times vary depending on volume and complexity.

After review, our decision on the appeal is generally final. However, we may revisit past decisions if new, material information comes to light or if we update our policies in relevant ways. If a complaint shows that the content wasn't illegal and didn't violate these guidelines or our terms, we'll reverse our decision without undue delay.

Out-of-court dispute settlement (EU)

If you're in the European Union and aren't satisfied with the outcome of our appeals process, you can refer the dispute to a certified out-of-court dispute settlement body under Article 21 of the EU Digital Services Act. You can find a list of certified bodies through the Digital Services Coordinator in your Member State. We'll engage in good faith with any certified body you select.

Special considerations

For younger users

Users must meet the Minimum Age to use Fluxer as described in our Terms of Service and Privacy Policy. This is typically 13 but may be higher in some countries.

We may turn on enhanced safety features by default for users we identify as under 18, such as stricter privacy defaults or restricted access to certain features. Certain content or Communities may be restricted based on age. Communities focused on dating or romantic relationships between minors, or that sexualise minors in any way, are strictly prohibited. We keep a close eye on the safety of underage users.

If you're under 18, please be especially careful about sharing personal information, and don't meet people from Fluxer in person without involving a trusted adult.

For Community Owners

If you own, create, or administer a Community, you're responsible for the content and behaviour within it, including user-generated content and moderation practices. Use the tools available to keep your Community safe, including moderation roles, content controls, and age gates. Set clear, visible rules that align with these guidelines and enforce them fairly. You can set stricter rules, but never more permissive ones.

Failure to moderate or address serious, repeated violations can result in restrictions on your Community, removal of your Community, or enforcement action against your account. If you're unsure how to handle a safety issue, report it to us or contact safety@fluxer.app.

For parents and guardians

We provide safety resources and guidance on our website to help parents and guardians understand Fluxer and support young users. If you have concerns about your teenager's account, get in touch with our support team. We may need to verify your relationship before discussing a specific account. If you believe a child is in immediate danger, contact local emergency services first, then let us know.

Self-harm and crisis content

We care about the wellbeing of every person on Fluxer. Our approach to self-harm content comes from a place of compassion, not punishment.

What's prohibited. You must not glorify, encourage, promote, or provide specific instructions or methods for self-harm, suicide, or eating disorders. You must not pressure or dare anyone to harm themselves, create or participate in content that gamifies or challenges self-harm, or share graphic imagery of self-harm.

What's allowed. Supportive, empathetic conversations about mental health are welcome. This includes sharing personal experiences with mental health challenges in a supportive context, seeking or offering emotional support, discussing recovery and coping strategies, and sharing information about professional resources.

Our approach. We don't punish people for saying they're struggling. If we identify content suggesting someone may be at imminent risk, our priority is connecting them with support. We may display interstitial screens linking to crisis resources, place content warnings on distressing messages, and in urgent cases, take action to help ensure the person's safety.

If you see someone in crisis, report the content via in-app tools or email safety@fluxer.app. If you know the person and can safely do so, encourage them to seek professional support or contact emergency services.

Crisis resources. Fluxer is not a substitute for professional mental health care or emergency services. If you or someone you know is struggling, here are some places to turn.

Internationally, Befrienders Worldwide (befrienders.org) operates crisis centres in over 40 countries.
In Sweden, Mind (mind.se) can be reached on 90101, and BRIS (for children and young people) on 116 111.
In the United States, the 988 Suicide & Crisis Lifeline is available by calling or texting 988.
In the United Kingdom, Samaritans can be reached on 116 123 (free, 24/7) or at samaritans.org.
In many EU countries, emotional support is available on 116 123.
Crisis Text Line is available by texting HOME to 741741 (US), 85258 (UK), 686868 (Canada), or 50808 (Ireland).

Transparency reporting

As a micro enterprise under the EU Digital Services Act, we're currently exempt from the transparency reporting obligations in Article 15. We plan to publish voluntary transparency reports as the platform grows. These will cover content moderation activities and the types of actions taken, the use of automated tools in content moderation and their accuracy, complaints received and their outcomes, and orders received from authorities and our responses. When published, reports will be available on our website and cover the preceding calendar year.

Changes to these guidelines

We may update these guidelines as new features are introduced, community norms evolve, or laws change. If we make significant changes, we'll provide at least 30 days' notice where reasonably practicable and keep a changelog for reference. When updated alongside significant changes to our Terms of Service or Privacy Policy, we'll ask you to confirm that you've reviewed and agree. If you don't agree, you can delete your account.

Contact

General questions: support@fluxer.app
Safety concerns: safety@fluxer.app
Appeals: appeals@fluxer.app

If you're unsure whether something violates these guidelines, you can ask our support or safety teams for guidance.

Law enforcement requests

We recognise that law enforcement may need information from us in some circumstances. For details, see the "Law enforcement and legal requests" section of our Privacy Policy.

Lawful process and urgent preservation requests should go to legal@fluxer.app. Requests must identify the requesting authority, legal basis, and specific data requested. We may let affected users know when the law permits it. We may reject or narrow overbroad or non-compliant requests.

Where we receive a removal order under the EU Terrorism Content Online Regulation (Regulation (EU) 2021/784), we'll remove or disable access to the content within one hour as required.

Safety and crisis resources

If you or someone else is in immediate danger, contact your local emergency services first.

For urgent safety concerns on Fluxer (threats, self-harm indications, or serious harassment), use in-app reporting tools or email safety@fluxer.app with as much detail as possible.

Fluxer can't provide medical, psychological, or legal advice. We'll do our best to respond to safety reports promptly and, where appropriate, may work with relevant services or authorities in line with applicable law.

Final thoughts

The vast majority of Fluxer users engage with the platform responsibly and never run into any issues with these guidelines. If you treat others with respect, use good judgement, and remember that there's a real person on the other side of every interaction, you're very unlikely to face enforcement action.

Fluxer is for everyone, regardless of who you are, who you love, how you identify, where you come from, or what you believe. We're committed to building a platform where every person can communicate safely, express themselves freely, and find community.

Thank you for helping us keep Fluxer safe, welcoming, and enjoyable.

Translations

All translations are currently LLM-generated with minimal human revision. We'd love help from real people to localise Fluxer into your language. If you'd like to contribute, email i18n@fluxer.app and we'll be happy to accept your contributions.