Community Guidelines
Clear standards for using Fluxer, protecting others, and keeping communities safe.
Last updated: March 10, 2026
Effective date: March 10, 2026
What these guidelines are for
Fluxer exists to help people communicate, connect, and build communities. These guidelines describe the standards we hold ourselves and every user to. They form part of our Terms of Service, and violations may result in enforcement action as described below.
These guidelines are specific by design. Vague rules lead to inconsistent enforcement and erode trust, so where a rule could be read more than one way, the resolution belongs here rather than with individual moderators. Where context matters, the weighting is explained.
The guidelines apply to every user, in every space on Fluxer, without exception. That includes direct messages, Community channels, voice and video chats, user profiles, statuses, custom emojis, usernames, bios, and any other area where users interact or share content.
Individual Communities may adopt additional rules that are stricter than these guidelines, but never more permissive. Where there is a conflict, these guidelines and our Terms of Service take precedence.
The basic rule
Treat others with respect and consideration. Behind every username is a real person who deserves basic dignity.
If you would not want something said or done to you, do not say or do it to someone else. When in doubt, choose kindness.
Building a good community
Fluxer is for everyone, including people from marginalised and underrepresented communities who are often made to feel unwelcome elsewhere. Here is how you can help keep it that way.
Assume good intent. When something is unclear, ask for clarification before reacting. Misunderstandings happen, especially across languages and cultures.
Respect identity. Use the names, pronouns, and terms that people use for themselves. If you are unsure of someone's pronouns, it is always fine to ask respectfully or to use their username or display name.
Use content warnings and age markings. If you discuss potentially distressing, graphic, or adult topics, label them clearly and place them in age-appropriate spaces.
Set clear community rules. If you run a Community, make your rules clear, accessible, and easy to find. Align them with these guidelines and local law, and enforce them fairly and consistently.
Help new members. Welcome newcomers and help them understand Community rules. If someone slips up, avoid dogpiling or retaliation; use moderation tools instead.
Protect privacy. Share only what is needed, and be cautious about exposing personal information, yours or anyone else's.
Disagree constructively. Challenge ideas, not people. Disagreement is healthy. Personal attacks, harassment, and demeaning behaviour are not.
Report, do not retaliate. If you see behaviour that is dangerous, abusive, or clearly in breach of these guidelines, report it rather than amplify it. Retaliatory harassment is also a violation.
Prohibited conduct
The following conduct is prohibited on Fluxer. Each section explains the rule, gives examples, identifies exceptions, and describes how borderline cases are assessed.
1. Harassment and bullying
You must not harass, bully, or threaten any person or group.
1.1 Sustained or targeted harassment. Repeated hostile, degrading, or intimidating behaviour directed at a specific person or group, including following someone across Communities or channels to continue unwanted interactions.
1.2 Threats. Direct or implied threats of violence, harm, or other adverse action against any person, including conditional threats ("if you do not do X, I will do Y").
1.3 Doxxing. Sharing or threatening to share someone's personal or identifying information without their explicit consent. This covers real names, home addresses, workplaces, phone numbers, email addresses, financial information, government-issued identification, and any information that could be used to locate, contact, or identify someone against their will. It applies regardless of whether the information is technically "public". Aggregating and weaponising public information is still doxxing.
1.4 Unwanted contact. Continuing to contact, message, or interact with someone after they have clearly asked you to stop, or after you have been blocked or removed.
1.5 Sexual harassment. Unwanted sexual comments, advances, innuendo, requests for sexual content, or sexually explicit messages directed at someone who has not consented to receive them.
1.6 Pile-ons and coordinated attacks. Organising, encouraging, or participating in coordinated attacks, pile-ons, or mass harassment against an individual or group, whether on Fluxer or by directing others to harass someone on another platform.
1.7 Encouraging harm. Encouraging, inciting, or instructing others to engage in harassment or harmful behaviour towards a specific person or group.
How harassment cases are assessed. We consider frequency, duration, severity, power dynamics, whether the target asked the person to stop, whether the behaviour forms part of a pattern, and the impact on the target's ability to use the platform safely.
2. Hate speech and discrimination
You must not attack, demean, dehumanise, or incite hatred or violence against people based on protected characteristics.
Protected characteristics. The following characteristics are explicitly protected on Fluxer: race, ethnicity, colour, national origin, or ancestry; immigration or citizenship status; caste; religion, faith, or lack of religion; sex; gender, gender identity, or gender expression; sexual orientation; sex characteristics, including intersex status; disability, chronic illness, or medical condition; neurodivergence; age or generational status; pregnancy or parental status; veteran or military status; socioeconomic status or housing status; and physical appearance, including body size.
This list is intentionally broad. Other characteristics may also be protected where the context makes clear that someone is being targeted for who they are.
How hate speech is categorised. Hate speech is sorted into three severity tiers, and each tier carries a different enforcement response.
Tier 1: dehumanisation and incitement (most severe). This covers content that dehumanises people (comparing them to animals, insects, diseases, filth, subhuman entities, or objects), calls for violence, killing, or physical harm against a protected group, calls for exclusion, segregation, or denial of fundamental rights, or denies or celebrates well-documented atrocities or genocides targeting a protected group. Tier 1 content is removed immediately and typically results in account suspension or termination.
Tier 2: statements of inferiority, contempt, and stereotyping. This covers content that asserts members of a protected group are inherently inferior, less intelligent, morally deficient, or otherwise lesser; that generalises negative stereotypes as inherent traits of a group; that expresses contempt, disgust, or hatred towards a group as a whole; or that uses imagery, memes, or symbols historically associated with hatred of a protected group in a celebratory or affirming way. Tier 2 content is removed and typically results in a warning for first-time violations, escalating for repeat offences.
Tier 3: slurs, exclusion, and demeaning language. This covers content that uses slurs or derogatory terms targeting protected groups, calls for exclusion from Communities based on protected characteristics (unless the Community's purpose requires it, for example a women's support group may limit membership), or mocks, ridicules, or demeans someone specifically because of a protected characteristic. Tier 3 content is assessed contextually. Self-referential use of reclaimed language by members of the relevant group is generally permitted (see Exceptions below).
2.1 LGBTQ+ specific protections. Fluxer is and will remain a safe and affirming platform for lesbian, gay, bisexual, transgender, queer, intersex, asexual, and all other gender and sexual minority (LGBTQ+) users. The following are prohibited as forms of hate speech.
Targeted misgendering and deadnaming. Deliberately and repeatedly referring to a person by a gender, name, or pronouns that do not align with their gender identity, after being informed of, or having reasonable access to, their correct name or pronouns. This includes using someone's birth name ("deadname") against their wishes to harass, demean, or invalidate their identity. This rule targets deliberate, repeated behaviour. Honest mistakes happen; correcting yourself when informed is not a violation.
Denial of identity. Content that denies the existence or validity of transgender, nonbinary, intersex, or other gender identities, or that denies the existence or validity of sexual orientations, when directed at or about specific individuals or used to advocate for discrimination. This includes claims that being transgender is a mental illness, a delusion, or a choice that can be "cured".
Conversion therapy advocacy. Promoting, advertising, or providing instructions for conversion therapy or any programme, practice, or intervention that attempts to change a person's sexual orientation, gender identity, or gender expression. This prohibition extends to content that frames conversion practices as legitimate medical treatment, spiritual guidance, or parental responsibility.
Sexualisation and fetishisation. Reducing LGBTQ+ people to their sexual orientation or gender identity in a degrading or objectifying way, or treating LGBTQ+ identities as inherently sexual, deviant, or predatory.
Outing. Revealing or threatening to reveal someone's sexual orientation, gender identity, or intersex status without their explicit consent.
2.2 Gender-affirming healthcare discussions. Access to gender-affirming healthcare matters greatly to transgender, nonbinary, and intersex people. There is a clear line between protected discussion and prohibited content.
Allowed content includes personal experiences with gender-affirming care (hormone therapy, surgery, and other treatments), peer support, resources, and discussion of healthcare options. Sharing medical information consistent with the consensus of major medical organisations, such as the World Health Organization, the American Medical Association, the Endocrine Society, and the World Professional Association for Transgender Health, is also allowed. So is advocating for healthcare access, insurance coverage, or policy changes, coming-out discussion and identity exploration, and discussion of detransition in a personal, supportive, or informational context.
Prohibited content includes promoting conversion therapy or practices designed to change someone's sexual orientation or gender identity, deliberately spreading medical misinformation that contradicts established scientific and medical consensus to deny transgender identities or discourage evidence-based care, using concern about healthcare as a pretext to deny, mock, or undermine transgender identities, and targeting people who have shared healthcare experiences with harassment or ridicule.
How borderline cases are assessed. Good-faith discussion of healthcare policy, medical research, individual experiences, including critical ones, and evolving scientific understanding is permitted. We distinguish genuine engagement with complex topics from bad-faith efforts to delegitimise transgender people or deny them healthcare. We look at whether the content engages with evidence in good faith, targets people or communities with hostility, uses medical or scientific framing as a pretext for harassment or identity denial, and fits the wider context.
2.3 Exceptions to hate speech rules. The following are generally not treated as violations.
Self-referential use of reclaimed language. Members of a group may use reclaimed terms (including slurs) to refer to themselves or within their community. This is assessed contextually: use of reclaimed language in a space primarily composed of that community is treated differently from use directed at strangers.
Academic, educational, and documentary content. Discussion of hate speech, discrimination, and historical atrocities in academic, educational, journalistic, or documentary contexts is permitted when the purpose is to inform, educate, analyse, or condemn. The content must not promote hatred and should include appropriate framing.
Counter-speech. Calling out, criticising, or arguing against hateful content or ideologies is protected speech. Quoting hateful content in order to condemn it is not itself a violation.
Satire and commentary. Clearly satirical content that critiques power structures, ideologies, or prejudice may be permitted. Satire that targets marginalised groups rather than critiquing prejudice against them is not protected by this exception.
3. Violence and graphic content
You must not share or promote real-world graphic depictions of violence, gore, mutilation, or animal cruelty (photographs, videos, or realistic recordings); content that promotes, encourages, glorifies, or provides instructions for self-harm, suicide, or harm to others; detailed instructions or encouragement for violence or illegal activity; or content that glorifies, celebrates, or promotes violence, violent extremism, or terrorism.
Scope. This restriction targets real-world media. Media presented as real-world, even if generated, edited, or manipulated, is treated the same as actual real-world footage. Fictional or artistic depictions of violence (drawings, animation, game content, horror) are permitted in age-gated spaces with clear content warnings, provided they are not presented as real-world footage, are not used to glorify real-world violence or target a specific person, and are not so extreme as to have no purpose other than shock.
Contextual allowances. Non-graphic discussion of difficult topics is permitted in appropriate contexts, such as news, education, and historical analysis. Such content must include clear content warnings, be restricted to age-gated spaces when likely to be distressing, and must not glorify or encourage the violence being discussed.
3a. Terrorism and violent extremism
Fluxer must not be used to promote, support, recruit for, or coordinate terrorism or violent extremism. This covers recruitment, incitement, material support, propaganda, manifestos, instructional materials, glorification of terrorist attacks or mass violence, and coordination, planning, or operational activity related to terrorism or violent extremism.
EU Terrorism Content Online Regulation. Where we receive a removal order from a competent authority under Regulation (EU) 2021/784, we will remove or disable access to the identified content within one hour of receiving the order, as required. Content falling under this section may also be reported to relevant law enforcement authorities where required or permitted by law. Removed content is preserved for six months for law enforcement purposes, as the regulation requires.
Exceptions. Legitimate news reporting, academic research, counter-extremism education, historical analysis, and artistic expression are not prohibited, provided they do not themselves glorify or promote the acts described above.
4. Sexual content and protection of minors
Zero tolerance. Child sexual exploitation is prohibited in any form. The rules in this section are among the most strictly enforced on the platform.
4.1 Child sexual abuse material (CSAM). CSAM, meaning sexual or sexually suggestive imagery depicting real children, is strictly prohibited. This includes realistic AI-generated or digitally manipulated imagery that is indistinguishable from photographs of real children. Violations result in immediate and permanent account termination with no appeal, and are reported to law enforcement or relevant authorities as required by law.
4.2 Sexualisation of real minors. No user may share, distribute, request, or create sexual or sexually suggestive content depicting a real, identified minor in any medium, including text, imagery, audio, or AI-generated content. This applies regardless of the relationship between the user and the minor.
4.3 Fictional depictions of minors. Sexual or sexually suggestive content featuring fictional characters who are explicitly described as minors, or who are unambiguously depicted as prepubescent, is prohibited in all spaces, without exception. This includes drawn, animated, AI-generated, and written content where the character is clearly a child. Fictional content is assessed based on the totality of context: stated age, narrative framing, visual presentation, and the setting in which the character appears. This rule does not apply to non-sexual coming-of-age narratives, survivor stories, educational content, or literary works that depict difficult subject matter without sexualising it.
4.4 Grooming. Using the platform to build a relationship with a minor for the purpose of sexual exploitation is strictly prohibited, regardless of whether explicit content is involved. Grooming behaviours include building inappropriate emotional intimacy with a minor, attempting to isolate a minor from trusted adults or support systems, gradually introducing sexual topics or content to a minor, requesting personal information, photos, or private communication from a minor in a sexualised context, and offering gifts, money, or special treatment to a minor in exchange for personal information or intimate interaction.
4.5 Users under 18. If you are under 18, you must not engage with, share, or distribute any sexual or sexually suggestive content on the platform, including in age-gated spaces.
4.6 Adult content. Sexual and explicit content involving adults is permitted only in clearly marked 18+ spaces. Communities must apply an age restriction to the Community as a whole, to individual channels, or both. Communities that fail to enforce these requirements may be restricted or removed. Community Owners are responsible for proper age gating.
4.6a Direct messages. Direct messages between adults are not subject to the "clearly marked 18+ spaces" requirement. Adult users may share explicit content with other adult users in DMs, and can individually control what content they wish to see via their settings. However, if it becomes apparent to the participants that one party is a minor, any further sexually explicit content sent within that conversation will be treated as a violation of these guidelines. All other rules, including those on consent (Section 1.5), non-consensual intimate media (Section 4.7), and sexual exploitation (Section 4.8), still apply in full.
4.7 Non-consensual intimate media. Sharing intimate images, videos, or recordings of any person without their explicit consent is strictly prohibited. This includes "deepfakes" and AI-generated or digitally manipulated content that depicts someone in an intimate context without their permission, "revenge porn" and sexually explicit content shared to shame, coerce, or harm someone, voyeuristic content captured without the subject's knowledge or consent, and threatening to share intimate content to coerce, blackmail, or intimidate.
4.8 Sexual exploitation. Using Fluxer to facilitate sexual exploitation of any person, including sex trafficking, coerced sexual labour, or commercial sexual exploitation of minors, is strictly prohibited and will be reported to law enforcement.
5. Illegal activities
You must not use Fluxer to facilitate, promote, or engage in illegal activity. This includes malware or harmful software; fraud, scams, or deceptive practices, including phishing, impersonation, and financial scams; illegal goods, services, or controlled substances; copyright infringement or other intellectual property violations at scale or in a clearly abusive manner; hacking, unauthorised access, or cyberattacks; money laundering, terrorist financing, or similar financial crimes; evasion of lawful restrictions or sanctions; and any other activity that violates applicable law.
We may cooperate with law enforcement where required by law, or where necessary to protect individuals from serious harm.
6. Spam and platform abuse
You must not abuse or misuse the platform. This includes spam, bulk messages, unsolicited commercial content, fake accounts, impersonation, artificial Community member counts or reactions, buying or selling Fluxer accounts or Communities, abusing the free tier as unlimited cloud storage, fraudulent chargebacks or payment disputes, and automation used to evade limits, scrape or harvest data, mass-create accounts, or disrupt normal use.
Limited automation that complies with our policies and applicable law may be allowed where explicitly permitted by Fluxer. All other automated abuse is prohibited.
7. Harmful misinformation
You must not deliberately spread misinformation that is demonstrably false and likely to cause serious harm. This covers misinformation that could endanger public health or safety, interfere with democratic processes or civic participation, cause direct physical harm to individuals or communities, or damage critical infrastructure or services.
How misinformation is assessed. We look at factual accuracy, potential real-world harm, intent and context, source credibility, and whether the content was shared where others might reasonably act on it.
What this rule does not cover. Personal opinions, political commentary, good-faith debate, satire, speculation clearly labelled as such, and discussion of contested or emerging scientific topics are not prohibited. We do not police opinions or enforce a single viewpoint. This rule targets deliberate falsehoods with the potential for serious, concrete harm; it does not restrict disagreement, dissent, or unpopular views.
Relationship to gender-affirming healthcare. To be explicit: sharing medical information about gender-affirming care that aligns with established medical consensus is not misinformation. Advocacy for access to gender-affirming healthcare is not misinformation. Personal accounts of gender-affirming care experiences are not misinformation. See Section 2.2 for the full policy on this topic.
8. Privacy violations
You must not violate the privacy rights of other users. This includes doxxing (addressed in Section 1.3), recording voice or video communications without consent where consent is legally required, trying to defeat privacy settings, user blocks, or safety features, stalking or invasive monitoring connected to someone's use of Fluxer, and sharing screenshots or recordings of private conversations without consent where doing so could cause harm or was done to harass.
When in doubt about whether something violates someone's privacy, err on the side of caution and do not share it.
9. Deceptive AI-generated and manipulated content
You must not use AI-generated or digitally manipulated content to deceive, defraud, or harm others. This covers deepfakes or synthetic media depicting real people without consent, AI-generated impersonation for fraud or harassment, manipulated media presented as authentic evidence of events that did not occur, and using AI-generated content to get around other rules in these guidelines.
What is permitted. AI-generated creative, artistic, satirical, or clearly fictional content is permitted when it is not used to deceive, harass, or target people, and does not violate other rules. Where AI-generated content could reasonably be mistaken for authentic material, labelling it as AI-generated or manipulated is strongly encouraged. Unlabelled AI content that causes harm or confusion may be treated more seriously.
Reporting violations
If you see content or behaviour that appears to violate these guidelines or our Terms of Service, please report it. In-app reporting features are available throughout the platform, or email our safety team at safety@fluxer.app.
When reporting, please include relevant screenshots or message excerpts, direct links, user IDs or usernames, and a brief description of what is happening and why it concerns you.
Do not engage in vigilante responses. Do not harass, threaten, or dox someone in response to their violations. Report the issue and let our moderation team handle it. Retaliatory harassment is itself a violation of these guidelines, even if directed at someone who broke the rules first.
We may not always be able to share the outcome of a review with you, but all reports are reviewed in good faith. Reports involving imminent danger to life, child sexual exploitation, or credible threats of serious violence are treated as highest priority.
Trusted flaggers
Priority is given to reports submitted by entities designated as trusted flaggers under Article 22 of the EU Digital Services Act. If you are a designated trusted flagger, contact legal@fluxer.app.
Enforcement
What actions may be taken
When we identify violations, one or more of the following actions may follow, depending on severity, context, and risk: warnings, removal or restriction of violating content, temporary feature limits, temporary suspension, permanent account bans, limits on creating or managing Communities, deletion of Communities that repeatedly or seriously violate these guidelines, restriction of cosmetic items, premium services, or subscriptions, and reporting of illegal content or serious threats to law enforcement or relevant organisations.
How decisions are made
When deciding on enforcement, we consider severity, actual or potential harm, intent, prior violations or warnings, the risk of future harm, whether minors or vulnerable people are affected, whether law requires a particular action, and mitigating context such as a genuine misunderstanding, immediate self-correction, or cooperation.
As a general principle, enforcement starts with less severe measures for minor or first-time violations, such as warnings, content removal, or temporary restrictions. Escalation follows for repeat offences or failure to comply with warnings. Immediate and permanent action may follow for egregious violations, including child sexual exploitation or CSAM, credible threats of serious violence, large-scale or clearly malicious abuse, fraud, hacking, and terrorism or violent extremism content.
Automated and human moderation. Automated tools may flag content or behaviour for review, but enforcement decisions are made by humans, with a few limited exceptions. Automated spam and abuse defences may temporarily block actions pending review. Regional access restrictions based on IP geolocation operate automatically, as described in our Privacy Policy.
Statement of reasons
When enforcement action is taken, you will receive a clear and specific statement of reasons that includes the guideline, term, or legal ground for the action, the facts relied on, whether automated means were used, and redress options, including how to appeal and, for users in the EU, the option to refer disputes to a certified out-of-court dispute settlement body.
This applies to all enforcement actions except where providing such information would compromise an investigation, endanger safety, or conflict with legal obligations.
Appeals
If you believe an enforcement decision was incorrect, you can appeal.
How to appeal. Send an email to appeals@fluxer.app from the email address associated with your Fluxer account. State which enforcement action you are appealing, explain why you believe the decision was incorrect, incomplete, or disproportionate, and include any relevant context or evidence.
Process. Appeals can only be processed when submitted from the email associated with the affected account. One appeal per enforcement action, please; multiple submissions about the same decision will not speed things up. Submit your appeal within 60 days of receiving the enforcement notice. Temporary enforcement actions generally remain in place during review. Responses come as promptly as volume and complexity allow.
After review, the appeal decision is generally final. Past decisions may be revisited if new, material information comes to light, or if we update our policies in relevant ways. If a complaint shows that content was not illegal and did not violate these guidelines or our terms, the decision is reversed without undue delay.
Out-of-court dispute settlement (EU)
If you are in the European Union and are not satisfied with the outcome of our appeals process, you can refer the dispute to a certified out-of-court dispute settlement body under Article 21 of the EU Digital Services Act. A list of certified bodies is available through the Digital Services Coordinator in your Member State. We will engage in good faith with any certified body you select.
Special considerations
For younger users
Users must meet the Minimum Age to use Fluxer, as described in our Terms of Service and Privacy Policy. The Minimum Age is generally 13, though some countries set it higher.
Stricter safety features may be enabled by default for users identified as under 18, including tighter privacy defaults and restricted access to certain features. Some content or Communities may be restricted based on age. Communities focused on dating or romantic relationships between minors, or that sexualise minors in any way, are strictly prohibited. The safety of underage users is a priority.
If you are under 18, be particularly careful about sharing personal information, and do not meet people from Fluxer in person without involving a trusted adult.
For Community Owners
If you own, create, or administer a Community, you are responsible for the content and behaviour within it, including user-generated content and moderation practices. Use the available tools to keep your Community safe, including moderation roles, content controls, and age gates. Set clear, visible rules that align with these guidelines, and enforce them fairly. You can set stricter rules, but never more permissive ones.
Failure to moderate or address serious, repeated violations can result in restrictions on your Community, removal of your Community, or enforcement action against your account. If you are unsure how to handle a safety issue, report it to us or contact safety@fluxer.app.
For parents and guardians
Safety resources and guidance on our website help parents and guardians understand Fluxer and support young users. If you have concerns about your teenager's account, contact our support team. We may need to verify your relationship before discussing a specific account. If you believe a child is in immediate danger, contact local emergency services first, then let us know.
Self-harm and crisis content
The wellbeing of every person on Fluxer matters. Our approach to self-harm content is built around compassion and support rather than punishment.
What is prohibited. You must not glorify, encourage, promote, or provide specific instructions or methods for self-harm, suicide, or eating disorders. You must not pressure or dare anyone to harm themselves, create or participate in content that gamifies or challenges self-harm, or share graphic imagery of self-harm.
What is allowed. Supportive, empathetic conversations about mental health are welcome. That includes personal experiences shared in a supportive context, emotional support, recovery and coping strategies, and information about professional resources.
How the rule is applied. Users are not punished for saying they are struggling. If content suggests someone may be at imminent risk, the priority is connecting them with support. Interstitial screens may link to crisis resources, content warnings may be placed on distressing messages, and in urgent cases, steps may be taken to help keep the person safe.
If you see someone in crisis, report the content via in-app tools or email safety@fluxer.app. If you know the person and can safely do so, encourage them to seek professional support or contact emergency services.
Crisis resources. Fluxer is not a substitute for professional mental health care or emergency services. If you or someone you know is struggling, here are some places to turn.
Internationally, Befrienders Worldwide (befrienders.org) operates crisis centres in over 40 countries. In Sweden, Mind (mind.se) can be reached on 90101, and BRIS (for children and young people) on 116 111. In the United States, the 988 Suicide & Crisis Lifeline is available by calling or texting 988. In the United Kingdom, Samaritans can be reached on 116 123 (free, 24/7) or at samaritans.org. In the EU, many countries offer emotional support at 116 123. Crisis Text Line is available by texting HOME to 741741 (US), 85258 (UK), 686868 (Canada), or 50808 (Ireland).
Transparency reporting
As a micro-enterprise under the EU Digital Services Act, we are currently exempt from the transparency reporting obligations in Article 15. Voluntary transparency reports are planned as the platform grows. These will cover content moderation activities, action types, automated moderation tools and their accuracy, complaints and outcomes, orders from authorities, and our responses. When published, reports will be available on our website and cover the preceding calendar year.
Changes to these guidelines
These guidelines may be updated as new features are introduced, community norms evolve, or laws change. Material changes come with at least 30 days' notice where reasonably practicable, and a changelog is maintained for reference. When updated alongside changes to our Terms of Service or Privacy Policy, you will be asked to confirm that you have reviewed and accepted the changes. If you do not agree, you can delete your account.
Contact
General questions: support@fluxer.app
Safety concerns: safety@fluxer.app
Appeals: appeals@fluxer.app
If you are unsure whether something violates these guidelines, our support or safety teams can help.
Law enforcement requests
Law enforcement may need information from us in some circumstances. For details, see the "Law enforcement and legal requests" section of our Privacy Policy.
Lawful process and urgent preservation requests should go to legal@fluxer.app. Requests must identify the requesting authority, the legal basis, and the specific data requested. Affected users may be notified where the law permits. Overbroad or non-compliant requests may be rejected or narrowed.
Where a removal order under the EU Terrorism Content Online Regulation (Regulation (EU) 2021/784) is received, we will remove or disable access to the content within one hour as required.
Safety and crisis resources
If you or someone else is in immediate danger, contact your local emergency services first.
For urgent safety concerns on Fluxer, such as threats, self-harm indications, or serious harassment, use in-app reporting tools or email safety@fluxer.app with as much detail as possible.
Fluxer cannot provide medical, psychological, or legal advice. We respond to safety reports as promptly as possible and, where appropriate, work with relevant services or authorities in line with applicable law.
A final note
Most Fluxer users use the platform responsibly and never run into issues with these guidelines. If you treat others with respect, use good judgement, and remember there is a real person on the other side of every interaction, you are very unlikely to face enforcement action.
Fluxer is for everyone, regardless of who you are, who you love, how you identify, where you come from, or what you believe. We want it to be a place where everyone can communicate safely, speak freely, and find community.
Thank you for helping keep Fluxer safe and welcoming.