NEAR House of Stake — Code of Conduct (CoC) draft for community review

Hello NEAR Community,

We’re sharing the House of Stake (HoS) Code of Conduct (CoC) draft for rigorous review and improvement by the NEAR community. Please note that the overall Co-Creation Process we are using is described here; it is key to understanding how we are facilitating a legitimate, community-owned process.

This document is divided into the following sections. Each has a clear purpose, so you can jump to what you need or read end-to-end for full context.

I) Context — Why HoS needs a CoC now, how it fits NEAR’s governance journey, and the scope it covers (on-chain, off-chain, and events).

II) House of Stake Code of Conduct v0.1.0 — The operative, legal-like numbered clauses that define behaviors, processes, and enforcement.

III) Ratification — How community sentiment translates into on-chain confirmation so the CoC becomes effective.

IV) Operationalizing — The immediate next steps after ratification (council setup, SOPs, guidance materials, transparency cadence).

V) Methodology — A succinct recap of the research, synthesis, and prior drafts that led to this version.

VI) Quick Start — Exactly how to engage right now: where to comment, how to propose edits, and how to participate in the poll.


I) Context: Why a CoC for House of Stake and how Hack Humanity fits in

The House of Stake (HoS) is the next governance layer for NEAR, designed to be simple, evolvable, and accountable. While stake-weighted decision-making provides legitimacy, a clear CoC is essential to complement it. It sets shared norms, provides due process, and ensures transparent enforcement for both on-chain and off-chain activities.

This CoC builds on existing norms and formalizes them into an enforceable process. Hack Humanity’s role in this is to help establish and operationalise safe, professional, and inclusive standards for community events and interfaces, connecting those norms to our ongoing governance conduct.


II) House of Stake Code of Conduct v0.1.0

Version: v0.1.0
Audience: NEAR Community
Purpose: Gather community-wide feedback as part of Co-Creation Cycle 1

1. Our Pledge

1.1 We, as members, contributors, delegates, moderators, stewards, and other participants of the House of Stake (HoS), pledge to create a governance environment where participation is safe, inclusive, and transparent.

1.2 Commitments:

1.2.1 Act with professionalism, integrity, and respect in all spaces.
1.2.2 Align behavior with NEAR’s long-term interests and ecosystem health.
1.2.3 Protect privacy, safety, and data integrity.
1.2.4 Use technology, including AI, in an ethical, transparent, and accountable way.

1.3 Applicability:

This pledge applies to on-chain decisions, off-chain forums, events, and public representation of HoS.


2. Purpose & Scope

2.1 Purpose:

Ensure a healthy culture of productive participation in achieving House of Stake’s Mission.

2.2 Scope of Application:

2.2.1 On-chain: including but not limited to proposal submission, delegate voting, treasury allocation, and multisig participation.
2.2.2 Off-chain: including but not limited to governance forums, Discord, Telegram, GitHub, social media, and community calls.
2.2.3 Community & Events: including but not limited to workshops, hackathons, AMAs, partnerships, and DAO-to-DAO representation.

2.3 Definitions:

2.3.1 Token-holders: participants with stake or voting rights.
2.3.2 Delegates: participants acting with proxied voting authority.
2.3.3 Moderators: individuals tasked with managing discussion, intake, assessment and enforcement.
2.3.4 Stewards: elected or appointed roles in HoS committees, councils or working groups (including the CoC Appeals Panel).
2.3.5 Contributors: developers, writers, organizers, and others engaged in HoS activities.

2.4 Appointment of Stewards

Stewards are currently appointed by the NEAR Foundation, until such time as that authority can be granted to House of Stake.


3. Values & Standards

3.1 Agreed Behaviors

3.1.1 Act in good faith and perform due diligence before voting or advising.
3.1.2 Make your best effort to resolve disputes or issues privately or with a moderator instead of escalating to public channels.
3.1.3 Disclose conflicts of interest, according to the Conflict of Interest Policy.
3.1.4 Provide clear rationales for governance actions.
3.1.5 Communicate with respect, inclusivity, and professionalism.
3.1.6 Protect the privacy, dignity, and safety of community members.
3.1.7 Collaborate transparently; document decisions; support iterative improvement.

3.2 Prohibited Behaviors

3.2.1 Harassment, bullying, stalking, or identity-based abuse.
3.2.2 Plagiarism, falsification, or misrepresentation of work.
3.2.3 Vote-buying, bribery, or covert influence.
3.2.4 Failure to disclose conflicts of interest.
3.2.5 Doxxing, privacy violations, or unauthorized data exposure.
3.2.6 Spamming, shilling, brigading, disinformation, or sabotage.
3.2.7 Making unsubstantiated public accusations against any HoS participant, contributor, or program — including on external platforms (social media, podcasts, media, or other public forums) — without first seeking clarification or following reporting channels.

3.3 Good Practice Example

A delegate suspects irregularities in a funding decision. They first request clarification privately from the relevant working group, then file a report through the official HoS intake form with supporting evidence.

3.4 Bad Practice Example

A contributor tweets that a House of Stake program is “stealing funds” without evidence, instead of using reporting channels. The claim is later deleted, but reputational harm has already occurred.


4. Confidentiality & Financial Independence

4.1 Agreed Behaviors

4.1.1 Respect confidentiality and uphold privacy in all processes.
4.1.2 Maintain independence in decision-making; proactively disclose financial or personal interests when relevant.

4.2 Prohibited Behaviors

4.2.1 Disclosing personal information without explicit consent. This includes contact details, physical location, financial data, wallet addresses, or any information that could enable identification, coercion, or reputational harm.
4.2.2 Accepting undisclosed compensation or benefits in relation to governance actions.

4.3 Good Practice Example:

Challenging the value for money of a particular piece of work, based on substantiated evidence.

4.4 Bad Practice Example:

A member speculates publicly about another’s earnings to undermine their credibility.


5. Work Quality, Pace, and Feedback

5.1 Agreed Behaviors

5.1.1 Encourage timely contributions while respecting diverse work rhythms.
5.1.2 Provide feedback that is constructive, specific, balanced, and respectful.
5.1.3 Recognize and credit the efforts of others.
5.1.4 Foster a safe, professional, and supportive environment.
5.1.5 Assess ideas, work and deliverables based on the arguments and evidence that support them, not personal attacks targeting the character, identity, or unrelated attributes of a member.
5.1.6 Provide appropriate feedback based on the stage a piece of work is at.
5.1.7 Give people a fair chance, space and time to do the work and do it well.

5.2 Prohibited Behaviors

5.2.1 Dismissing contributions with superficial or derogatory remarks.
5.2.2 Making baseless criticism without representative evidence.
5.2.3 Undue or hostile pressure to conform to arbitrary work pace or rhythms. Constructive encouragement is acceptable.
5.2.4 Generalized criticism without constructive intent.
5.2.5 Any pressure, speculation, or unconstructive criticism that harms collaboration.
5.2.6 Toxic or hostile criticism disguised as urgency.

5.3 Good Practice Example:

A reviewer highlights strengths and specific improvements with constructive feedback and actionable suggestions.

5.4 Bad Practice Example:

A member mocks another as “lazy” or “too slow” without understanding the size, complexity, nature, dependencies, or review processes a piece of work may need to go through to be done.


6. Reporting & Intake

6.1 Anyone who experiences or witnesses a potential violation is encouraged to report it as described below.
6.2 Moderators will also proactively monitor for violations and process them on behalf of the community.

6.3 Reporting Channels (to be set up)

6.3.1 Confidential Code of Conduct complaint form with option to submit anonymously (official HoS portal).
6.3.2 Email: coc@houseofstake.org (alternative submission if needed)
6.3.3 Direct contact with the current Community & Moderation team at events or in community channels or calls.

6.4 Intake & Triage

6.4.1 Acknowledgement of the complaint by the Community and Moderation team, including an explanation of what action they will take.
6.4.2 Urgency assessment within 48 hours to address immediate risks to safety or governance integrity.
6.4.3 Confidential handling; reporter identities protected where possible.
6.4.4 Abuse of process (e.g., repeated malicious or false reports) is itself a violation and will be detected and addressed.

6.5 Good Practice Example:

A member reports a prohibited behavior with timestamps and supporting evidence.

6.6 Bad Practice Example:

A member files repeated false reports to harass another participant.


7. Moderation Standards

7.1 Impartiality: moderators must have no conflicts of interest.
7.2 Cultural and linguistic competence: include moderators who understand the parties’ context.
7.3 Documentation: maintain secure records, a clear evidence trail, and access controls.
7.4 Timeliness: target resolution within 14 days; document and communicate extensions.
7.5 AI oversight: AI tools may assist with triage or pattern detection; humans make final decisions.
7.6 Evidence standards: use verifiable records (e.g., logs, messages, transactions) and note limitations.

7.7 Good Practice Example:

Assign moderators from outside the immediate dispute to ensure impartiality.

7.8 Bad Practice Example:

Allowing a conflicted delegate to oversee a case involving their own committee.


8. Enforcement & Remedies

8.1 Principles: proportionality, predictability, and restoration where feasible.

8.2 Feedback

8.2.1 Observation: first, minor or potential violation.
8.2.2 Consequence: private or public feedback, at Moderator’s discretion.
8.2.3 Repair: acknowledgement, clarification, improvement in behaviour.

8.3 Warning

8.3.1 Observation: feedback ignored or serious violation
8.3.2 Consequence: private or public written notice with requested changes.
8.3.3 Repair: apology, acknowledgement, or clarification.

8.4 Temporary Restriction

8.4.1 Observation: repeated or significant violation
8.4.2 Consequence: time-bound restriction or suspension from channels or roles.
8.4.3 Repair: reflection, mediation and a plan for corrective steps with conditions for return defined.

8.5 Permanent Ban

8.5.1 Observation: severe violation undermining safety or governance integrity or legitimacy
8.5.2 Consequence: removal from all governance spaces (on-chain and off-chain) to the greatest extent possible.
8.5.3 Repair: not applicable; reserved for irreparable breaches of trust.

8.6 Proportionality Factors

Moderators will exercise judgement on the level of remedies based on intent, impact, prior history, cooperation, and community safety.


9. Appeals Process

9.1 Appeals Panel: at least 3 independent members, rotating annually; no conflicts of interest.
9.2 Criteria: temporary restrictions and permanent bans can be appealed based upon new evidence, a claim of misinterpreted evidence, procedural error or disproportionate sanctions.
9.3 Timeframe: submit within 14 days; decision within 30 days.
9.4 Submission: encrypted form or direct email to the Panel’s published contact.
9.5 Finality: Panel decisions are binding, subject to community ratification in exceptional cases.

9.6 Good Practice Example:

A sanctioned member submits new logs that change the assessment; sanction reduced.

9.7 Bad Practice Example:

Multiple frivolous appeals filed to delay enforcement.


10. Risk Disclosures & Limitations

10.1 Enforcement capacity depends on moderator resources and jurisdictional constraints.
10.2 On-chain actions may be irreversible; remedies cannot fully counteract immutability.
10.3 This CoC complements applicable law; it does not replace legal rights or obligations.
10.4 Jurisdictional differences may require tailored measures while upholding core principles.


11. Transparency & Governance Oversight

11.1 All reports, evidence, decisions, feedback, and enforcement actions are logged in an auditable but privacy-preserving way.
11.2 Annual reports summarize cases, categories, timelines, outcomes, and reforms (respecting privacy where required).
11.3 Committees overseeing this CoC maintain a public change log and explain major policy updates.
11.4 The moderation team discloses their affiliations, incentives, and responsibilities to reduce conflicts of interest.


12. Contact & Amendments

12.1 Contact: info@houseofstake.org.
12.2 Amendments: updates follow a public notice and versioning process with a “Last Updated” date.
12.3 Effective Date: this CoC takes effect upon community ratification and remains in force until amended.

End of the CoC policy text.


Jump now to VI) Quick Start — Exactly how to engage right now: where to comment, how to propose edits, and how to participate in the poll.

Or read on for more about the methodology being applied…


III) Co-Creation Process - House of Stake

Legitimacy in decentralized ecosystems is earned through openness, inclusivity, and shared ownership—but we must balance that against limited stakeholder attention. Our approach is an expanding-circles model that starts lean, then adds more representation and methodological rigor only if broad agreement is missing.

Cycle 1:

  • Ingest historical materials, transcripts, sticky notes, forum drafts.
  • Convene a smallest viable group to produce a first draft.
  • Purpose: get a concrete, discussable artifact fast—with minimal cost.
  • Share the draft in the easiest way to get broad feedback.
  • Legitimacy test: stakeholder feedback & temperature check. If strong support → ratify/use; if insufficient support → expand.

Cycle 2:

  • Iterate on the CoC policy text to address cycle 1 feedback
  • Expand the drafting group to include broader representation.
  • Apply and showcase a deeper, reproducible methodology.
  • Share the draft in a more structured way, i.e., in a GitHub repo.
  • Re-share for full community feedback. If strong support → ratify/use; if gaps/divisions persist → expand again.

Cycle 3 (if necessary):

  • Structure the drafting body to be legitimately representative of all key stakeholder groups.
  • Use defensible and reproducible methods end-to-end.
  • Re-share and proceed to ratification once thresholds are met.

The principle:

Only invest more work (and cost) if legitimacy demands it. This balances efficiency (don’t over-invest when consensus is present) with legitimacy (scale participation when it isn’t).


Legitimacy & Ratification — GitHub-Native Process

This section explains how to propose edits to the CoC using GitHub and how it will be ratified. It’s intentionally lightweight at first and scales only if legitimacy tests aren’t met.

Draft process for Cycle 2

1) Canonical Source & Versioning

  • Canonical file: The CoC [ADD LINK WHEN POSTED IN GITHUB] lives in this repository as a Markdown file; the repo is the source of truth.
  • Branches & tags
    • Working draft branch: coc/v1.1-draft
    • Release candidates (RC): tags v1.1-rc.1, v1.1-rc.2, …
    • Effective release: tag v1.1 and branch coc/v1.1
  • Change log: CHANGELOG.md summarizes every merged change (date, author, section, rationale, links to PR/issue).
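The branch-and-tag layout above can be sketched with plain git commands. This is illustrative only: it runs against a throwaway local repo, and the file names and commit are assumptions, not the actual canonical repo.

```shell
# Illustrative sketch of the draft's branch/tag layout in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "coc@example.org"   # hypothetical identity for the demo
git config user.name "CoC Demo"
printf '# Code of Conduct\n' > CODE_OF_CONDUCT.md   # assumed canonical file name
printf '# Changelog\n' > CHANGELOG.md
git add . && git commit -qm "seed canonical CoC file"
git branch coc/v1.1-draft     # working draft branch
git tag v1.1-rc.1             # first release candidate
git tag v1.1                  # effective release tag
git branch coc/v1.1           # effective release branch
git tag --list 'v1.1*'        # lists the release tags
```

In practice only maintainers would cut the `v1.1-rc.X` and `v1.1` tags; contributors branch from `coc/v1.1-draft`.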

2) How to Propose Edits (GitHub)

Use one topic per PR and reference numbered clauses (e.g., 3.2.1 Harassment).

A. Minor/Patch edits (typos, formatting, small clarifications)

  1. Fork or create a branch from coc/v1.1-draft.
  2. Make your change(s) directly in the CoC file.
  3. Open a PR to coc/v1.1-draft with title:
    CoC: <section-number> <short description> [minor]
  4. Fill the PR checklist (below).

B. Major edits (scope, sanctions ladder, appeals, COI definitions)

  1. Open an Issue first describing the problem and proposed solution.
  2. After community discussion, submit a PR linked to that Issue with title:
    CoC: <section-number> <short description> [major]
  3. Expect focused review (see triage & SLAs).
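The PR title convention for both tracks can be composed mechanically. A small sketch follows; the clause number and description are hypothetical examples, not real pending edits.

```shell
# Compose a PR title per the convention: CoC: <section-number> <short description> [minor|major]
section="3.2.1"                        # hypothetical clause being edited
desc="clarify harassment wording"      # hypothetical short description
change_type="major"                    # "minor" for typos/formatting, "major" for scope changes
title="CoC: ${section} ${desc} [${change_type}]"
echo "$title"   # prints: CoC: 3.2.1 clarify harassment wording [major]
```

A consistent title format lets maintainers filter and batch PRs during the weekly merge window.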

PR Checklist (copy-paste into your PR description)

  • Clause(s) touched (e.g., 5.2.1, 7.3)
  • Change type: Minor / Major
  • Proposed text (exact redline or replacement)
  • Rationale (1–3 sentences; risks if not adopted)
  • Evidence (links to precedent, policy, or data)
  • Conflicts of interest to disclose (if any)
  • I agree to keep discussion constructive and in-scope

Style guardrails

  • Keep legal-like numbering intact (1, 1.1, 1.1.1).
  • Prefer concrete, enforceable language over vague adjectives.

3) Triage, Review & Release Candidates

  • Acknowledgment SLA: within 72 hours on Issues/PRs.
  • Labeling: major-change, minor-edit, clarification, policy-risk, needs-discussion, ready-to-merge.
  • Weekly merge window: maintainers batch-merge accepted minor edits; major edits require at least 2 reviewer approvals (one delegate + one maintainer).
  • Release candidates: Once a meaningful set of changes is accepted, maintainers cut a tag v1.1-rc.X and post a short summary. If there are no blocking objections after the discussion window, the RC moves forward to ratification.

4) Legitimacy Tests (before ratification)

Advance to ratification when ALL are true:

  1. Sentiment: clear support in the forum thread (using a poll).
  2. Representation floor: 5 comments from at least two stakeholder groups (e.g., delegates, moderators/builders/community).
  3. No unresolved material objection without a written maintainer response and next step.

If these are not met, expand outreach, run a focused workshop, or iterate another RC (rc.+1) before re-testing.


5) Ratification Mechanics — Community sentiment → On-chain action

Community sentiment (off-chain poll)

  • Hosted in the forum post.
  • Participation floor: N ≥ 5 unique forum accounts or prior-defined active contributors.
  • Pass threshold: ≥ 60% “Yes”.
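As a sanity check, the two poll conditions above (participation floor and pass threshold) can be expressed as a short script. The vote counts here are made up purely for illustration.

```shell
# Check the off-chain poll against the stated thresholds:
# participation floor N >= 5 unique accounts, pass threshold >= 60% Yes.
yes_votes=12       # illustrative count, not real poll data
total_votes=18     # illustrative count
floor=5
if [ "$total_votes" -ge "$floor" ] \
   && [ $((yes_votes * 100)) -ge $((total_votes * 60)) ]; then
  result="pass"
else
  result="fail"
fi
echo "$result"
```

With 12 of 18 votes in favour (about 67%), both conditions hold and the poll passes; 10 of 18 (about 56%) would fail the 60% threshold.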

On-chain confirmation (or signed delegate statement)

  • HoS-based governance

Effective Date

  • Upon on-chain confirmation, the CoC becomes v1.1 (Effective). The effective text is tagged v1.1 and its commit hash is recorded in the forum thread.

6) Transparency & Records

  • Keep all Issues/PRs public by default.

  • Publish the Decision Record in the forum: what changed since the previous draft, how material objections were handled, poll results, and the commit hash of v1.1.


IV) Operationalizing the CoC

Following ratification, we will finalize the operational specifics over the next few days. In the meantime, the structure below sets out exactly what we will stand up and document so the Code of Conduct can function from day one.

Draft process post-Ratification

1) CoC Council — Structure to be finalized

We will publish concise definitions and templates covering:

  • Size & composition: number of seats, diversity goals, and stakeholder coverage.
  • Nomination format: how to nominate (fields required), where to submit, and the review window.
  • Eligibility & conflicts: baseline qualifications, disclosure requirements, and recusal rules.
  • Selection: decision method, participation thresholds, and tie-break procedures.
  • Term & renewal: term length, renewals, mid-term vacancy fills, and removal for cause.
  • Operating rules: quorum, decision-making method, emergency actions, record-keeping.
  • Roles: chair/convener, case manager(s), clerk/records, alternates for recusals.

2) Education & User Guidance — “How to Report / How to Appeal”

We will publish a short, plain-language guide (posted in the forum and linked from the repo) that includes:

  • Where to report: reporting channels and when to use each.
  • What to expect: acknowledgment timelines, privacy/confidentiality, and typical resolution windows.
  • Appeal steps: who reviews appeals, grounds for appeal, timelines, and re-entry conditions.
  • Templates: copy-paste forms for reports and appeals.

3) Timeline

  • Week 1: Once ratification has happened, open nominations for the CoC Council.

  • Weeks 2–4: Nominate & confirm Council; run orientation for Council, moderators, and delegates.

  • Weeks 5–6: Begin handling initial cases under SLAs; publish the first monthly transparency note.

  • Week 8: Post a brief “lessons learned” update and propose any minor policy patches via PR.



V) Methodology: Research and documentation

We’ve conducted extensive research, reviewing CoCs from other DAOs, open-source projects, and communities. We’ve also mapped these against institutional governance standards and distilled the most effective practices for HoS.

Our work included:

  • A best-practices brief that outlines key principles like clarity, inclusivity, transparency, and participation.
  • A comparative matrix that evaluated CoCs from communities like Uniswap, Arbitrum, Optimism, and Django.
  • An evidence-to-policy mapping that shows how each section of our CoC is based on this research.

We also looked at norms that are already working, such as the NEAR forum rules, the Uniswap “civil forum” ethos, and the clear enforcement guidelines found in the Contributor Covenant.

Methodology: How we synthesized this information

We used a dual-anchor approach to translate our research into enforceable policy.

  • Institutional anchors: We referenced guidelines from organizations like UNESCO and the World Bank to ensure our CoC upholds standards of procedural justice, accountability, and transparency.
  • Practice/literature anchors: We drew from sources on topics like AI governance to ensure the CoC enables accountable self-governance and can integrate with future tooling.

This methodology helped us translate abstract principles into concrete sections covering everything from definitions and scope to investigations and appeals.

The research is composed of the following elements:

  1. Scoping statement
  2. Best Practices & Risks for CoC
  3. Comparative matrix
  4. Insights from Comparative Analysis of CoCs
  5. Evidence‑to‑Policy Mapping Table
  6. Design rationale
  7. Quality Check Report & Implementation Checklist
  8. Methodology and Limitations

If you want to go through the full research backlog, unfold the Methodology sections below.

1. Scoping Statement


The House of Stake (HoS) is a new governance framework designed to empower NEAR token holders through a transparent, efficient, stake-weighted decision-making system. It replaces the former NEAR Digital Collective (NDC), which has been retired as a past experiment. HoS operates entirely online, built on the NEAR Protocol, and introduces a vote-escrowed token (veNEAR) that rewards long-term commitment and alignment. Governance is conducted both on-chain—via stake-weighted proposals, screening committees and on-chain voting—and off-chain—in forums, chat channels, virtual meetings and project repositories. Key components include a pre-screening committee, a delegate system with aligned incentives, and structured funding sourced from 0.5 % protocol inflation.

Stakeholders

  • Token-holders (veNEAR stakers): Individuals or organisations that lock NEAR into vote-escrow to obtain voting power proportional to their commitment.
  • Delegates and screening committee members: Trusted stakeholders who pre-screen proposals and vote on behalf of others, selected based on competence and alignment.
  • Contributors and working groups: Developers, designers, researchers and community organisers who build projects, provide services, or propose initiatives under HoS.
  • Moderators and stewards: Individuals empowered to facilitate discussions, curate content and enforce community norms across HoS communication channels.
  • Wider community: Users of NEAR-based applications, partners, other DAOs and the general public interacting with HoS spaces.

Scope of the Code of Conduct

The Code of Conduct (CoC) applies to all House of Stake governance and community spaces, including but not limited to:

  • On-chain governance: submission and discussion of proposals, screening committee deliberations, delegate voting, treasury allocations, and any stake-weighted decision conducted through veNEAR (House of Stake Call for Delegates, 2024).
  • Off-chain channels: discussion forums, governance forums, Discord/Telegram/Slack channels, video calls, hackathons, working-group platforms, GitHub/GitLab repositories, and social-media spaces under the stewardship of HoS (NEAR Community Guidelines, 2023).
  • Events and interactions: virtual meet-ups, livestreams, educational workshops and any HoS-branded or NEAR-sponsored spaces (Hack the North, n.d.).
  • External representation: interactions where participants represent HoS to third parties (e.g., other DAOs, media or regulators) (UNESCO, 2023).

The CoC covers behaviour wherever HoS governance or community interactions occur, regardless of medium, and applies equally to public and private channels. Behaviour outside official spaces may be subject to the CoC if it materially affects the safety or integrity of HoS spaces (Global Fund, 2021; UNESCO, 2023).


Underlying Assumptions

  • HoS values alignment between governance decisions and long-term ecosystem sustainability; those with more stake and longer commitment have more influence.
  • Governance must be transparent and accountable: all proposals, votes and delegate activities are recorded on-chain and open to review.
  • Decision-making should be efficient without sacrificing decentralisation: screening committees and delegate systems reduce noise while preserving open participation.
  • Participation should be inclusive and responsive: channels for proposing, discussing and challenging decisions must be accessible to diverse stakeholders, with emphasis on responsiveness to feedback.
  • HoS is subject to the laws of relevant jurisdictions and international human-rights standards; it must respect privacy, data-protection and anti-discrimination laws.
  • The community acknowledges power imbalances (e.g., between high veNEAR stakes and newcomers) and seeks to mitigate them through fair processes, conflict-of-interest disclosures and inclusive design.

This scoping statement clarifies who the CoC covers, the spaces it governs and the assumptions underpinning HoS. It also situates the CoC within the broader vision of HoS: a sustainable, decentralised and user-owned governance system that balances economic alignment, open participation and pragmatic execution.

Further sections will translate these assumptions into concrete governance principles and enforcement mechanisms.


Research Framework Inspired by Governable Spaces and Governance Literature

Nathan Schneider’s Governable Spaces proposes designing online communities in ways that actively enable self-governance rather than replicating “implicit feudalism” (Schneider, 2023). While the book remains a conceptual foundation, the House of Stake extends these ideas by introducing stake-weighted voting and long-term alignment mechanisms. The framework also draws on international governance guidelines—particularly UNESCO’s Guidelines for the Governance of Digital Platforms (UNESCO, 2023)—as well as emerging literature on responsible AI and digital ethics (Gabriel, 2020; Centre for International Governance Innovation, 2021; Transcend, 2024).

The framework summarises key dimensions of democratic and stake-based governance relevant to DAOs and digital communities. Each dimension includes a short description and criteria that can be used to evaluate existing Codes of Conduct and to design a new one for the House of Stake. The accompanying rubric (Table 1) converts these dimensions into measurable indicators aligned with HoS values: Alignment, Transparency, Accountability, Efficiency, Inclusivity, Sustainability and Responsiveness.


Governance Dimensions

1. Legitimacy & Consent

A governable space must derive authority from its participants. Decision-making processes should be co-created, documented and subject to collective consent rather than imposed unilaterally. Modular governance tools should make it possible for members to choose the rules that govern them. UNESCO’s guidelines advise that governance processes should be “open and accessible to all stakeholders” and that checks and balances should be institutionalised (UNESCO, 2023).

Key ideas: participatory design; clear documentation of rights and duties; mechanisms for community ratification of rules; informed consent.

2. Modularity, Expressiveness, Portability & Interoperability

Governance systems should be modular, allowing communities to assemble and adapt governance processes to their needs. This flexibility encourages experimentation and diversity while maintaining coherence. The CoC should allow modules (e.g., decision mechanisms, enforcement processes) to be swapped or updated without rewriting the entire document.

Key ideas: modular governance plugins; ability to import proven processes; cross-DAO interoperability; explicit documentation of each module’s scope and authority.

3. Subsidiarity & Local Control

Decision-making should occur at the most local level possible. Participant-centred systems allow communities to handle harm and conflict contextually, rather than through global, one-size-fits-all policies. UNESCO’s guidelines support this approach by emphasising multi-stakeholder participation (UNESCO, 2023).

Key ideas: decentralised decision-making; local autonomy; context-sensitive moderation; avoidance of excessive centralisation or over-automated enforcement.

4. Representation & Inclusivity

Democratic governance must ensure that power is not monopolised by majorities or large token-holders. Mechanisms should amplify under-represented groups and reduce participation gaps. UNESCO highlights the need to empower users, promote cultural diversity and reduce participation gaps (UNESCO, 2023).

Key ideas: proportional or weighted voting mechanisms; anti-bias safeguards; outreach to under-represented communities; clear membership definitions; translation and accessibility support.

5. Accountability & Feedback

Decision-makers and enforcers are answerable to those affected by their decisions. Processes must include oversight, transparency, and regular review (Transparency International, 2021; Global Fund, 2021).

Key ideas: disclosure of enforcement actions; independent oversight; feedback loops for community evaluation of moderators and leaders; conflict-of-interest policies.

6. Transparency & Information Accessibility

Participants must understand how rules are made, how decisions are reached and what data are collected. Transparent processes build trust and support informed participation (Brookings, 2022).

Key ideas: open publication of governance documents; clear explanation of algorithms and enforcement policies; accessible archives of proposals and decisions; privacy notices explaining data use.

7. Enforcement & Proportionality

Enforcement mechanisms should deter unacceptable behaviour without reproducing authoritarian structures. Enforcement should be proportional, context-aware, and explained clearly.

Key ideas: graduated sanctions; human-in-the-loop moderation; restorative justice options; safeguards against enforcement abuse.

8. Appeals & Procedural Justice

A fair system provides avenues to challenge or appeal decisions. UNESCO notes that governance processes should include checks and balances, which implies the ability to review and correct mistakes (UNESCO, 2023).

Key ideas: independent review bodies; transparent timelines; right to a hearing; clear criteria for overturning decisions; anti-retaliation protections.

9. Restorative Options & Power Imbalances

Communities should adopt restorative justice practices to repair harm and rebuild trust, while mitigating structural power imbalances.

Key ideas: restorative or transformative justice pathways; mediation services; conflict-of-interest disclosure; acknowledgement of power differentials; anti-retaliation clauses.

10. Education, Accessibility & Continuous Improvement

Governance is an evolving practice requiring ongoing education and refinement. UNESCO emphasises that platforms should equip participants with tools for informed engagement (UNESCO, 2023).

Key ideas: onboarding guides; training sessions; documentation updates; scheduled reviews; surveys; versioning and changelog.


Table 1 – Framework Rubric for Evaluating Codes of Conduct

| Dimension | Criteria/Indicators | Purpose |
| --- | --- | --- |
| Legitimacy & Consent | Participatory drafting, community ratification, explicit consent | Ensures rules derive authority from participants rather than unilateral imposition |
| Modularity & Flexibility | Composable modules, clear interfaces, ability to modify modules | Encourages experimentation and adaptation to diverse needs |
| Subsidiarity & Local Control | Delegation to smallest unit, local charters, context-sensitive moderation | Prevents over-centralisation and supports local autonomy |
| Representation & Inclusivity | Proportional representation, accessibility support, outreach to marginalised groups | Guards against dominance by large token-holders and promotes diversity |
| Accountability & Feedback | Public reporting, independent oversight, regular review | Provides mechanisms to hold decision-makers answerable |
| Transparency & Accessibility | Publication of documents and decisions, clear data practices, accessible archives | Builds trust and supports informed participation |
| Enforcement & Proportionality | Graduated sanctions, human review of automated decisions, restorative options | Ensures fair, proportional enforcement |
| Appeals & Procedural Justice | Defined timelines, independent review body, anti-retaliation protections | Provides fair avenues to challenge and correct mistakes |
| Restorative Options & Power Imbalances | Mediation, conflict-of-interest policies, anti-abuse clauses | Encourages healing and addresses structural inequalities |
| Education & Continuous Improvement | Onboarding, training, scheduled reviews, changelog/versioning | Supports informed participation and ongoing refinement |
References for Scoping Statement

Brookings. (2022). Transparency is the best first step towards better digital governance. https://www.brookings.edu/articles/transparency-is-the-best-first-step-towards-better-digital-governance/
Centre for International Governance Innovation. (2021). Algorithms and the control of speech: How platform governance is failing under the weight of AI. https://www.cigionline.org/articles/algorithmic-content-moderation-brings-new-opportunities-and-risks/
Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2
Global Fund. (2021). Code of Conduct for Governance Officials. https://www.theglobalfund.org/media/4293/core_codeofethicalconductforgovernanceofficials_policy_en.pdf
OECD. (2014). Recommendation of the Council on Digital Government Strategies. OECD Publishing. https://www.oecd.org/gov/digital-government/Recommendation-digital-government-strategies.pdf
Schneider, N. (2023). Governable spaces: Democratic design for online communities. University of California Press. https://www.ucpress.edu/book/9780520393950/governable-spaces
Tan, J., Angeris, G., Chitra, T., & Karger, D. (2024). Constitutions of Web3: A comparative study of DAO governance documents. arXiv. https://arxiv.org/pdf/2403.00081v1
Transcend. (2024). Key principles for ethical AI development. https://transcend.io/blog/ai-ethics
Transparency International. (2021). Our Principles. https://www.transparency.org/en/the-organisation/mission-vision-values
UNESCO. (2023). Guidelines for the governance of digital platforms. https://unesdoc.unesco.org/ark:/48223/pf0000387339

2. Best practices and risks for Codes of Conduct in Digital Communities and DAOs


Executive Summary

An effective Code of Conduct for a decentralized governance framework, such as the House of Stake (HoS), must proactively balance core principles of human rights, transparency, and accountability with the technical and social realities of digital governance. As we have observed through a comprehensive literature review, the successful implementation of such a code requires a multi-faceted approach that not only establishes clear rules and processes but also anticipates and mitigates significant risks, particularly those related to automation, bias, and power imbalances. The following analysis presents a synthesized framework of best practices and potential pitfalls to inform the development of a robust and sustainable Code of Conduct for HoS.

Methodology

We employed a research framework derived from Governable Spaces, UNESCO’s guidelines for digital platform governance, and scholarly literature on responsible AI. The scope of our inquiry was limited to peer-reviewed articles, institutional reports, and research from recognized non-governmental organizations (NGOs), with a preference for works published within the last five years. Our review focused on identifying best practices for developing and implementing codes of conduct, as well as common risks and systemic failures. Key sources included the Santa Clara Principles on transparency in content moderation, recommendations from Ranking Digital Rights, UNESCO’s governance guidelines, and academic studies on algorithmic content moderation. The synthesis groups these insights thematically.

Best-Practice Principles

| Principle | Description | Evidence |
| --- | --- | --- |
| Human-rights and due-process orientation | A Code of Conduct should embed international human rights principles, such as freedom of expression and non-discrimination, and ensure procedural fairness in all stages of content moderation and sanctioning. The Santa Clara Principles advocate for integrating human rights and due-process considerations into all moderation processes and for making these processes publicly transparent. Ranking Digital Rights recommends that organizations conduct human-rights due diligence and provide grievance mechanisms that respect user rights. | Santa Clara Principles; Ranking Digital Rights |
| Clear, understandable rules and scope | Rules should be articulated in plain language and include clear examples of permissible and impermissible content. The scope of the code should be explicitly defined to prevent misuse and arbitrary enforcement. | Santa Clara Principles |
| Cultural competence and inclusivity | Moderation and governance bodies should possess a deep understanding of the cultural and social context of the communities they serve. Policies and processes should be available in multiple languages, and moderation teams should reflect the diversity of the user base. | Santa Clara Principles |
| Transparency and accountability | Codes should commit to transparent governance by publishing detailed statistics on content removals, sanctions, and appeals, and by providing clear explanations for all moderation decisions. The Santa Clara Principles require comprehensive reporting on enforcement actions, while Ranking Digital Rights recommends regular transparency reports. | Santa Clara Principles; Ranking Digital Rights |
| Integrity, proportionality and explainability | Moderation systems, whether human or automated, must operate reliably and without bias. The Santa Clara Principles call for regular assessments of algorithmic systems, data sharing on accuracy, and clear explanations of automated decisions. Sanctions should be proportional to the offense. | Santa Clara Principles |
| Participatory and amendable governance | Codes of Conduct should be developed and amended through participatory, multi-stakeholder processes. UNESCO's guidelines emphasize the importance of institutionalized checks and balances and open access for marginalized groups. Research on DAO constitutions recommends that governance documents be digital, accessible, and amendable early in a project's life cycle. | UNESCO; DAO Research Collective |
| Accessible appeals and restorative options | Timely and accessible appeal mechanisms are crucial for procedural justice. The Santa Clara Principles underscore the need for understandable notices and appeals, while Ranking Digital Rights notes that effective grievance mechanisms are essential for human-rights compliance. | Santa Clara Principles; Ranking Digital Rights |
| Education and media literacy | Digital communities should invest in onboarding and continuous education to foster shared norms and promote media and information literacy. UNESCO's guidelines highlight these programs as a means to empower users and reduce participation gaps. | UNESCO |
| Privacy and data protection | Codes must ensure the protection of user data during reporting and investigation. Ranking Digital Rights advocates for strong privacy governance, including data minimization, encryption, and transparency about data use. | Ranking Digital Rights |
| Responsible AI & algorithmic transparency | If AI systems are employed for moderation, the code should commit to transparency, explainability, fairness, and non-discrimination. Ethical AI guidelines emphasize that individuals affected by AI-driven decisions should understand the rationale and that human oversight is essential to mitigate bias. | Transcend |
| Modularity and subsidiarity | Governance should be modular and respect the principle of subsidiarity, with decisions made at the most local and appropriate level. Governable Spaces argues that modular, context-sensitive governance allows communities to adapt and that subsidiarity delegates authority to those most affected by the decisions. | Schneider; UNESCO |

Risks and Pitfalls

| Risk | Description | Evidence |
| --- | --- | --- |
| Vague or overbroad rules enabling abuse | Vaguely defined rules can enable arbitrary enforcement and be weaponized by powerful actors within a community. The Santa Clara Principles warn that opaque policies undermine trust and due process. | Santa Clara Principles |
| Inconsistency and arbitrariness in enforcement | Algorithmic content moderation can lead to inconsistent and arbitrary outcomes for identical content. Research shows that machine learning models can produce conflicting decisions based on random training parameters, thereby undermining procedural justice. | Gómez et al., 2024 |
| Bias and discrimination | Both algorithmic and human moderation can disproportionately affect marginalized groups. Studies have demonstrated disparate impacts across demographics, and the Centre for International Governance Innovation notes that automated moderation, which often relies on keyword filtering, can inadvertently censor marginalized communities while failing to detect more subtle harms. | Gómez et al., 2024; CIGI |
| Opacity and lack of accountability | Many platforms lack transparency regarding moderation decisions, hindering accountability. Predictive multiplicity in AI models makes it challenging to determine which algorithm produced a decision, complicating appeals. The outsourcing of automated moderation can further obscure accountability. | Gómez et al., 2024; CIGI |
| Over-reliance on automation | Over-dependence on AI tools leads to rapid but error-prone moderation. Mandated removal deadlines often compel platforms to automate, increasing the risk of unjustified takedowns. The Centre for International Governance Innovation warns that expanded automated moderation could suppress political dissent. | Gómez et al., 2024; CIGI |
| Weaponization and retaliatory reporting | Without adequate safeguards, reporting systems can be abused to harass opponents or silence dissenting voices. The Santa Clara Principles and UNESCO guidelines caution that broad enforcement powers and insufficient oversight enable abusive or discriminatory enforcement. | UNESCO; Santa Clara Principles |
| Lack of cultural competence | Moderators unfamiliar with the languages and cultural contexts of their communities may misinterpret speech, leading to unjust censorship. Automated tools often struggle with low-resource languages, disproportionately flagging content from these communities. | CIGI |
| Chilling effects and self-censorship | Overly restrictive codes and severe penalties may discourage members from engaging in healthy dissent or expressing controversial opinions. This risk is amplified when appeals processes are inaccessible. While not a primary focus of the reviewed literature, this remains a recurring concern in digital governance discourse. | Inferred from general governance literature and UNESCO's emphasis on human rights |
| Absence of appeals and restorative mechanisms | The lack of accessible appeals and restorative options can lead to unjust penalties and perpetuate exclusion. The Santa Clara Principles emphasize the necessity of meaningful appeal processes to uphold procedural justice. | Santa Clara Principles |
| Power imbalances and centralization | Centralized enforcement structures without sufficient checks and balances can concentrate power in the hands of a few moderators or leaders. UNESCO's guidelines warn that governance frameworks require institutionalized checks and diverse expertise to prevent abuse of power. | UNESCO |
| Data privacy risks | Reporting systems frequently collect sensitive information. Without robust data protection measures, such as encryption and data minimization, the system may expose the personal information of users, thereby putting them at risk. Ranking Digital Rights advocates for strong privacy governance and user control over data. | Ranking Digital Rights |
| Rigid, non-modular policies | Static or overly prescriptive codes are unable to adapt to evolving community norms. Governable Spaces argues that modular, flexible governance better accommodates changing needs and fosters legitimacy, whereas inflexible codes risk becoming obsolete and losing community support. | Schneider; DAO Research Collective |

Conclusion

The findings from this review underscore that effective Codes of Conduct for DAOs and digital communities must be designed with human rights, clarity, inclusivity, transparency, and ethical AI at their core. Best practices, as highlighted by the Santa Clara Principles and Ranking Digital Rights, prioritize human-rights due diligence, clear rules, cultural competence, and accessible appeals. Foundational governance principles from UNESCO and Governable Spaces stress multi-stakeholder participation, subsidiarity, and modularity. Emerging ethical-AI guidelines from Gabriel (2020) and Transcend (2024) likewise highlight the critical need for transparency and explainability in automated systems. Furthermore, academic research on algorithmic content moderation, such as that by Gómez et al. (2024) and the Centre for International Governance Innovation (2021), warns of the inherent risks of bias, arbitrary enforcement, and opacity. These insights should inform the House of Stake's Code of Conduct, ensuring it is not only principled but also transparent, accountable, efficient, inclusive, and responsive.



References for Best Practices and Risks

Centre for International Governance Innovation. (2021). Algorithms and the control of speech: How platform governance is failing under the weight of AI. https://www.cigionline.org/articles/algorithmic-content-moderation-brings-new-opportunities-and-risks/

DAO Research Collective. (2023). Constitutions of Web3: A comparative study of DAO governance documents. https://arxiv.org/pdf/2403.00081

Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2

Gómez, V., Blumenschein, K., & Giampietro, A. (2024). Predictive multiplicity and arbitrariness in content moderation. Journal of Online Trust and Safety. https://arxiv.org/abs/2402.16979

Ranking Digital Rights. (2022). RDR Corporate Accountability Index. https://rankingdigitalrights.org/its-the-business-model/

Santa Clara Principles. (2018). Santa Clara Principles on Transparency and Accountability in Content Moderation. https://santaclaraprinciples.org/

Schneider, N. (2023). Governable spaces: Democratic design for online communities. University of California Press. https://www.ucpress.edu/book/9780520393950/governable-spaces

Transcend. (2024). Key principles for ethical AI development. https://transcend.io/blog/ai-ethics

UNESCO. (2023). Guidelines for the governance of digital platforms. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000387339

3. Comparative Matrix: Existing CoCs Evaluated Against the Governable Spaces Framework


This matrix evaluates DAO, open-source, hackathon, and institutional Codes of Conduct (CoCs) against core governance dimensions drawn from Governable Spaces (Schneider, 2023), Constitutions of Web3 (Tan et al., 2024), and UNESCO (2023). It integrates institutional anchors such as Brookings (2022), Transparency International (2021), and Global Fund (2021).

| Code of Conduct | Legitimacy & Consent | Representation & Inclusivity | Accountability & Feedback | Transparency & Disclosure | Enforcement & Due Process | Restorative & Appeals | Education & Accessibility | Strengths | Weaknesses |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Uniswap DAO (2023) | Medium — RFC posted for feedback | High — clear anti-harassment norms | Medium — COI disclosure encouraged | Medium — rationale visibility | Low — no enforcement ladder | Low — no appeals process | Low — minimal onboarding | Professionalism, COI disclosure | Weak enforcement, no appeals |
| Arbitrum DAO (2023) | Medium — Foundation-led structure | High — inclusive language and tone | Medium — professionalism standards | Medium — transparency on roles | Medium — sanctions defined | Low — no restorative justice | Medium — some cultural awareness | Clear sanctions, tone of inclusion | Appeals system unclear |
| Optimism (2023) | Medium — Council-based experiment | Medium — scoped by grant oversight | Medium — COI handling in grant issues | Medium — Rules of Engagement | Medium — some enforceability via platform | Low — no public appeals system | Low — limited documentation access | Governance minimization, grant focus | Weak procedural transparency |
| Scroll Foundation (2023) | Medium — delegated representation | Medium — standard participation norms | Medium — COI transparency rules | Medium — public responsibilities posted | Low — lacks public enforcement steps | Low — no formal appeals | Low — accessibility not addressed | Delegate professionalism | No enforcement or appeal detail |
| ZK Nation (2023) | Medium — issued by foundation | High — strong anti-discrimination policy | Medium — public values alignment | Medium — publication of CoC | Low — unclear enforcement | Low — no mention of appeals | Low — accessibility not specified | Clear values language | No enforcement roadmap |
| NDC Transparency Commission (2023) | Medium — unclear drafting process | High — multilingual & diverse access | Medium — conflict disclosure encouraged | Medium — partial transparency | Medium — informal sanctions listed | Low — appeals process absent | Medium — some accessibility emphasis | Cultural awareness, diversity | Weak procedural structure |
| Contributor Covenant v3.0 (2023) | Medium — widely used template | High — clear anti-discrimination | High — supports feedback reporting | Medium — commits to openness | High — detailed enforcement ladder | Medium — remediation referenced | Medium — translated versions offered | Excellent behavioral clarity | No attention to power dynamics |
| Django CoC (2023) | Medium — issued by foundation | High — inclusivity & respect focused | Medium — accountability encouraged | Medium — partial transparency | High — structured enforcement ladder | Medium — allows for apologies | Low — limited accessibility tools | Transparent enforcement steps | Weak on multilingual support |
| Creative Commons (2020) | Medium — mission-aligned code | High — inclusion explicitly stated | Medium — professional behavior encouraged | Medium — clarity in guidelines | Medium — enforcement stated but light | Low — no appeals mechanism | Medium — basic accessibility included | Simple and accessible | Lack of enforcement tiers |
| Hack Humanity Hackathon (2025a) | Low — internal and informal | Medium — clear safety language | Low — discretionary enforcement | Low — rules not fully documented | Medium — enforced by organizers | Low — appeal not guaranteed | Low — depends on on-site staff | Safety, no-tolerance policy | No participatory legitimacy |
| ArbGovHack T&C (2025b) | Low — private agreement-based | Medium — respect and fairness stated | Low — no disclosure paths | Low — enforcement is discretionary | Low — rules briefly mentioned | Low — appeal or mediation absent | Low — no multilingual options | Basic integrity language | Weak transparency |

Methodology

The matrix applies the analytical lens of Governable Spaces (Schneider, 2023), Constitutions of Web3 (Tan et al., 2024), and UNESCO's Guidelines for the Governance of Digital Platforms (2023), and compares CoCs against governance principles from institutional anchors such as Brookings (2022), Transparency International (2021), and the Global Fund (2021).

Observations

  1. DAO CoCs show progress in inclusivity and delegate professionalism (e.g., Scroll, ZK Nation), but lag on enforcement, appeals, and procedural transparency—elements central to democratic legitimacy.

  2. Open-source models (e.g., Contributor Covenant, Django) offer strong behavioral clarity and detailed sanction ladders, yet often neglect participatory ratification and power imbalance concerns.

  3. Hackathon codes prioritize physical safety and organizer discretion, but lack due process, appeals, and long-term governance principles. They serve as risk management tools more than community governance frameworks.

  4. Institutional anchoring remains rare in DAO-native codes. Most documents do not cite external governance or rights-based standards, weakening legitimacy in multisectoral contexts.

  5. Transparency and accountability mechanisms—like public statistics, audit trails, and AI explainability—are missing in nearly all CoCs reviewed, despite increased reliance on automation.

  6. Appeals and restorative pathways remain a major gap across all formats. Only open-source communities occasionally mention apology or remediation.

  7. The House of Stake CoC combines the procedural rigor of open-source codes with the participatory intent of DAOs and the safeguards of institutional frameworks, positioning it as a next-generation model.


4. Synthesis Memo – Insights from Comparative Analysis of CoCs


Executive Summary

Across DAO, open‑source, hackathon, and institutional CoCs, we observe strong cultural norms (respect, inclusion) but weak procedural justice (e.g., vague enforcement, lack of appeals). Delegate‑oriented DAO codes (e.g., Uniswap DAO, 2023; Scroll Foundation, 2023) emphasize disclosure and professionalism, while open-source models (Contributor Covenant, 2023; Django Software Foundation, 2023) offer more precise enforcement frameworks. Hackathon codes center on safety and organizer control, but lack legitimacy mechanisms ([Hack Humanity, 2025a]; [Hack Humanity, 2025b]). Institutional anchors—such as UNESCO (2023), Brookings (2022), Transparency International (2021), and Santa Clara Principles (2018)—provide actionable standards for due process, transparency, redress, and accountability.

For HoS: an effective CoC must combine DAO-native legitimacy, open-source enforcement clarity, and institutional due-process frameworks—with a roadmap for automation ethics.

Methodology (Dual‑Anchor)

Institutional Anchors: UNESCO (2023), Brookings (2022), Transparency International (2021), and the Santa Clara Principles (2018).

Literature Anchors: Governable Spaces (Schneider, 2023) and Constitutions of Web3 (Tan et al., 2024).

Comparative Base: the DAO, open-source, hackathon, and institutional CoCs evaluated in the comparative matrix above.

Core Patterns Across Existing Codes

1. Universal Emphasis on Respect & Inclusion

Open-source and DAO CoCs promote harassment-free, inclusive environments (Contributor Covenant, 2023; Schneider, 2023).

2. Weak Enforcement & Procedural Justice

DAO CoCs lack enforcement ladders or appeals mechanisms. Open-source codes provide better tools, but lack legitimacy frameworks (UNESCO, 2023).

3. Transparency & Conflicts of Interest (COI)

Some DAOs lead on COI disclosures (e.g., Uniswap; Scroll), but few include reporting requirements or public statistics (Brookings, 2022).

4. Limited Participatory Legitimacy

Most codes are foundation-issued; few show public input or ratification (Tan et al., 2024).

5. Neglect of AI Governance

Automation is mentioned rarely. Where used, there’s little attention to explainability, auditability, or human oversight (Gabriel, 2020).

Implications for HoS (Design Choices)

Legitimacy & Consent

  • Include a ratification plan, change logs, and public comment cycles.

Enforcement & Due Process

  • Define a graduated enforcement ladder, with evidence thresholds and timeline targets.

  • Embed appeals and cultural competence mechanisms (Django, 2023; UNESCO, 2023).
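The "timeline targets" called for above could be operationalized along these lines. This is a minimal sketch only: the 14-day appeal window and 30-day review target are hypothetical placeholders, not values proposed by the CoC.

```python
from datetime import date, timedelta

# Hypothetical processing windows; actual values would be set in the CoC/SOPs.
APPEAL_WINDOW_DAYS = 14   # time a sanctioned party has to file an appeal
REVIEW_TARGET_DAYS = 30   # target for the review body to decide the appeal

def appeal_deadlines(sanction_date: date) -> dict:
    """Compute the appeal-filing deadline and the review-decision target
    from the date a sanction was issued."""
    filing_deadline = sanction_date + timedelta(days=APPEAL_WINDOW_DAYS)
    return {
        "file_appeal_by": filing_deadline,
        "decision_target": filing_deadline + timedelta(days=REVIEW_TARGET_DAYS),
    }
```

Publishing fixed windows like these makes the process predictable for all parties, which is the procedural-justice point the evidence base emphasizes.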

COI & Transparency

Accessibility & Representation

AI Governance

  • If AI tools are used (e.g., for triage), enforce human-in-the-loop, auditability, and override protocols (Transcend, 2023; CIGI, 2021).

Recommendations

  • Add “Appeals & Remediation” section with clear standing and review steps

  • Publish “Sanctions Ladder” aligned with severity tiers

  • Require COI disclosures and publish aggregated vote rationales

  • Add Transparency Report clause (quarterly or annual anonymized cases)

  • Embed AI Policy appendix (oversight, contestability, audit trails)

  • Translate key sections into 5–10 major NEAR languages

  • Document amendment and ratification cycle
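The Transparency Report recommendation above could, for instance, be backed by a small aggregator that publishes only anonymized counts. All field names here are hypothetical illustrations, not a specified schema.

```python
from collections import Counter

def transparency_report(cases: list[dict]) -> dict:
    """Aggregate closed cases into anonymized quarterly statistics.
    Input items carry only category/outcome labels, never identities."""
    return {
        "total_cases": len(cases),
        "by_category": dict(Counter(c["category"] for c in cases)),
        "by_outcome": dict(Counter(c["outcome"] for c in cases)),
        "appeals_filed": sum(1 for c in cases if c.get("appealed")),
    }
```

Aggregating before publication keeps the report useful for community oversight while honoring the data-minimization principle noted in the risk table below.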

| Risk | Description | Mitigation | Source |
| --- | --- | --- | --- |
| Elite Capture | Stake-weighted votes may entrench incumbents | Rotating roles, term limits, minority appeals | Schneider, 2023 |
| Opacity | Lack of clarity on enforcement undermines legitimacy | Publish processes, issue transparency reports | Santa Clara Principles, 2018 |
| Automation Bias | AI tools lack oversight or transparency | Require human oversight, audits, appeals | Gabriel, 2020 |
| Cultural Blind Spots | Rules may misalign with diverse NEAR communities | Require linguistic/cultural representation | UNESCO, 2023 |
| Overregulation | Complex rules deter engagement | Provide plain-language summaries and visuals | Contributor Covenant, 2023 |
References for Synthesis Memo

Arbitrum DAO. (2023). Arbitrum DAO code of conduct. Arbitrum Foundation Forum. https://forum.arbitrum.foundation/t/the-arbitrum-dao-code-of-conduct/29713

Brookings. (2022). Transparency as the first step to better digital governance. https://www.brookings.edu/articles/transparency-is-the-best-first-step-towards-better-digital-governance/

Centre for International Governance Innovation. (2021). Algorithms and the control of speech: How platform governance is failing under the weight of AI. https://www.cigionline.org/articles/algorithmic-content-moderation-brings-new-opportunities-and-risks/

Contributor Covenant. (2023). Contributor Covenant: A code of conduct for open source projects (Version 3.0). https://www.contributor-covenant.org/version/3/0/code_of_conduct/

Creative Commons. (2020). Creative Commons code of conduct. https://creativecommons.org/code-of-conduct/

Django Software Foundation. (2023). Django community code of conduct: Enforcement manual. https://www.djangoproject.com/conduct/

Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2

Global Fund. (2021). Code of conduct for governance officials. https://www.theglobalfund.org/media/4293/core_codeofethicalconductforgovernanceofficials_policy_en.pdf

Hack Humanity. (2025a). Hack Humanity governance hackathon code of conduct. Personal communication, August 18, 2025.

Hack Humanity. (2025b). ArbGovHack terms & conditions. Personal communication, August 18, 2025.

Optimism Collective. (2023). Code of conduct. Optimism Governance Forum. https://gov.optimism.io/t/code-of-conduct/5751

Ranking Digital Rights. (2022). RDR Corporate Accountability Index. https://rankingdigitalrights.org/its-the-business-model/

Santa Clara Principles. (2018). Santa Clara principles on transparency and accountability in content moderation. https://santaclaraprinciples.org/

Schneider, N. (2023). Governable spaces: Democratic design for online communities. University of California Press. https://www.ucpress.edu/book/9780520393950/governable-spaces

Scroll Foundation. (2023). Delegate & voter code of conduct. https://scroll.io/gov-docs/content/delegate-voter-code-of-conduct

Tan, J., Angeris, G., Chitra, T., & Karger, D. (2024). Constitutions of Web3: A comparative study of DAO governance documents. arXiv. https://arxiv.org/pdf/2403.00081

Transcend. (2023). Key principles for ethical AI development. https://transcend.io/blog/ai-ethics

Transparency International. (2021). Our principles. https://www.transparency.org/en/the-organisation/mission-vision-values

UNESCO. (2023). Guidelines for the governance of digital platforms. https://unesdoc.unesco.org/ark:/48223/pf0000387339

Uniswap DAO. (2023). RFC: Delegate code of conduct. Uniswap Governance Forum. https://gov.uniswap.org/t/rfc-delegate-code-of-conduct/20913

ZKsync Association. (2023). ZK Nation code of conduct. https://docs.zknation.io/zk-nation-community/zk-nation-code-of-conduct


5. Evidence‑to‑Policy Mapping


Purpose

This table serves as a bridge between principles and practice—mapping each section of the House of Stake (HoS) Code of Conduct to:

  1. Governance framework dimensions (e.g. legitimacy, transparency, inclusion)

  2. Supporting research from institutional, academic, and Web3 sources

  3. Precedents from DAO, open-source, and community codes

The goal is to anchor HoS provisions in auditable sources of legitimacy—ensuring each clause is not only normatively grounded, but also pragmatically defensible and informed by ecosystem-wide learning.

This is especially vital in the NEAR ecosystem, where governance innovation must balance on-chain decentralization, off-chain community values, and institutional trustworthiness.

| CoC Section | Framework Dimension(s) | Supporting Research / Evidence | DAO / Community Precedent |
| --- | --- | --- | --- |
| Methodology & Evidence Base | Legitimacy & Transparency. Transparent, reasoned deliberation legitimizes policy. | UNESCO, 2023; [Hack Humanity, 2025b] | Arbitrum DAO, 2023; Uniswap DAO, 2023; Contributor Covenant, 2023 |
| Purpose & Values | Legitimacy, Representation & Inclusivity. Establishes normative baseline. | NEAR Core, 2021; Mozilla Foundation, 2021; [Hack Humanity, 2025a] | Contributor Covenant, 2023; Uniswap DAO, 2023; Arbitrum DAO, 2023 |
| Scope | Modularity & Subsidiarity; Representation. Boundaries enable scaling. | UNESCO, 2023 | NDC Transparency Commission, 2023; Django, 2023; Scroll, 2023 |
| Definitions | Transparency & Accountability. Clarifies roles and scope. | UNESCO, 2023; [Hack Humanity, 2025b] | Contributor Covenant, 2023; Django, 2023; NDC Transparency Commission, 2023 |
| Agreed Behaviors | Representation & Inclusivity; Education. Encourages norms. | Mozilla Foundation, 2021; NEAR Core, 2021; [Hack Humanity, 2025a] | Contributor Covenant, 2023; ZKsync, 2023 |
| Unacceptable Behaviors | Accountability & Enforcement. Establishes guardrails. | CIGI, 2021; UNESCO, 2023; [Hack Humanity, 2025a] | Optimism, 2023; Creative Commons, 2020; Uniswap DAO, 2023 |
| Reporting Channels | Transparency & Accessibility. Trust depends on access. | Django, 2023; UNESCO, 2023; [Hack Humanity, 2025a] | Contributor Covenant, 2023; Django, 2023 |
| Intake & Triage | Accountability & Procedural Justice. Predictable flow. | Django, 2023 | Django, 2023; rare in DAOs |
| Moderation Standards | Procedural Justice & Representation. Fair handling. | UNESCO, 2023; Schneider, 2023; [Hack Humanity, 2025a] | Django, 2023; Optimism, 2023 |
| Sanctions Ladder | Proportionality & Restorative Justice. Tiered consequences. | Contributor Covenant, 2023; [Hack Humanity, 2025a] | ZKsync, 2023; Arbitrum DAO, 2023 |
| Appeals & Review | Legitimacy & Accountability. Enables correction. | UNESCO, 2023; Schneider, 2023 | Django, 2023; Optimism, 2023 |
| Anti-Retaliation | Protection & Inclusivity. Enables safe reporting. | UNESCO, 2023 | Contributor Covenant, 2023; NEAR Core, 2021 |
| Accessibility & Inclusion | Inclusivity & Education. Translation, onboarding. | UNESCO, 2023; [Hack Humanity, 2025a] | ZKsync, 2023; NEAR Foundation, 2023 |
| Cultural & Jurisdictional Awareness | Subsidiarity & Cultural Competence. Local variance. | UNESCO, 2023 | Creative Commons, 2020 |
| Power Imbalances & Conflict of Interest | Equity & Accountability. COI disclosures. | OECD, 2014; Schneider, 2023 | Arbitrum DAO, 2023; Scroll, 2023 |
| Data Protection & Privacy | Privacy & Trust. Protects individuals. | UNESCO, 2023; [Hack Humanity, 2025b] | Uniswap DAO, 2023; NEAR Foundation, 2023 |
| Education & Onboarding | Education & Improvement. Tools and learning. | UNESCO, 2023; Schneider, 2023 | ZKsync, 2023; Django, 2023 |
| Transparency & Reporting | Accountability & Transparency. Public data. | UNESCO, 2023 | Optimism, 2023; Creative Commons, 2020 |
| Governance & Amendments | Legitimacy & Modularity. Rules for change. | UNESCO, 2023; Schneider, 2023; Tan et al., 2024 | Uniswap DAO, 2023; Arbitrum DAO, 2023; Django, 2023 |
| Interoperability | Modularity & Portability. Cross-project synergy. | Schneider, 2023; UNESCO, 2023 | Scroll, 2023; Creative Commons, 2020 |
| Versioning & Changelog | Transparency & Improvement. History of change. | UNESCO, 2023 | Contributor Covenant, 2023; Django, 2023 |
| AI Ethics & Agent Regulation | Accountability & Fairness. AI explainability, oversight. | Gabriel, 2020; Transcend, 2023 | Pioneered in HoS; few DAO precedents |

Conclusion: Insights from the Mapping

Top-Level Insight
The HoS Code of Conduct is well-aligned with global governance standards, drawing from both institutional rigor and DAO-native practices.

Key Takeaways

  • Most CoCs lack procedural depth—HoS adds missing layers: moderation standards, triage, appeals, changelogs.

  • HoS is one of the few frameworks to explicitly integrate AI governance, informed by academic and civic literature.

  • The use of modular, interoperable, and ratifiable design patterns reflects best-in-class institutional + Web3 synthesis.

Strategic Opportunity
HoS can position itself not just as a NEAR-specific standard, but as a model governance instrument for decentralized, community-led systems—setting a precedent for legitimacy, accountability, and evolution-by-design.

References for Evidence‑to‑Policy Mapping

References

Arbitrum DAO. (2023). Arbitrum DAO code of conduct. Arbitrum Foundation Forum. https://forum.arbitrum.foundation/t/the-arbitrum-dao-code-of-conduct/29713

Centre for International Governance Innovation. (2021). Algorithms and the control of speech: How platform governance is failing under the weight of AI. https://www.cigionline.org/articles/algorithmic-content-moderation-brings-new-opportunities-and-risks/

Contributor Covenant. (2023). Contributor Covenant: A code of conduct for open source projects (Version 3.0). https://www.contributor-covenant.org/version/3/0/code_of_conduct/

Creative Commons. (2020). Creative Commons code of conduct. https://creativecommons.org/code-of-conduct/

Django Software Foundation. (2023). Django community code of conduct: Enforcement manual. https://www.djangoproject.com/conduct/

Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2

Hack Humanity. (2025a). Hack Humanity governance hackathon code of conduct. Personal communication, August 18, 2025.

Hack Humanity. (2025b). ArbGovHack terms & conditions. Personal communication, August 18, 2025.

Mozilla Foundation. (2021). Mozilla community participation guidelines. https://www.mozilla.org/en-US/about/governance/policies/participation/

NEAR Core. (2021). Community etiquette. NEAR Governance Forum. https://gov.near.org/t/community-etiquette/4417

NEAR Foundation. (2023). Community guidelines. NEAR Governance Forum. https://gov.near.org/t/community-guidelines/5

NDC Transparency Commission. (2023). NDC code of conduct by transparency commission. NEAR Governance Forum. https://gov.near.org/t/ndc-code-of-conduct-by-transparency-commission/36780

OECD. (2014). Recommendation of the Council on Digital Government Strategies. OECD Publishing. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0406

Optimism Collective. (2023). Code of conduct. Optimism Governance Forum. https://gov.optimism.io/t/code-of-conduct/5751

Schneider, N. (2023). Governable spaces: Democratic design for online communities. University of California Press. https://www.ucpress.edu/book/9780520393950/governable-spaces

Scroll Foundation. (2023). Delegate & voter code of conduct. https://scroll.io/gov-docs/content/delegate-voter-code-of-conduct

Tan, J., Angeris, G., Chitra, T., & Karger, D. (2024). Constitutions of Web3: A comparative study of DAO governance documents. arXiv. https://arxiv.org/pdf/2403.00081

Transcend. (2023). Key principles for ethical AI development. https://transcend.io/blog/ai-ethics

UNESCO. (2023). Guidelines for the governance of digital platforms. https://unesdoc.unesco.org/ark:/48223/pf0000387339

Uniswap DAO. (2023). RFC: Delegate code of conduct. Uniswap Governance Forum. https://gov.uniswap.org/t/rfc-delegate-code-of-conduct/20913

ZKsync Association. (2023). ZK Nation code of conduct. https://docs.zknation.io/zk-nation-community/zk-nation-code-of-conduct


Continued…

6. Design Rationale for the House of Stake Code of Conduct


The HoS Code of Conduct (CoC) is designed as a dual-anchored governance instrument: it fuses institutional due-process standards and DAO/open-source practice to deliver enforceable, legitimate, and culturally competent rules for NEAR’s stake-weighted governance. We privilege clear procedures (intake → investigation → sanctions → appeal), transparency (COI, stats, changelog), and inclusion (language, accessibility, culture), with human-in-the-loop AI safeguards.
Sources: UNESCO, 2023; Brookings, 2022; Transparency International, 2021; Global Fund, 2021; Santa Clara Principles, 2018; Schneider, 2023; Tan et al., 2024; Gabriel, 2020; Transcend, 2023.

Key design theses

  1. Legitimacy = procedure + participation: publish how rules are made, enforced, and revised; enable ratification.

  2. Clarity beats discretion: standardize definitions, intake, investigations, sanction ladder, and appeals.

  3. Transparency is default: COI registry, vote rationales, annual CoC statistics and changelog.

  4. Inclusion is operational: multilingual access, cultural/linguistic competence, accessibility accommodations.

  5. AI augments, not replaces: explainable, auditable, contestable, and always human-overseen.

1) Methodology & Evidence Base

Rationale. A dual-anchor approach—institutional norms + governance scholarship—avoids both technocratic overreach and community parochialism (UNESCO, 2023; Tan et al., 2024).
What HoS implements. Methods section in the draft details sources (UNESCO/Brookings/Global Fund; Schneider/Tan/Gabriel/Transcend/OECD/CIGI) and DAO codes (Uniswap/Arbitrum/Optimism/Scroll/ZKsync), plus NEAR precedents (NEAR Guidelines, NDC CoC, NEAR Community Etiquette).
Trade-offs & mitigation. Academic abstraction vs. operability → we provide concrete playbooks (intake/triage, severity matrix); community bias vs. rigor → institutional anchors and public RFC cycles.

2) Purpose & Values

Rationale. Values orient interpretation across ambiguous cases and unify sub-DAOs: transparency, integrity, inclusion, safety, professionalism, and responsible tech use (Global Fund, 2021; Transparency International, 2021; Schneider, 2023).
In the draft. We framed “Ecosystem-first, Openness, Accessibility, Growth, Professionalism, Safety, Ethical Tech” drawing on NEAR Foundation Guidelines, 2023, NEAR Core, 2021, UNESCO, 2023, and Transcend, 2023.
Trade-offs. Cultural variance → translations, cultural-competence training, and periodic review.

3) Scope & Definitions

Rationale. Scope = where rules apply; definitions = who is bound and how. These reduce arbitrariness and aid portability across forums and on-chain actions (UNESCO, 2023; OECD, 2014).
In the draft. Covered on-chain (proposal, screening, stake-weighted voting, treasury), off-chain (forums, Discord/Telegram, calls, GitHub), events (hackathons/AMAs), with roles (members, delegates, moderators, contractors, screening committee).

4) Agreed & Unacceptable Behaviours

Rationale. Positive norms + clear prohibitions reduce discretion and signaling ambiguity (Mozilla, 2021; Contributor Covenant, 2023; Django, 2023).
In the draft. We list exemplary behaviors (reasoned voting, COI disclosure, respectful comms) and prohibitions (harassment, doxxing, fraud, retaliation). DAO analogs: Uniswap, 2023, Arbitrum, 2023, Optimism, 2023, ZKsync, 2023.
Mitigation. Treat lists as illustrative; rely on severity matrix and precedent.

5) Reporting Channels & Intake

Rationale. Lowering friction increases reporting; standardized intake advances procedural justice (Django, 2023; UNESCO, 2023).
In the draft. Channels: encrypted form, coc@houseofstake.org, and direct contact with moderators at events/calls. Intake standards: acknowledgement of receipt, categorization (harassment, fraud, COI, or operational dispute), urgency assessment, and confidentiality.
Trade-offs. Anonymity vs. verification → gated intake, audit logs.
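
As a purely illustrative sketch of the standardized intake flow described above (the field names, categories, and urgency scale are assumptions for illustration, not part of the draft), the acknowledge-categorize-assess steps could be represented as:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Hypothetical categories mirroring the draft's intake standards:
# harassment / fraud / COI / operational dispute.
class Category(Enum):
    HARASSMENT = "harassment"
    FRAUD = "fraud"
    COI = "conflict_of_interest"
    OPERATIONAL = "operational_dispute"

class Urgency(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class IntakeRecord:
    """One standardized report: acknowledged, categorized, urgency-assessed."""
    summary: str
    category: Category
    urgency: Urgency
    confidential: bool = True  # confidentiality is the default per the draft
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    acknowledged: bool = False

def acknowledge(record: IntakeRecord) -> IntakeRecord:
    # First intake step: confirm receipt to the reporter.
    record.acknowledged = True
    return record

report = acknowledge(
    IntakeRecord("Delegate failed to disclose holdings", Category.COI, Urgency.MEDIUM)
)
assert report.acknowledged and report.confidential
```

The point of such a structure is simply that every report, whatever the channel, lands in the same predictable shape before triage.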

6) Moderation Standards

Rationale. Impartiality, cultural/linguistic competence, secure record-keeping, and timeliness are due-process pillars (UNESCO, 2023; Santa Clara Principles, 2018).
In the draft. Impartial investigators with COI recusals; target 14-day resolution (extensions documented); secure documentation; AI-assisted tooling under human oversight.
Trade-offs. Ambitious timelines → allow justified extensions with transparency.

7) Sanctions Ladder

Rationale. Proportional, predictable consequences deter harm while enabling restoration (UNESCO, 2023; Contributor Covenant, 2023).
In the draft. Warning → moderated participation → temporary suspension → removal, with a severity matrix (intent, impact, history, cooperation). Restorative options (apologies, mediated resolution) when safe.
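
As a minimal sketch of how the severity matrix could feed the ladder (the scoring scale, weights, and thresholds below are invented for illustration and are not values from the draft):

```python
# Ladder from the draft: warning → moderated participation →
# temporary suspension → removal. Scoring is hypothetical.
LADDER = ["warning", "moderated_participation", "temporary_suspension", "removal"]

def sanction_tier(intent: int, impact: int, history: int, cooperation: int) -> str:
    """Each factor scored 0-3; cooperation reduces overall severity."""
    score = intent + impact + history - cooperation
    if score <= 2:
        return LADDER[0]
    if score <= 4:
        return LADDER[1]
    if score <= 6:
        return LADDER[2]
    return LADDER[3]

# A first-time, low-intent incident with full cooperation stays at a warning.
assert sanction_tier(intent=1, impact=2, history=0, cooperation=2) == "warning"
# Repeated, deliberate, high-impact conduct with no cooperation reaches removal.
assert sanction_tier(intent=3, impact=3, history=3, cooperation=0) == "removal"
```

Whatever the actual weights, publishing the function makes outcomes predictable and reviewable on appeal.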

8) Appeals & Review

Rationale. Checks and balances reinforce legitimacy and learning (UNESCO, 2023; Tan et al., 2024).
In the draft. One-round time-bound appeal to an independent panel; decisions summarized (privacy-preserving) in transparency reports.
Trade-offs. Process load → limit scope and set deadlines.

9) Anti-Retaliation

Rationale. Without anti-retaliation rules, reporting chills and harms persist.
In the draft. Explicit prohibition on retaliation against reporters/witnesses; penalties for bad-faith reports.

10) Accessibility & Inclusion

Rationale. Inclusion is operational: languages, accessibility accommodations, cultural competence (UNESCO, 2023; Schneider, 2023).
In the draft. Commitment to translations, plain-language summaries, and diversified panels. NEAR precedents: NEAR Foundation, 2023; NDC CoC, 2023.

11) Cultural & Jurisdictional Awareness

Rationale. Respect for local law and culture supports legitimacy in a global community (UNESCO, 2023).
In the draft. Jurisdiction-aware guidance (e.g., defamation/privacy variance), with modular adoption by sub-communities. Open community analog: Creative Commons, 2020.

12) Power Imbalances & Conflicts of Interest

Rationale. Stake-weight amplifies capture risks; disclosure + recusal are baseline.
In the draft. COI registry, vote rationales; delegates disclose affiliations/holdings—aligned with Uniswap, 2023, Arbitrum, 2023, Scroll, 2023, and integrity norms from Transparency International, 2021.

13) Data Protection & Privacy

Rationale. Data minimization, secure storage, breach transparency, and due-process in data use build trust (UNESCO, 2023; Santa Clara Principles—“Numbers/Notice/Appeal”, 2018).
In the draft. Minimal collection during reports; strict access controls; retention schedule; anonymized case reporting.

14) Education & Onboarding

Rationale. Ongoing training reduces incidents and increases governance quality (UNESCO, 2023; Schneider, 2023).
In the draft. Orientation pack, moderator handbook, and scenario-based examples (e.g., handling COI, harassment, vote-buying).

15) Transparency & Reporting

Rationale. Publishing aggregated metrics and budget/time data drives accountability and learning (Brookings, 2022; Santa Clara Principles, 2018).
In the draft. Annual CoC Transparency Report (cases, outcomes, timelines, COI disclosures), and public changelog. (RDR can inform indicator design: Ranking Digital Rights, 2022.)

16) Governance & Amendments

Rationale. Ratification, amendment cadence, and changelogs operationalize consent and adaptability (Tan et al., 2024; Schneider, 2023).
In the draft. Public RFC, feedback window, snapshot-style ratification vote, and annual review. Precedent: Uniswap/Arbitrum RFCs.

17) Interoperability & Versioning

Rationale. Modularity and open licensing speed cross-DAO learning while preserving attribution (Schneider, 2023; Creative Commons, 2020).
In the draft. Version tags and diffable changelog; portability notes for DAOs adopting the HoS CoC.
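
For illustration only, a version-tagged, diffable changelog could look like the following (the entry structure and version numbers are assumptions, not a format specified by the draft):

```python
# Hypothetical append-only changelog: semantic version tags plus a short,
# diffable list of clause-level changes.
CHANGELOG = [
    {
        "version": "0.1.0",
        "date": "2025-10-01",
        "changes": ["Initial public draft posted for community review."],
    },
    {
        "version": "0.2.0",
        "date": "2025-10-15",
        "changes": [
            "Clarified intake acknowledgement wording (Section 5).",
            "Added anonymization note to Transparency clause (Section 15).",
        ],
    },
]

def latest_version(log: list) -> str:
    # Entries are append-only, so the last item is the current version.
    return log[-1]["version"]

assert latest_version(CHANGELOG) == "0.2.0"
```

An adopting DAO could diff any two entries to see exactly which clauses changed between ratified versions.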

18) AI Ethics & Agent Regulation

Rationale. When AI assists moderation, it must be explainable, auditable, contestable, and human-overseen (Gabriel, 2020; Transcend, 2023; CIGI, 2021).
In the draft. Human-in-the-loop policy; model-usage disclosure; appeal path for AI-flagged decisions; audit logs.

References for Design Rationale

References

Arbitrum DAO. (2023). Arbitrum DAO code of conduct. https://forum.arbitrum.foundation/t/the-arbitrum-dao-code-of-conduct/29713

Brookings. (2022). Transparency as the first step to better digital governance. https://www.brookings.edu/articles/transparency-is-the-best-first-step-towards-better-digital-governance/

Centre for International Governance Innovation. (2021). Algorithms and the control of speech. https://www.cigionline.org/articles/algorithmic-content-moderation-brings-new-opportunities-and-risks/

Contributor Covenant. (2023). Version 3.0. https://www.contributor-covenant.org/version/3/0/code_of_conduct/

Creative Commons. (2020). Code of conduct. https://creativecommons.org/code-of-conduct/

Django Software Foundation. (2023). Conduct & enforcement manual. https://www.djangoproject.com/conduct/

Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2

Global Fund. (2021). Code of conduct for governance officials. https://www.theglobalfund.org/media/4293/core_codeofethicalconductforgovernanceofficials_policy_en.pdf

NEAR Core. (2021). Community etiquette. https://gov.near.org/t/community-etiquette/4417

NEAR Foundation. (2023). Community guidelines. https://gov.near.org/t/community-guidelines/5

NDC Transparency Commission. (2023). NDC code of conduct. https://gov.near.org/t/ndc-code-of-conduct-by-transparency-commission/36780

OECD. (2014). Recommendation on Digital Government Strategies. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0406

Optimism Collective. (2023). Code of conduct. https://gov.optimism.io/t/code-of-conduct/5751

Ranking Digital Rights. (2022). Corporate Accountability Index. https://rankingdigitalrights.org/its-the-business-model/

Santa Clara Principles. (2018). Transparency & Accountability in Content Moderation. https://santaclaraprinciples.org/

Schneider, N. (2023). Governable Spaces. https://www.ucpress.edu/book/9780520393950/governable-spaces

Scroll Foundation. (2023). Delegate & voter code of conduct. https://scroll.io/gov-docs/content/delegate-voter-code-of-conduct

Tan, J., Angeris, G., Chitra, T., & Karger, D. (2024). Constitutions of Web3. https://arxiv.org/pdf/2403.00081

Transparency International. (2021). Our principles. https://www.transparency.org/en/the-organisation/mission-vision-values

UNESCO. (2023). Guidelines for the governance of digital platforms. https://unesdoc.unesco.org/ark:/48223/pf0000387339

Uniswap DAO. (2023). RFC: Delegate code of conduct. https://gov.uniswap.org/t/rfc-delegate-code-of-conduct/20913

ZKsync Association. (2023). ZK Nation code of conduct. https://docs.zknation.io/zk-nation-community/zk-nation-code-of-conduct

7. Quality Control (QC) Report for the House of Stake Code of Conduct


The HoS Code of Conduct (CoC) is ready for community ratification with minor edits. It is well-anchored in institutional standards and governance scholarship, procedurally sound (intake → investigation → sanctions → appeal), and operationally inclusive (translations, cultural competence, accessibility). Two minor improvements remain: (1) add a few more hackathon precedents as concrete examples in Reporting/Intake, and (2) formalize a translation cadence to manage volunteer bandwidth.

Objective

This QC evaluates the HoS CoC for rigor, consistency, and ratification readiness across the dimensions below.

Review dimensions & findings

Anchoring & Evidence

  • What we checked: Each major section cites at least one institutional and one literature source; NEAR/DAO precedents are used where appropriate.

  • What we found: ✅ Consistent anchoring to UNESCO, 2023 for due process, transparency, appeal, and cultural competence; to Schneider, 2023 and Tan et al., 2024 for governance design; to Santa Clara Principles, 2018 and RDR, 2022 for transparency reporting benchmarks.

  • Gaps (minor): ⚠️ Hackathon references (Hack Humanity personal communications) appear, but more explicit examples could be added in the Reporting/Intake section to illustrate on-site escalation vs. remote reporting patterns.

Procedural Justice

  • What we checked: Presence and clarity of intake, triage, moderation standards, sanctions ladder, and appeals.

  • What we found: ✅ Acknowledgement, categorization, and urgency assessment; impartial investigations with COI recusals; a graduated sanctions ladder; and time-boxed appeals to an independent panel, consistent with Django, 2023, Contributor Covenant, 2023, and UNESCO, 2023.

  • Risk & mitigation: The 14-day resolution target may be ambitious; the draft already allows documented extensions—retain and spotlight this clause.

Inclusivity & Accessibility

  • What we checked: Multilingual access, cultural/linguistic competence, and disability accommodations.

  • What we found: ✅ Requirements align with UNESCO, 2023 and OECD, 2014; NEAR’s multilingual practice is acknowledged (NDC, 2023).

  • Risk & mitigation: Volunteer translation bandwidth could delay updates → prioritize top languages and set a quarterly translation cycle.

Transparency & Accountability

  • What we checked: Conflict-of-interest (COI) disclosures, vote rationales, transparency reporting.

  • What we found: ✅ COI rules cover delegates and committees (aligned with Scroll, 2023); annual transparency reports (cases, outcomes, timelines) align with Brookings, 2022 and content-moderation norms from Santa Clara Principles, 2018; indicator thinking can draw from RDR, 2022.

  • Note: Publish an anonymization protocol to mitigate re-identification risk.

AI Ethics & Governance

  • What we checked: Explainability, auditability, human oversight, and appeal in AI-assisted moderation.

  • What we found: ✅ Human-in-the-loop, model-usage disclosure, audit logs, and contestability align with Gabriel, 2020, Transcend, 2023, and CIGI, 2021.

  • Maturity: Staged adoption (human-only → AI-assisted) is appropriate for NEAR’s governance context.

Risks & mitigations (what could go wrong, how we prevent it)

| Risk | Description | Mitigation |
|---|---|---|
| Centralization / elite capture | Stake-weighted voting entrenches incumbents. | Rotating seats, term limits, mandatory COI disclosures, minority appeals. Sources: Schneider, 2023. |
| Process opacity | Uncertainty about how cases are handled. | Publish plain-language playbooks, annual transparency reports, and an appeals explainer. Sources: UNESCO, 2023; Santa Clara, 2018. |
| Automation bias | Over-trust in AI flags or false positives. | Human-in-the-loop, audit logs, contestability & redress. Sources: Gabriel, 2020; Transcend, 2023. |
| Cultural misalignment | Rules misfit diverse NEAR communities. | Multilingual forms, culturally diverse panels, regional exemplars. Sources: UNESCO, 2023. |
| Privacy / re-identification | Case stats could expose individuals. | Aggregation, k-anonymity thresholds, differential privacy where possible. Sources: Santa Clara, 2018; RDR, 2022. |

Overall assessment & next steps

Assessment: The CoC is academically rigorous, operationally robust, and community-ready. It integrates institutional anchors, DAO/open-source best practice, and NEAR-specific norms with a forward-looking AI governance posture.

Low-lift final edits before ratification:

  1. Reporting/Intake examples: Add 2–3 short hackathon-style scenarios (on-site escalation, safety incidents, IP disputes) drawing on internal precedents (Hack Humanity, 2025a; 2025b).

  2. Anonymization note: Insert a brief privacy footnote in the Transparency clause on aggregation & thresholds.

Ratification checklist:

  • Post RFC with change-log and window for comments

  • Snapshot-style community vote

  • Publish moderator handbook (intake → triage → investigation)

  • Release sanctions severity matrix and appeals explainer

  • Commit to annual transparency report (metrics aligned to Santa Clara/RDR concepts)

References for Quality Check Report

References

Brookings. (2022). Transparency as the first step to better digital governance. https://www.brookings.edu/articles/transparency-is-the-best-first-step-towards-better-digital-governance/

Centre for International Governance Innovation. (2021). Algorithms and the control of speech: How platform governance is failing under the weight of AI. https://www.cigionline.org/articles/algorithmic-content-moderation-brings-new-opportunities-and-risks/

Contributor Covenant. (2023). Contributor Covenant: A code of conduct for open source projects (Version 3.0). https://www.contributor-covenant.org/version/3/0/code_of_conduct/

Django Software Foundation. (2023). Django community code of conduct: Enforcement manual. https://www.djangoproject.com/conduct/

Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2

Global Fund. (2021). Code of conduct for governance officials. https://www.theglobalfund.org/media/4293/core_codeofethicalconductforgovernanceofficials_policy_en.pdf

NEAR Core. (2021). Community etiquette. NEAR Governance Forum. https://gov.near.org/t/community-etiquette/4417

NEAR Foundation. (2023). Community guidelines. NEAR Governance Forum. https://gov.near.org/t/community-guidelines/5

NDC Transparency Commission. (2023). NDC code of conduct by transparency commission. NEAR Governance Forum. https://gov.near.org/t/ndc-code-of-conduct-by-transparency-commission/36780

OECD. (2014). Recommendation of the Council on Digital Government Strategies. OECD Publishing. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0406

Optimism Collective. (2023). Code of conduct. Optimism Governance Forum. https://gov.optimism.io/t/code-of-conduct/5751

Ranking Digital Rights. (2022). RDR Corporate Accountability Index. https://rankingdigitalrights.org/its-the-business-model/

Santa Clara Principles. (2018). Santa Clara Principles on Transparency and Accountability in Content Moderation. https://santaclaraprinciples.org/

Schneider, N. (2023). Governable spaces: Democratic design for online communities. University of California Press. https://www.ucpress.edu/book/9780520393950/governable-spaces

Scroll Foundation. (2023). Delegate & voter code of conduct. https://scroll.io/gov-docs/content/delegate-voter-code-of-conduct

Tan, J., Angeris, G., Chitra, T., & Karger, D. (2024). Constitutions of Web3: A comparative study of DAO governance documents. arXiv. https://arxiv.org/pdf/2403.00081

Transcend. (2023). Key principles for ethical AI development. https://transcend.io/blog/ai-ethics

Transparency International. (2021). Our principles. https://www.transparency.org/en/the-organisation/mission-vision-values

UNESCO. (2023). Guidelines for the governance of digital platforms. https://unesdoc.unesco.org/ark:/48223/pf0000387339


Continued…

8. Methodology and Limitations


Methodology

Scoping & Framework Development. We adopted a dual-anchor framework:

  • Institutional anchors for transparency, accountability, due process, and cultural awareness (Brookings; Transparency International; Global Fund; UNESCO).

  • Scholarly / governance anchors for online communities and DAOs (Schneider’s Governable Spaces; Tan-Angeris-Chitra-Karger on web3 constitutions; OECD digital-government standards; CIGI on platform governance; AI-ethics work by Gabriel and Transcend).

Evidence Gathering. We compiled normative signals from NEAR’s community artifacts (e.g., Community Etiquette) to capture ecosystem values and actionable norms for off-chain venues. We then tied each CoC section to specific evidence and precedents in an internal “Evidence-to-Policy Mapping,” documenting how provisions map to legitimacy, transparency, proportionality, and due process.

Comparative Analysis. We reviewed DAO and community codes (Uniswap, Arbitrum, Optimism, Scroll, ZK Nation) alongside mature open-source exemplars (Contributor Covenant, Django, Creative Commons). The analysis highlighted recurring strengths (civility, transparency) and gaps (moderation standards, sanction ladders, appeals, and multilingual/cultural competence).

Synthesis & Drafting. The HoS draft integrates the framework, comparative insights, and responsible-AI guardrails (alignment, oversight, contestability). Sections map to HoS governance values—alignment, transparency, accountability, efficiency, inclusivity, sustainability, responsiveness—and are backed by citations and precedents. The draft clearly discloses that it has not yet undergone a full, open community consultation.

Co-creation & Ratification Path. We use a smallest-viable-group → broad-feedback cycle with threshold checks and expanding circles of participation. When broad agreement is reached, the policy proceeds to ratification/use and ongoing iteration.

Quality Check & Iteration. A QC pass tested anchoring (institutional + scholarly), procedural justice (investigations, proportionality, appeals), inclusivity/accessibility, and AI-ethics integration. Feedback resulted in clarifications (e.g., removing contentious examples, tightening timelines/SLAs, acknowledging NDC as a past experiment and re-orienting to HoS), and in producing support artifacts (evidence mapping, design rationale, QC report, implementation checklist).


Limitations

Limited live consultation to date. The draft predates a full community ratification process across HoS members and veNEAR stakers; perspectives may be under-captured until the planned feedback rounds complete.

Source accessibility constraints. Some academic and institutional sources were paywalled or intermittently unavailable; where needed, we relied on comparable open materials and documented the mapping.

Evolving legal and platform context. DAO regulation, data-protection standards, and moderation norms continue to change. Periodic review and transparency reporting are required to maintain legitimacy.

Cultural scope and AI context. While we address cultural/jurisdictional diversity and AI-ethics concerns, the evidence base is primarily English-language and may miss under-represented perspectives or fast-moving developments in AI governance.

Evidence provenance for event inputs. Certain hackathon/program artifacts originated as “personal communication.” These informed direction but are less auditable than published sources; they will be replaced with public documentation when available.

@HackHumanity

  • Primary Author: @HumbertoBesso
  • Feedback provided by: Hack Humanity team, House of Stake Core team, @lane, Bianca (NF)

VI) Quick Start — What You Can Do Now

  1. Read the draft Code of Conduct text (v0.1.0) above.
  2. Vote in the thread polls as they open at each sentiment-analysis stage.
  3. If you have changes to propose, comment below.
  4. Delegates: be ready to complete the on-chain confirmation (or signed statement) once the poll passes.

Some questions to consider:

  1. Are the outlined procedures and mechanisms appropriate for HoS right now?
  2. Does it provide clear enough specificity to judge whether a particular action is in violation of the CoC?
  3. Is anything you believe to be important missing? What other examples of good vs. bad behaviour should be included?

Call to action:

Help us harden the CoC.
Review a clause, comment, and cast your vote - your contribution determines whether v0.1.0 becomes v1.0 and ships as the Effective version.

Poll:

  • Am happy to ratify, no comments/changes to propose
  • Am happy to ratify, and I have comments/changes to propose
  • I have a strong objection - please describe your objections

NEAR House of Stake - Code of Conduct (CoC) Draft for Community Review

  • Next time, I suggest using a proper Web governance ChatGPT prompt instead of the North Korean-style one you actually used to generate this CoC for managing our community funds.

  • The only real substance you added here is an attempt to silence those who fairly criticize you for nepotism and for undermining the Web3 values we had already built before you “dropped by” to design governance for us.

  • This CoC, in its current form, is nothing more than a poorly generated ChatGPT text, a veiled attempt to gag dissent.

  • The day will come when people like you, who have no shame publishing such documents, will be replaced by NEAR AI agents.

  • Honestly, I think you would fit right in working at the central censorship bureau of North Korea.


This needs a detailed process for stakeholders to follow, including a dedicated point of contact. This is a community-dividing action. This is a sub-process of dispute resolution as well, I believe.


How is this enforced? All adjudication should be done at arm’s length and preferably in a decentralized fashion.


What type of actions can be considered a serious violation?


What type of actions can be considered a serious violation?

That’s a good question. We could elaborate with either some criteria defining different levels of violation, or maybe just some representative examples of different levels so we can all gauge what sorts of things might fall into different levels and be subject to different levels of enforcement.

Which of those do you think would be best? Or both?

I think we should also consider adding a firing squad for criticizing.

Who comprises this group?


Who would these participants be? NF at all?

Something like this?


Just quick reference to Section 6.2 – Grounds for Removal

According to Section 6.2 of the CoC draft, members or delegates may be removed for actions that bring the DAO’s reputation into disrepute or undermine the trust placed in them. Such actions include, but are not limited to, corruption, ethical violations, conflicts of interest, or failure to uphold the terms of responsibility.

In light of this, delegates who were selected directly by the NEAR Foundation (NF) and have traveled to international events such as Cannes at the Foundation’s expense, as well as those now scheduled to travel to Buenos Aires under the same conditions, should be subject to review and removal under Section 6.2(a) and (e).

These actions constitute a clear conflict of interest and a potential form of institutional corruption, as they create dependency and favoritism inconsistent with the principles of decentralized governance and independence expected from delegates.


Definition of HoS Corruption

Corruption is defined as the misuse of entrusted authority or community-owned resources for personal benefit, favoritism, or the advantage of a select group.

In this context, it represents institutional corruption, where certain individuals (“delegates”):

  • Are selected through non-transparent processes managed by the NEAR Foundation rather than through open community elections;

  • Receive repeated financial privileges — such as fully funded international travel and accommodations — without community consent, transparent justification, or measurable public benefit;

  • Operate without accountability or reporting, effectively undermining trust and the community’s ability to oversee the proper use of funds.

Such conduct violates Section 6.2(a) (corruption, misuse of position) and Section 6.2(e) (ethical violations, conflict of interest), and therefore justifies immediate suspension or removal of the involved delegates to restore integrity and credibility to the governance process.

To whom are the Stewards accountable for their responsibilities?

The appointment will be dissolved immediately after the HoS launch.

Also, please include the following document in the CoC: [NEAR] Dispute and Appeals Process - Google Docs

An open invitation to all: we’re gathering the House of Stake community to review and refine v0.1 of the key policy documents now in Co-Creation Cycle 1 (the HoS Constitution, Mission/Vision/Values, and Code of Conduct).

It takes place this coming Monday, October 13th at 21:00 UTC (a bit over 24 hours before the feedback window for Cycle 1 closes).

The workshop is your chance to:

  • Share your thoughts on what’s working (and what’s not)

  • Explore and understand one another’s varied perspectives

  • Collaborate on improvements that take us toward an even better v0.2 of each

Your input will help build policies we can all stand behind — rooted in collaboration, transparency, and the shared purpose of our community.

Full details and the Zoom link to join the event are in the calendar invite here:

Calendar Event (updated)

P.S. We realize the notice is quite short — if you can’t join this time, please do share your feedback via a comment here on the Forum.

We will be hosting similar workshops in each co-creation cycle and will rotate the times across time zones for maximum inclusivity across the global community.


Hi @dancunningham, the calendar link is not working — i.e. it doesn’t include the Zoom URL. Do you mind reposting it, or providing the Zoom URL directly? Thanks!

Just saw the post on the Telegram channel with the working link to the Zoom invite — thanks!
