V) Methodology: Research and documentation
We’ve conducted extensive research, reviewing CoCs from other DAOs, open-source projects, and communities. We’ve also mapped these against institutional governance standards and distilled the most effective practices for HoS.
Our work included:
- A best-practices brief that outlines key principles like clarity, inclusivity, transparency, and participation.
- A comparative matrix that evaluated CoCs from communities like Uniswap, Arbitrum, Optimism, and Django.
- An evidence-to-policy mapping that shows how each section of our CoC is based on this research.
We also looked at norms that are already working, such as the NEAR forum rules, the Uniswap “civil forum” ethos, and the clear enforcement guidelines found in the Contributor Covenant.
Methodology: How we synthesized this information
We used a dual-anchor approach to translate our research into enforceable policy.
- Institutional anchors: We referenced guidelines from organizations like UNESCO and the World Bank to ensure our CoC upholds standards of procedural justice, accountability, and transparency.
- Practice/literature anchors: We drew from sources on topics like AI governance to ensure the CoC enables accountable self-governance and can integrate with future tooling.
This methodology helped us translate abstract principles into concrete sections covering everything from definitions and scope to investigations and appeals.
The research is composed of the following elements:
- Scoping statement
- Best Practices & Risks for CoC
- Comparative matrix
- Insights from Comparative Analysis of CoCs
- Evidence‑to‑Policy Mapping Table
- Design rationale
- Quality Check Report & Implementation Checklist
- Methodology and Limitations
If you want to go through the full research backlog, expand the Methodology sections below.
1. Scoping Statement
Click to Expand: Scoping Statement
The House of Stake (HoS) is a new governance framework designed to empower NEAR token holders through a transparent, efficient, stake-weighted decision-making system. It replaces the NEAR Digital Collective (NDC), which has been retired. HoS operates entirely online, built on the NEAR Protocol, and introduces a vote-escrowed token (veNEAR) that rewards long-term commitment and alignment. Governance is conducted both on-chain—via stake-weighted proposals, screening committees and on-chain voting—and off-chain—in forums, chat channels, virtual meetings and project repositories. Key components include a pre-screening committee, a delegate system with aligned incentives, and structured funding sourced from 0.5% protocol inflation.
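To make the stake-weighting idea concrete, here is a minimal sketch of how vote-escrowed voting power is typically computed. The linear amount-times-lock-duration formula follows common ve-token designs (e.g., Curve's veCRV); the parameters and the formula itself are assumptions for illustration, not the official veNEAR specification.

```python
# Illustrative sketch of stake-weighted voting power under a vote-escrow
# model. The amount x (lock_time / max_lock_time) formula mirrors common
# ve-token designs; the 4-year cap and linear weighting are assumptions,
# not the official veNEAR spec.

MAX_LOCK_SECONDS = 4 * 365 * 24 * 3600  # assumed maximum lock of 4 years


def voting_power(locked_near: float, lock_seconds: int) -> float:
    """Voting power grows with both stake size and lock duration."""
    if locked_near <= 0 or lock_seconds <= 0:
        return 0.0
    weight = min(lock_seconds, MAX_LOCK_SECONDS) / MAX_LOCK_SECONDS
    return locked_near * weight


# A 4-year lock of 1,000 NEAR outweighs a 1-year lock of 2,000 NEAR,
# rewarding long-term commitment over raw stake size:
print(voting_power(1_000, MAX_LOCK_SECONDS))       # 1000.0
print(voting_power(2_000, MAX_LOCK_SECONDS // 4))  # 500.0
```

This captures the assumption stated below under "Underlying Assumptions": those with more stake and longer commitment have more influence.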
Stakeholders
- Token-holders (veNEAR stakers): Individuals or organisations that lock NEAR into vote-escrow to obtain voting power proportional to their commitment.
- Delegates and screening committee members: Trusted stakeholders who pre-screen proposals and vote on behalf of others, selected based on competence and alignment.
- Contributors and working groups: Developers, designers, researchers and community organisers who build projects, provide services, or propose initiatives under HoS.
- Moderators and stewards: Individuals empowered to facilitate discussions, curate content and enforce community norms across HoS communication channels.
- Wider community: Users of NEAR-based applications, partners, other DAOs and the general public interacting with HoS spaces.
Scope of the Code of Conduct
The Code of Conduct (CoC) applies to all House of Stake governance and community spaces, including but not limited to:
- On-chain governance: submission and discussion of proposals, screening committee deliberations, delegate voting, treasury allocations, and any stake-weighted decision conducted through veNEAR (House of Stake Call for Delegates, 2024).
- Off-chain channels: discussion forums, governance forums, Discord/Telegram/Slack channels, video calls, hackathons, working-group platforms, GitHub/GitLab repositories, and social-media spaces under the stewardship of HoS (NEAR Community Guidelines, 2023).
- Events and interactions: virtual meet-ups, livestreams, educational workshops and any HoS-branded or NEAR-sponsored spaces (Hack the North, n.d.).
- External representation: interactions where participants represent HoS to third parties (e.g., other DAOs, media or regulators) (UNESCO, 2023).
The CoC covers behaviour wherever HoS governance or community interactions occur, regardless of medium, and applies equally to public and private channels. Behaviour outside official spaces may be subject to the CoC if it materially affects the safety or integrity of HoS spaces (Global Fund, 2021; UNESCO, 2023).
Underlying Assumptions
- HoS values alignment between governance decisions and long-term ecosystem sustainability; those with more stake and longer commitment have more influence.
- Governance must be transparent and accountable: all proposals, votes and delegate activities are recorded on-chain and open to review.
- Decision-making should be efficient without sacrificing decentralisation: screening committees and delegate systems reduce noise while preserving open participation.
- Participation should be inclusive and responsive: channels for proposing, discussing and challenging decisions must be accessible to diverse stakeholders, with emphasis on responsiveness to feedback.
- HoS is subject to the laws of relevant jurisdictions and international human-rights standards; it must respect privacy, data-protection and anti-discrimination laws.
- The community acknowledges power imbalances (e.g., between high veNEAR stakes and newcomers) and seeks to mitigate them through fair processes, conflict-of-interest disclosures and inclusive design.
This scoping statement clarifies who the CoC covers, the spaces it governs and the assumptions underpinning HoS. It also situates the CoC within the broader vision of HoS: a sustainable, decentralised and user-owned governance system that balances economic alignment, open participation and pragmatic execution.
Further sections will translate these assumptions into concrete governance principles and enforcement mechanisms.
Research Framework Inspired by Governable Spaces and Governance Literature
Nathan Schneider’s Governable Spaces proposes designing online communities in ways that actively enable self-governance rather than replicating “implicit feudalism” (Schneider, 2023). While the book remains a conceptual foundation, the House of Stake extends these ideas by introducing stake-weighted voting and long-term alignment mechanisms. The framework also draws on international governance guidelines—particularly UNESCO’s Guidelines for the Governance of Digital Platforms (UNESCO, 2023)—as well as emerging literature on responsible AI and digital ethics (Gabriel, 2020; Centre for International Governance Innovation, 2021; Transcend, 2024).
The framework summarises key dimensions of democratic and stake-based governance relevant to DAOs and digital communities. Each dimension includes a short description and criteria that can be used to evaluate existing Codes of Conduct and to design a new one for the House of Stake. The accompanying rubric (Table 1) converts these dimensions into measurable indicators aligned with HoS values: Alignment, Transparency, Accountability, Efficiency, Inclusivity, Sustainability and Responsiveness.
Governance Dimensions
1. Legitimacy & Consent
A governable space must derive authority from its participants. Decision-making processes should be co-created, documented and subject to collective consent rather than imposed unilaterally. Modular governance tools should make it possible for members to choose the rules that govern them. UNESCO’s guidelines advise that governance processes should be “open and accessible to all stakeholders” and that checks and balances should be institutionalised (UNESCO, 2023).
Key ideas: participatory design; clear documentation of rights and duties; mechanisms for community ratification of rules; informed consent.
2. Modularity, Expressiveness, Portability & Interoperability
Governance systems should be modular, allowing communities to assemble and adapt governance processes to their needs. This flexibility encourages experimentation and diversity while maintaining coherence. The CoC should allow modules (e.g., decision mechanisms, enforcement processes) to be swapped or updated without rewriting the entire document.
Key ideas: modular governance plugins; ability to import proven processes; cross-DAO interoperability; explicit documentation of each module’s scope and authority.
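One way to make "swappable modules" concrete is a registry in which each governance module declares its scope, authority, and version, so a single module can be replaced without rewriting the whole document. The sketch below is hypothetical; all module names and fields are illustrative, not an HoS specification.

```python
# Illustrative sketch of a modular CoC registry: each module documents
# its own scope and authority, and can be swapped independently.
# All names and fields here are hypothetical, not HoS policy.
from dataclasses import dataclass


@dataclass(frozen=True)
class GovernanceModule:
    name: str
    scope: str        # which spaces or decisions the module covers
    authority: str    # who may invoke or amend the module
    version: str


registry = {
    "enforcement": GovernanceModule(
        "graduated-sanctions", "all HoS spaces", "moderation council", "1.0"),
    "appeals": GovernanceModule(
        "independent-review", "enforcement decisions", "review panel", "1.0"),
}

# Swapping the enforcement process touches one entry, not the whole CoC:
registry["enforcement"] = GovernanceModule(
    "restorative-first", "all HoS spaces", "moderation council", "2.0")
```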
3. Subsidiarity & Local Control
Decision-making should occur at the most local level possible. Participant-centred systems allow communities to handle harm and conflict contextually, rather than through global, one-size-fits-all policies. UNESCO’s guidelines support this approach by emphasising multi-stakeholder participation (UNESCO, 2023).
Key ideas: decentralised decision-making; local autonomy; context-sensitive moderation; avoidance of excessive centralisation or over-automated enforcement.
4. Representation & Inclusivity
Democratic governance must ensure that power is not monopolised by majorities or large token-holders. Mechanisms should amplify under-represented groups and reduce participation gaps. UNESCO highlights the need to empower users, promote cultural diversity and reduce participation gaps (UNESCO, 2023).
Key ideas: proportional or weighted voting mechanisms; anti-bias safeguards; outreach to under-represented communities; clear membership definitions; translation and accessibility support.
5. Accountability & Feedback
Decision-makers and enforcers are answerable to those affected by their decisions. Processes must include oversight, transparency, and regular review (Transparency International, 2021; Global Fund, 2021).
Key ideas: disclosure of enforcement actions; independent oversight; feedback loops for community evaluation of moderators and leaders; conflict-of-interest policies.
6. Transparency & Information Accessibility
Participants must understand how rules are made, how decisions are reached and what data are collected. Transparent processes build trust and support informed participation (Brookings, 2022).
Key ideas: open publication of governance documents; clear explanation of algorithms and enforcement policies; accessible archives of proposals and decisions; privacy notices explaining data use.
7. Enforcement & Proportionality
Enforcement mechanisms should deter unacceptable behaviour without reproducing authoritarian structures. Enforcement should be proportional, context-aware, and explained clearly.
Key ideas: graduated sanctions; human-in-the-loop moderation; restorative justice options; safeguards against enforcement abuse.
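A minimal sketch of what "graduated sanctions" can look like in practice: repeated or more severe violations escalate through a predefined ladder rather than jumping straight to removal. The tiers and escalation rule below are hypothetical assumptions, not HoS policy.

```python
# Illustrative graduated-sanctions ladder: escalation depends on prior
# history and incident severity, capped at the top rung. The tiers and
# thresholds are hypothetical, not HoS policy.

SANCTION_LADDER = ["reminder", "formal warning", "temporary mute",
                   "temporary suspension", "removal (subject to appeal)"]


def next_sanction(prior_violations: int, severity: int = 0) -> str:
    """Pick a rung from prior history plus severity, capped at the top."""
    rung = min(prior_violations + severity, len(SANCTION_LADDER) - 1)
    return SANCTION_LADDER[rung]


print(next_sanction(0))              # "reminder" for a first, minor incident
print(next_sanction(2, severity=1))  # "temporary suspension" after repeats
```

Keeping a human in the loop for the upper rungs, and routing every rung through the appeals process, is what distinguishes this from over-automated enforcement.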
8. Appeals & Procedural Justice
A fair system provides avenues to challenge or appeal decisions. UNESCO notes that governance processes should include checks and balances, which implies the ability to review and correct mistakes (UNESCO, 2023).
Key ideas: independent review bodies; transparent timelines; right to a hearing; clear criteria for overturning decisions; anti-retaliation protections.
9. Restorative Options & Power Imbalances
Communities should adopt restorative justice practices to repair harm and rebuild trust, while mitigating structural power imbalances.
Key ideas: restorative or transformative justice pathways; mediation services; conflict-of-interest disclosure; acknowledgement of power differentials; anti-retaliation clauses.
10. Education, Accessibility & Continuous Improvement
Governance is an evolving practice requiring ongoing education and refinement. UNESCO emphasises that platforms should equip participants with tools for informed engagement (UNESCO, 2023).
Key ideas: onboarding guides; training sessions; documentation updates; scheduled reviews; surveys; versioning and changelog.
Table 1 – Framework Rubric for Evaluating Codes of Conduct
| Dimension | Criteria/Indicators | Purpose |
|---|---|---|
| Legitimacy & Consent | Participatory drafting, community ratification, explicit consent | Ensures rules derive authority from participants rather than unilateral imposition |
| Modularity & Flexibility | Composable modules, clear interfaces, ability to modify modules | Encourages experimentation and adaptation to diverse needs |
| Subsidiarity & Local Control | Delegation to smallest unit, local charters, context-sensitive moderation | Prevents over-centralisation and supports local autonomy |
| Representation & Inclusivity | Proportional representation, accessibility support, outreach to marginalised groups | Guards against dominance by large token-holders and promotes diversity |
| Accountability & Feedback | Public reporting, independent oversight, regular review | Provides mechanisms to hold decision-makers answerable |
| Transparency & Accessibility | Publication of documents and decisions, clear data practices, accessible archives | Builds trust and supports informed participation |
| Enforcement & Proportionality | Graduated sanctions, human review of automated decisions, restorative options | Ensures fair, proportional enforcement |
| Appeals & Procedural Justice | Defined timelines, independent review body, anti-retaliation protections | Provides fair avenues to challenge and correct mistakes |
| Restorative Options & Power Imbalances | Mediation, conflict-of-interest policies, anti-abuse clauses | Encourages healing and addresses structural inequalities |
| Education & Continuous Improvement | Onboarding, training, scheduled reviews, changelog/versioning | Supports informed participation and ongoing refinement |
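Since Table 1 converts each dimension into measurable indicators, a CoC can be scored against it. The sketch below shows one way to do that: rate each indicator on a 0–2 scale and normalise per dimension. The scale and the sample scores are illustrative assumptions, not part of the rubric itself.

```python
# Minimal sketch of applying the Table 1 rubric: score each indicator
# 0 (absent), 1 (partial), or 2 (fully met), then normalise per
# dimension. The 0-2 scale and sample scores are assumptions.

rubric_scores = {
    "Legitimacy & Consent": {
        "participatory drafting": 2,
        "community ratification": 1,
        "explicit consent": 2,
    },
    "Appeals & Procedural Justice": {
        "defined timelines": 1,
        "independent review body": 0,
        "anti-retaliation protections": 2,
    },
}


def dimension_score(indicators: dict[str, int]) -> float:
    """Average indicator scores, normalised to a 0-1 range."""
    return sum(indicators.values()) / (2 * len(indicators))


for dimension, indicators in rubric_scores.items():
    print(f"{dimension}: {dimension_score(indicators):.2f}")
```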
References for Scoping Statement
Brookings. (2022). Transparency is the best first step towards better digital governance. https://www.brookings.edu/articles/transparency-is-the-best-first-step-towards-better-digital-governance/
Centre for International Governance Innovation. (2021). Algorithms and the control of speech: How platform governance is failing under the weight of AI. https://www.cigionline.org/articles/algorithmic-content-moderation-brings-new-opportunities-and-risks/
Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://link.springer.com/article/10.1007/s11023-020-09539-2
Global Fund. (2021). Code of Conduct for Governance Officials. https://www.theglobalfund.org/media/4293/core_codeofethicalconductforgovernanceofficials_policy_en.pdf
OECD. (2014). Recommendation of the Council on Digital Government Strategies. OECD Publishing. https://www.oecd.org/gov/digital-government/Recommendation-digital-government-strategies.pdf
Schneider, N. (2023). Governable spaces: Democratic design for online communities. University of California Press. https://www.ucpress.edu/book/9780520393950/governable-spaces
Tan, J., Angeris, G., Chitra, T., & Karger, D. (2024). Constitutions of Web3: A comparative study of DAO governance documents. arXiv. https://arxiv.org/pdf/2403.00081v1
Transcend. (2024). Key principles for ethical AI development. https://transcend.io/blog/ai-ethics
Transparency International. (2021). Our Principles. https://www.transparency.org/en/the-organisation/mission-vision-values
UNESCO. (2023). Guidelines for the governance of digital platforms. https://unesdoc.unesco.org/ark:/48223/pf0000387339
2. Best Practices and Risks for Codes of Conduct in Digital Communities and DAOs
Click to Expand: Best Practices and Risks
Executive Summary
An effective Code of Conduct for a decentralized governance framework, such as the House of Stake (HoS), must proactively balance core principles of human rights, transparency, and accountability with the technical and social realities of digital governance. As we have observed through a comprehensive literature review, the successful implementation of such a code requires a multi-faceted approach that not only establishes clear rules and processes but also anticipates and mitigates significant risks, particularly those related to automation, bias, and power imbalances. The following analysis presents a synthesized framework of best practices and potential pitfalls to inform the development of a robust and sustainable Code of Conduct for HoS.
Methodology
We employed a research framework derived from Governable Spaces, UNESCO’s guidelines for digital platform governance, and scholarly literature on responsible AI. The scope of our inquiry was limited to peer-reviewed articles, institutional reports, and research from recognized non-governmental organizations (NGOs), with a preference for works published within the last five years. Our review focused on identifying best practices for developing and implementing codes of conduct, as well as common risks and systemic failures. Key sources included the Santa Clara Principles on transparency in content moderation, recommendations from Ranking Digital Rights, UNESCO’s governance guidelines, and academic studies on algorithmic content moderation. The synthesis groups these insights thematically.
Best-Practice Principles
| Principle | Description | Evidence |
|---|---|---|
| Human-rights and due-process orientation | A Code of Conduct should embed international human rights principles, such as freedom of expression and non-discrimination, and ensure procedural fairness in all stages of content moderation and sanctioning. The Santa Clara Principles advocate for integrating human rights and due-process considerations into all moderation processes and for making these processes publicly transparent. Ranking Digital Rights recommends that organizations conduct human-rights due diligence and provide grievance mechanisms that respect user rights. | Santa Clara Principles; Ranking Digital Rights. |
| Clear, understandable rules and scope | Rules should be articulated in plain language and include clear examples of permissible and impermissible content. The scope of the code should be explicitly defined to prevent misuse and arbitrary enforcement. | Santa Clara Principles. |
| Cultural competence and inclusivity | Moderation and governance bodies should possess a deep understanding of the cultural and social context of the communities they serve. Policies and processes should be available in multiple languages, and moderation teams should reflect the diversity of the user base. | Santa Clara Principles. |
| Transparency and accountability | Codes should commit to transparent governance by publishing detailed statistics on content removals, sanctions, and appeals, and by providing clear explanations for all moderation decisions. The Santa Clara Principles require comprehensive reporting on enforcement actions, while Ranking Digital Rights recommends regular transparency reports. | Santa Clara Principles; Ranking Digital Rights. |
| Integrity, proportionality and explainability | Moderation systems, whether human or automated, must operate reliably and without bias. The Santa Clara Principles call for regular assessments of algorithmic systems, data sharing on accuracy, and clear explanations of automated decisions. Sanctions should be proportional to the offense. | Santa Clara Principles. |
| Participatory and amendable governance | Codes of Conduct should be developed and amended through participatory, multi-stakeholder processes. UNESCO’s guidelines emphasize the importance of institutionalized checks and balances and open access for marginalized groups. Research on DAO constitutions recommends that governance documents be digital, accessible, and amendable early in a project’s life cycle. | UNESCO; DAO Research Collective. |
| Accessible appeals and restorative options | Timely and accessible appeal mechanisms are crucial for procedural justice. The Santa Clara Principles underscore the need for understandable notices and appeals, while Ranking Digital Rights notes that effective grievance mechanisms are essential for human-rights compliance. | Santa Clara Principles; Ranking Digital Rights. |
| Education and media literacy | Digital communities should invest in onboarding and continuous education to foster shared norms and promote media and information literacy. UNESCO’s guidelines highlight these programs as a means to empower users and reduce participation gaps. | UNESCO. |
| Privacy and data protection | Codes must ensure the protection of user data during reporting and investigation. Ranking Digital Rights advocates for strong privacy governance, including data minimization, encryption, and transparency about data use. | Ranking Digital Rights. |
| Responsible AI & algorithmic transparency | If AI systems are employed for moderation, the code should commit to transparency, explainability, fairness, and non-discrimination. Ethical AI guidelines emphasize that individuals affected by AI-driven decisions should understand the rationale and that human oversight is essential to mitigate bias. | Transcend. |
| Modularity and subsidiarity | Governance should be modular and respect the principle of subsidiarity, with decisions made at the most local and appropriate level. Governable Spaces argues that modular, context-sensitive governance allows communities to adapt and that subsidiarity delegates authority to those most affected by the decisions. | Schneider; UNESCO. |
Risks and Pitfalls
| Risk | Description | Evidence |
|---|---|---|
| Vague or overbroad rules enabling abuse | Vaguely defined rules can enable arbitrary enforcement and be weaponized by powerful actors within a community. The Santa Clara Principles warn that opaque policies undermine trust and due process. | Santa Clara Principles. |
| Inconsistency and arbitrariness in enforcement | Algorithmic content moderation can lead to inconsistent and arbitrary outcomes for identical content. Research shows that machine learning models can produce conflicting decisions based on random training parameters, thereby undermining procedural justice. | Gómez et al., 2024. |
| Bias and discrimination | Both algorithmic and human moderation can disproportionately affect marginalized groups. Studies have demonstrated disparate impacts across demographics, and the Centre for International Governance Innovation notes that automated moderation, which often relies on keyword filtering, can inadvertently censor marginalized communities while failing to detect more subtle harms. | Gómez et al., 2024; CIGI. |
| Opacity and lack of accountability | Many platforms lack transparency regarding moderation decisions, hindering accountability. Predictive multiplicity in AI models makes it challenging to determine which algorithm produced a decision, complicating appeals. The outsourcing of automated moderation can further obscure accountability. | Gómez et al., 2024; CIGI. |
| Over-reliance on automation | Over-dependence on AI tools leads to rapid but error-prone moderation. Mandated removal deadlines often compel platforms to automate, increasing the risk of unjustified takedowns. The Centre for International Governance Innovation warns that expanded automated moderation could suppress political dissent. | Gómez et al., 2024; CIGI. |
| Weaponization and retaliatory reporting | Without adequate safeguards, reporting systems can be abused to harass opponents or silence dissenting voices. The Santa Clara Principles and UNESCO guidelines caution that broad enforcement powers and insufficient oversight enable abusive or discriminatory enforcement. | UNESCO; Santa Clara Principles. |
| Lack of cultural competence | Moderators unfamiliar with the languages and cultural contexts of their communities may misinterpret speech, leading to unjust censorship. Automated tools often struggle with low-resource languages, disproportionately flagging content from these communities. | CIGI. |
| Chilling effects and self-censorship | Overly restrictive codes and severe penalties may discourage members from engaging in healthy dissent or expressing controversial opinions. This risk is amplified when appeals processes are inaccessible. While not a primary focus of the reviewed literature, this remains a recurring concern in digital governance discourse. | Inferred from general governance literature and UNESCO’s emphasis on human rights. |
| Absence of appeals and restorative mechanisms | The lack of accessible appeals and restorative options can lead to unjust penalties and perpetuate exclusion. The Santa Clara Principles emphasize the necessity of meaningful appeal processes to uphold procedural justice. | Santa Clara Principles. |
| Power imbalances and centralization | Centralized enforcement structures without sufficient checks and balances can concentrate power in the hands of a few moderators or leaders. UNESCO’s guidelines warn that governance frameworks require institutionalized checks and diverse expertise to prevent abuse of power. | UNESCO. |
| Data privacy risks | Reporting systems frequently collect sensitive information. Without robust data protection measures, such as encryption and data minimization, the system may expose the personal information of users, thereby putting them at risk. Ranking Digital Rights advocates for strong privacy governance and user control over data. | Ranking Digital Rights. |
| Rigid, non-modular policies | Static or overly prescriptive codes are unable to adapt to evolving community norms. Governable Spaces argues that modular, flexible governance better accommodates changing needs and fosters legitimacy, whereas inflexible codes risk becoming obsolete and losing community support. | Schneider; DAO Research Collective. |
Conclusion
The findings from this review underscore that effective Codes of Conduct for DAOs and digital communities must be designed with human rights, clarity, inclusivity, transparency, and ethical AI at their core. Best practices, as highlighted by the Santa Clara Principles and Ranking Digital Rights, prioritize human-rights due diligence, clear rules, cultural competence, and accessible appeals. Foundational governance principles from UNESCO and Governable Spaces stress multi-stakeholder participation, subsidiarity, and modularity. Emerging ethical-AI guidelines from Gabriel (2020) and Transcend (2024) highlight the need for transparency and explainability in automated systems, while academic research on algorithmic content moderation, such as that by Gómez et al. (2024) and the Centre for International Governance Innovation (2021), warns of the inherent risks of bias, arbitrary enforcement, and opacity. These insights should inform the House of Stake's Code of Conduct, ensuring it is not only aligned with HoS principles but also transparent, accountable, efficient, inclusive, and responsive.