House of Stake - Mission, Vision & Values (MVV) v0.1.4
VISION:
Decentralised governance for the user-owned Internet and humanity-enhancing AI.
MISSION:
To establish an evolving governance system
that is incorruptible, uncapturable and sovereign by default,
co-created, co-governed and co-operated
by an AI-augmented NEAR stakeholder community.
VALUES:
Credible Neutrality
Experimentation with Safety
Builder and Business Centric
Autonomy with Accountability
Adaptive Governance
Inclusive & Meaningful Participation
Transparency with Dignity
AI-Augmented, Human-Governed
Public Goods as Growth Engines
Cultural Stickiness
Principles and behavioural tests behind the values:
1. Credible Neutrality
Principle: Governance must be built by, with and for the community, augmented by community-aligned AI that enhances transparency, intelligence and fairness, ensuring freedom from control and capture by individuals, institutions, or closed groups.
Behavioural Test: Does this action avoid risks of concentrating power, e.g. protecting against a few top stakeholders gaining overbearing control over the rest of the community?
2. Experimentation with Safety
Principle: Governance models, funding mechanisms, and AI agents and tools are tested, via rapid prototyping and iteration, in lower-stakes environments before being merged into the main system.
Behavioural Test: What’s the worst that could happen if an experiment we are trying fails? Can it fail without endangering the overall ecosystem’s health and integrity?
3. Builder and Business Centric
Principle: Governance must create the conditions for both individual developers and institutions to thrive — from the developer experience to enterprise-scale adoption. This includes funding the infrastructure, tools, and programs that make NEAR the most attractive platform for adoption that scales.
Behavioural Test: Does this decision improve NEAR as a place where developers, entrepreneurs, and enterprises can build great products and lasting businesses?
4. Autonomy with Accountability
Principle: Workstreams and contributors have freedom to innovate, balanced with clear success gates and measurable outcomes. Community-governed mechanisms should be in place for setting and regularly reviewing these objectives, in a fair and transparent way, keeping human and AI activity oriented towards our mission.
Behavioural Test: Does this program have both the freedom to act and clear metrics to evaluate success?
5. Adaptive Governance
Principle: Governance should evolve iteratively, guided by feedback loops and data-driven continuous learning systems that sense and respond to changing ecosystem needs and emerging opportunities.
Behavioural Test: Is there a mechanism to review and adapt this process if it no longer serves the ecosystem?
6. Inclusive & Meaningful Participation
Principle: All stakeholders, large and small, must have meaningful ways to engage in governance. Decision-making influence may be proportional to stake, but our governance system must also provide opportunities for all community members to contribute, keeping people engaged and invested.
Behavioural Test: Are we creating real roles for all interested community members to contribute value, even if they don’t have significant stake-weighted voting power?
7. Transparency with Dignity
Principle: Decisions, funding, and performance are open and legible, while respecting privacy and personal boundaries.
Behavioural Test: Can this be shared with the community to enhance collective intelligence, without compromising anyone’s right to privacy?
8. AI-Augmented, Human-Governed
Principle: We embrace AI as a tool for fair, representative, efficient, and adaptive governance at scale. AI agents can be core participants in our governance processes. We build such agents in a decentralised, open-source and permissionless way, requiring that they operate transparently and in adherence with all of our values, so they can act as neutral, community-aligned governance participants.
Behavioural Test: Does this use of AI improve fairness, participation, efficiency or collective intelligence, while reinforcing our values and providing sufficient transparency and oversight for humans in the loop?
9. Public Goods as Growth Engines
Principle: We invest in shared infrastructure, tools, and governance systems, building out a data-driven governance layer for use by both humans and AI, as a powerful enabler of compounding network effects.
Behavioural Test: Will this investment increase the resilience, long-term potential and growth of the ecosystem beyond one project or cycle?
10. Cultural Stickiness
Principle: The DAO cultivates rituals, norms, and shared ownership that build loyalty across diverse participants.
Behavioural Test: Does this initiative make contributors more likely to identify with NEAR and remain engaged long-term?