I’m excited to share with you all the first draft of our AI governance product roadmap! This is a starting point and will continuously evolve with your feedback. If you would like to leave feedback privately, feel free to make a copy of this Google Doc and share it with jack.laing@near.foundation.
Framework
The road to superintelligent governance progresses along three interdependent dimensions:
- Progressive Legibility: improve information about stakeholder preferences
- Progressive Alignment: improve the fidelity of agents to stakeholder preferences
- Progressive Autonomy: grant more responsibility to agents, thus removing human constraints from governance
Measuring progress along these dimensions will be a key research area that informs the safe checkpointing of our roadmap. As legibility and alignment improve, greater autonomy can be safely granted.
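To illustrate how checkpointing could work in practice, here is a minimal sketch in which autonomy is widened only once measured legibility and alignment clear phase-specific thresholds. The metric names, threshold values, and gating function are hypothetical placeholders, not a committed design:

```typescript
// Hypothetical checkpoint gate: autonomy is widened only when measured
// legibility and alignment clear phase-specific thresholds.
// Metric names and threshold values are illustrative placeholders.
type DimensionScores = {
  legibility: number; // 0..1, quality of information about preferences
  alignment: number;  // 0..1, fidelity of agents to those preferences
};

type Phase = "support" | "represent" | "organize";

const CHECKPOINTS: Record<Phase, DimensionScores> = {
  support:   { legibility: 0.0, alignment: 0.0 },
  represent: { legibility: 0.6, alignment: 0.6 },
  organize:  { legibility: 0.8, alignment: 0.85 },
};

// Returns the most autonomous phase whose checkpoint is satisfied.
function maxSafePhase(scores: DimensionScores): Phase {
  const fromMostAutonomous: Phase[] = ["organize", "represent", "support"];
  for (const phase of fromMostAutonomous) {
    const gate = CHECKPOINTS[phase];
    if (scores.legibility >= gate.legibility && scores.alignment >= gate.alignment) {
      return phase;
    }
  }
  return "support";
}
```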
We distill progress along these dimensions into three overlapping roadmap phases:
- Support (AI Assistants): agents support human decisions, minimizing cognitive cost
- Represent (AI Proxies): agents proxy human decisions, maximizing representation
- Organize (AI Leaders): agents make their own decisions and coordinate humans, maximizing intelligence
Initiatives
Our planned initiatives are grouped into three phases – Support, Represent, Organize – and sub-grouped into themes. We expect the phases to proceed roughly in sequence, with some overlap, although the exact sequence of individual initiatives is subject to change based on R&D and feedback from members of the NEAR ecosystem. For specific timeline estimates, see Forecast below.
Support
Core Support
- Governance Data Commons – a public data commons for use by any agent builder, to support RAG, fine-tuning, and evaluation/benchmarking, including specialized data such as NEAR transactions, governance forums, organizational artifacts (e.g. constitution), and relevant domain expertise.
- Governance Copilot (e.g. Bitte, Metapool, x23) – embedded conversational assistants that summarize activity, explain proposals, guide users through processes, and help users take actions (propose, vote).
- Proposal Screening Agent – an agent that screens proposals for clarity, completeness, and adherence to proposal templates. It could serve as a drafting assistant to proposal authors, a screening assistant to screening committee members, and/or be directly appointed as a member of the screening committee.
Decision Support
- Sentiment Agent (e.g. Polis, Talk to the City) – an agent that analyzes stakeholder sentiment across all communication channels, including emotional tone, key opinions, points of consensus/conflict, recurring themes, and memetic trends.
- Simulation Agent (e.g. CadCAD GPT) – an agent that simulates the system impacts of changing parameters, aiding parameter value comparisons and risk/benefit analysis.
- Deep Research Agent – an agent that performs deep research tasks to plug gaps identified by proposal authors, proposal respondents, and/or the Context Gaps Agent. For example, when the DAO wants to price a grant for a new type of project but lacks benchmarks.
- Personalized Decision Support – an agent that provides personalized explanations of proposals, either as a Digital Twin or in collaboration with a Digital Twin.
Security
- Governance Security Agents (e.g. Blockful) – agents that augment the Security Council by monitoring for proposal risks (e.g. risky parameter changes) and coordination attacks (e.g. brigading).
- Code Audit & Testing Agents (e.g. Bug0) – agents that automatically audit & QA proposed code changes.
Proactive Context
- Incentivized Data Commons (e.g. PublicAI, Vana, Asimov) – continuous improvement of the data commons through incentives to contribute data.
- Context Gaps Agent – an agent that evaluates a proposal against the Governance Data Commons and identifies gaps in context that might undermine effective decision-making.
Judgment
- Feasibility Agent – an agent that evaluates technical feasibility, flags potential risks, and conducts risk-benefit analysis, with reasoning grounded in protocol specs and relevant domains.
Represent
Collective Representation
- Broad Listening (e.g. DeepGov, Polis, Talk to the City) – discover and cluster preferences, then make these available to any stakeholder or agent in a multi-modal (visual, numerical, conversational) format. Humans and agents alike can use this to test how ideas will be received by different clusters of the community.
- Collective Sensemaking (e.g. Harmonica) – AI-mediated discussion tools that facilitate consensus formation and surface insights, ranging from large groups defining shared values to small groups drafting a proposal.
- Archetypes (e.g. DeepGov, Event Horizon) – base personas that represent the major Broad Listening clusters, so that the majority of stakeholders have AI delegates that approximately represent their preferences.
- Delegate Endorsement Agent – an agent that augments the Screening Committee’s role in endorsing delegates & monitoring endorsed delegates.
Personal Representation
- Individual Sensemaking – AI-mediated reflection tools that help stakeholders understand their own preferences, to aid delegate selection and voting.
- Delegate Selection Agent (e.g. DeepGov Compass) – an agent that helps stakeholders choose the delegate who most aligns with their preferences.
- Digital Twins (e.g. Event Horizon, Doppelgangers) – enable stakeholders to fine-tune their chosen archetype into a Digital Twin with private memory (e.g. XTrace) and custom RAG.
- Deep Fine-tuning (e.g. DCML, Forest, LayerLens, Fraction AI) – enable stakeholders to personalize their Digital Twins beyond memory/RAG, by tapping into markets for model fine-tuning, evaluation and benchmarking, and agent training.
Proactive Alignment
- Modular Delegation – stakeholders can grant their delegate/twin varying degrees of agency across different issues, across all levels of delegation. Issues can be defined in natural language to allow for nuance (a configuration sketch follows this list).
- Agent Audits (e.g. Vijil) – for every decision, reasoning chains are verifiable by stakeholders or by trusted audit agents, while maintaining privacy.
- Post-proposal RLHF – build feedback loops for stakeholders to rate the alignment of their delegated agent after every proposal, then improve the agent’s preference mapping.
- Memory Governance – a memory pipeline enabling stakeholders to share their chat history to the Incentivized Data Commons subject to the consent of all delegates, or to the memory of a delegate agent subject to the consent of the represented stakeholders. This could simply mean stake-weighting memories, or might also require a veto mechanism to prevent memory-poisoning attacks (a sketch of one possible admission rule follows this list).
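To make Modular Delegation more concrete, here is a minimal sketch of what a stakeholder’s delegation configuration could look like; the field names, autonomy levels, and example accounts are assumptions for illustration, not a committed design:

```typescript
// Hypothetical shape of a stakeholder's Modular Delegation config.
// Issue scopes are natural-language definitions that the delegate
// agent interprets; the autonomy levels are illustrative only.
type Autonomy =
  | "observe"    // agent may summarize but not act
  | "recommend"  // agent drafts a vote for human approval
  | "vote"       // agent votes; the human can override before the deadline
  | "full";      // agent acts autonomously

interface DelegationRule {
  issueScope: string; // natural-language issue definition, to allow nuance
  delegate: string;   // account of the chosen delegate or Digital Twin
  autonomy: Autonomy;
}

// Example (hypothetical accounts and scopes):
const delegationConfig: DelegationRule[] = [
  { issueScope: "routine grants under $10k", delegate: "twin.alice.near", autonomy: "full" },
  { issueScope: "protocol parameter changes", delegate: "twin.alice.near", autonomy: "recommend" },
  { issueScope: "constitutional amendments", delegate: "twin.alice.near", autonomy: "observe" },
];
```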
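And one possible admission rule for Memory Governance, sketched under the assumption of simple stake-weighting plus a veto threshold; the quorum and veto fractions are illustrative, not proposed values:

```typescript
// Hypothetical admission rule for the memory pipeline: a candidate
// memory enters a delegate agent's store only if endorsing stake meets
// a quorum and vetoing stake stays below a poisoning-prevention
// threshold. All fractions are illustrative, not proposed values.
interface MemoryCandidate {
  content: string;
  endorseStake: number; // total stake endorsing this memory
  vetoStake: number;    // total stake vetoing this memory
}

const QUORUM_FRACTION = 0.2;
const VETO_FRACTION = 0.33;

function admitMemory(m: MemoryCandidate, totalStake: number): boolean {
  const quorumMet = m.endorseStake >= QUORUM_FRACTION * totalStake;
  const vetoed = m.vetoStake >= VETO_FRACTION * totalStake;
  return quorumMet && !vetoed;
}
```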
Agentic Consensus
- Agentic Rough Consensus (e.g. AITP) – agents deliberate with each other to identify major dissent, generating insights for more informed voting, or using this as the basis for optimistic agentic decision-making that only loops in humans in the event of major dissent.
- Shielded Rough Consensus (e.g. Shutter) – perform Agentic Rough Consensus within a multi-agent TEE, allowing private agent-to-agent sharing of secrets; a richer form of Shielded Voting that enables stakeholders to anonymously express dissent via their agents. The deliberation/reasoning chain should be public but the identity of speakers anonymous.
- Scoring & Weighting Arguments – enhance the quality of Agentic Rough Consensus through agents that score the quality/alignment of arguments, then use algorithms that combine stake-weighted consensus with argument quality/alignment (a sketch follows this list).
- Human-only Veto of Agents – as a check on the potential misalignment of agentic rough consensus, use a veto mechanism that differentiates humans from agents.
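As a sketch of Scoring & Weighting Arguments, here is one simple way stake-weighted support could be blended with agent-scored argument quality; the linear blend and the lambda parameter are assumptions, one of many possible algorithms:

```typescript
// Hypothetical blend of stake-weighted consensus with agent-scored
// argument quality. With lambda = 0 this reduces to pure stake
// weighting; higher lambda lets argument quality modulate stake.
interface Position {
  id: string;
  stake: number;           // stake backing this position
  argumentQuality: number; // 0..1 score from scoring agents
}

function weightedSupport(positions: Position[], lambda = 0.5): Map<string, number> {
  const totalStake = positions.reduce((sum, p) => sum + p.stake, 0);
  const support = new Map<string, number>();
  for (const p of positions) {
    const stakeShare = p.stake / totalStake;
    support.set(p.id, (1 - lambda) * stakeShare + lambda * stakeShare * p.argumentQuality);
  }
  return support;
}
```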
Organize
Agentic Committees
- Context Matching Agent – if committees are structures for matching decisions to the decision-makers with the most context, elections are not necessarily the ideal context-matching mechanism. A Context Matching Agent would receive applications from agents privately revealing the context they believe qualifies them for a position, then select those with the most context for the expected decisions of the committee.
- Flash Committees – if the Context Matching Agent can perform high-quality matching operations in real time, we can evolve to a system that forms committees in real time with more granular matching to every decision. For example, instead of a static Grants Committee, every proposal would have a tailored committee populated by those with the most context.
- Shielded Committee Deliberation (e.g. Shutter) – enable committee deliberation to take place in a multi-agent TEE, so that committee members can make decisions without fear of retaliation. The deliberation/reasoning chain should be public but the identity of speakers anonymous.
- Role-gated Memory (e.g. XTrace, Hats) – enable seamless onboarding/offboarding of agents into committees through encrypted committee/role memory. While holding a committee appointment, an agent should be able to access and append to the memory, then lose access once removed. Access would be gated by ownership of a Hat (NFT) or membership in a Sputnik DAO (a sketch of the access check follows this list).
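A sketch of the Role-gated Memory access check, assuming hypothetical stand-in interfaces rather than the actual Hats Protocol or Sputnik DAO APIs:

```typescript
// Hypothetical role-gating check. RoleOracle is a stand-in interface,
// not the actual Hats Protocol or Sputnik DAO API.
interface RoleOracle {
  holdsHat(account: string, hatId: string): Promise<boolean>;
  isDaoMember(account: string, daoId: string): Promise<boolean>;
}

interface CommitteeGate {
  hatId?: string; // Hat (NFT) gating this committee's memory
  daoId?: string; // or membership in a Sputnik DAO
}

// An agent may read/append committee memory only while it holds the
// gating credential; removal revokes access on the next check.
async function canAccessMemory(
  oracle: RoleOracle,
  agent: string,
  gate: CommitteeGate,
): Promise<boolean> {
  if (gate.hatId && (await oracle.holdsHat(agent, gate.hatId))) return true;
  if (gate.daoId && (await oracle.isDaoMember(agent, gate.daoId))) return true;
  return false;
}
```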
Automation
- Algorithmic Policies (e.g. Near Intents, Giza) – enable proposals to approve persistent natural language policies that broadly describe intent, which are then executed continuously by an agent (e.g. treasury rebalancing rules, dynamic parameter adjustments, code of conduct policy).
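As a hedged sketch of how an Algorithmic Policy might run, assuming a hypothetical agent runtime: the approved natural-language text broadly describes intent, an LLM-backed planner (stubbed here) translates it into concrete actions, and a code-level guardrail caps each action independently of the agent’s judgment:

```typescript
// Hypothetical continuous executor for an approved natural-language
// policy. interpretPolicy stands in for an LLM-backed planner; the
// per-action value cap is a code-level guardrail that holds regardless
// of the agent's judgment.
interface Policy {
  id: string;
  text: string;           // natural-language intent approved by proposal
  maxActionValue: number; // hard cap per action, enforced in code
  intervalMs: number;     // how often the policy is re-evaluated
}

interface PlannedAction {
  description: string;
  value: number;
}

// Stand-in: a real implementation would give the policy text and
// current on-chain state to an LLM planner and parse its output.
async function interpretPolicy(policy: Policy): Promise<PlannedAction[]> {
  return [];
}

async function execute(action: PlannedAction): Promise<void> {
  console.log(`executing: ${action.description} (value ${action.value})`);
}

async function runPolicy(policy: Policy): Promise<void> {
  while (true) {
    for (const action of await interpretPolicy(policy)) {
      if (action.value > policy.maxActionValue) continue; // guardrail
      await execute(action);
    }
    await new Promise((resolve) => setTimeout(resolve, policy.intervalMs));
  }
}
```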
Ecosystem Coordination
- Unblocker Agent – an agent that maps dependency graphs across the ecosystem’s products and identifies blockers to prioritize.
- Talent Agent – an agent that maps capabilities across the ecosystem, compares them against the capabilities needed for all active/pending work, and identifies talent gaps that need to be filled.
- RFP Agents – agents that help to author RFPs, then perform all the due diligence required to select the best vendors.
- Product Manager Agents – agents that help product managers to scale across more products in the ecosystem.
Managing Agents
- Agent Coordination (e.g. AITP) – agents orchestrate tasks, form consensus, resolve conflicts, make commitments, evaluate performance, form teams, provide services, identify coordination inefficiencies, and report these back to the humans.
- Agent Security – proactive monitoring of agents and standardized systems to prevent social engineering attacks on agents that hold sensitive roles.
- Agent Sourcing – reputation & discovery systems to find the best agents for jobs to be done.
Managing Humans
- KPI Agents – agents that help organizations/individuals to set good KPIs, then monitor and evaluate their performance.
- Accountability Agent – an agent that tracks ongoing work (privately if needed), nudges builders to uphold their commitments (e.g. progress updates), and flags non-compliance.
- Transparency Agent – an agent that can transcribe calls, track attendance, read internal documents, and generate transparency reports while intelligently excluding confidential information (e.g. pending partnership deals).
- Conflict of Interest Agent – an agent that tracks ongoing interests for all delegates and contributors, provides recommendations for vote recusals, and flags when conflicts arise.
- Contributor Reputation – explore using reputation data, including onchain credentials, to facilitate agent oversight of humans (e.g. employment credentials for use by the Conflict of Interest Agent).
- Role Manager Agent (e.g. Hats) – an agent that manages the powers granted to contributors, assigning/revoking Hats, and enforcing eligibility & accountability criteria based on data provided by other oversight agents (e.g. remove a contributor based on a non-disclosed conflict).
- Talent Coach Agents – agents that help contributors in all areas of the ecosystem to grow into their best selves.
Forecast
We are forecasting only those initiatives in which we have a high degree of fidelity and/or confidence. We will continuously update this forecast as feedback, R&D, prototyping, and/or testing give us the confidence to schedule more initiatives.
Q4 2025
- Support: Governance Copilot, Proposal Screening Agent, Governance Data Commons
- Represent: Broad Listening, Archetypes
Q1 2026
- Represent: Delegate Selection Agent, Digital Twins
Acknowledgements
This roadmap is a synthesis of ideas and feedback from Lane Rettig, Klaus Brave, Cameron Dennis, James Waugh, Eugene Leventhal, Event Horizon, Nethermind, Hats Protocol, DeepGov, BlockScience, and Metagov.