Alan here!
The following text is a summary of different ideas we have explored at Meta Pool on Web3 Governance × AI. It builds on several internal discussions and the development of a prototype for House of Stake (HoS) that uses AI to analyze proposals.
Our goal has been to understand how AI can enhance decision-making and open new research avenues for decentralized governance models boosted by AI.
TL;DR
- AI offers transformative tools for Web3 governance but brings risks of bias, over-reliance, and unclear accountability.
- Meta Pool and NEAR’s House of Stake are pioneering AI-driven governance experiments.
- Two main paths emerge: AI co-pilots that assist humans, and autonomous agents that act independently.
- The rise of AGI in governance raises urgent questions on alignment, oversight, and legitimacy.
Web3 governance is an evolving domain that seeks to distribute value and decision-making power across contributors in novel ways. Artificial Intelligence (AI) has the potential to significantly impact governance, not only in Web3 but across society at large. Early experiments show that AI can help address some chronic issues in decentralized governance, like low voter turnout and coordination fatigue, by providing tools for faster decision-making and broader participation.
For example, since early 2024 many Decentralized Autonomous Organizations (DAOs) have started using AI assistants to summarize lengthy forum debates and even manage certain on-chain operations.
This indicates AI could play a transformative role in how decentralized communities govern themselves.
In the NEAR Protocol ecosystem, a new governance model called House of Stake (HoS) was proposed by the Gauntlet team in 2024. HoS is envisioned as a more robust, community-driven governance framework for NEAR, supported by the NEAR Foundation and meant to eventually operate independently.
Notably, HoS plans to leverage AI tools as part of its governance suite – an aspect we will explore in this post.
Meta Pool DAO, which has over four years of experience in creating distributed and decentralized governance on NEAR, provides a valuable case study in this context.
We will discuss two key approaches to integrating AI in Web3 governance, reflecting where our own work stands today: AI co-pilots vs. AI autonomous agents. We will then consider the near-future potential of AGI (Artificial General Intelligence) in this field.
AI Governance Co-Pilot
AI governance co-pilots are AI systems designed to assist human participants in governance. Rather than making decisions on their own, co-pilots augment human decision-making by providing distilled information, analysis, or recommendations. In practice, this could mean AI summarizing proposal discussions, highlighting key arguments for and against a proposal, or even suggesting questions that voters should consider.
The Aave protocol’s community has been using an AI assistant (x23) that can summarize complex governance forum threads and flag important points. This helps token holders grasp the essentials of debates without reading through dozens of posts, potentially increasing informed participation.
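To make the co-pilot pattern concrete, here is a minimal sketch of a thread summarizer in TypeScript, assuming the OpenAI Node SDK. The prompt, model choice, and output shape are illustrative only, not the actual x23 or Meta Pool implementation.

```ts
// Minimal governance co-pilot sketch: condense a forum thread into a
// structured brief that a voter can skim before reading the source posts.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface ProposalBrief {
  summary: string;
  argumentsFor: string[];
  argumentsAgainst: string[];
  questionsForVoters: string[];
}

export async function summarizeThread(posts: string[]): Promise<ProposalBrief> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      {
        role: "system",
        content:
          "You are a governance co-pilot. Summarize the discussion, list the " +
          "key arguments for and against, and suggest questions voters should " +
          "consider. Respond with JSON using the keys: summary, argumentsFor, " +
          "argumentsAgainst, questionsForVoters.",
      },
      { role: "user", content: posts.join("\n---\n") },
    ],
    response_format: { type: "json_object" },
  });
  return JSON.parse(response.choices[0].message.content ?? "{}");
}
```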
Meta Pool’s recent Minimum Viable Product (MVP) governance agent falls into this co-pilot category. It was developed as a tool to “enhance human knowledge” during the governance process by providing a concise overview of proposals and their implications. When Lane Rettig presented this at EthCC, it demonstrated how an AI agent could serve as a delegate’s assistant, essentially a governance co-pilot, within the NEAR House of Stake framework.
The agent reads on-chain data from the governance smart contracts and uses a Large Language Model (LLM) to offer advice and guidance on proposals, including a recommendation on how to vote.
The MVP is currently under development and awaits integration into the final HoS platform.
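As a rough illustration of the on-chain half of such an agent, here is a sketch using near-api-js. The contract account and view method names are hypothetical stand-ins, since the HoS contracts are not yet finalized.

```ts
// Fetch a proposal from a governance contract via NEAR RPC, so the decoded
// data can then be serialized into an LLM prompt (e.g. a summarizeThread-style
// call that produces a vote recommendation).
import { providers } from "near-api-js";

const provider = new providers.JsonRpcProvider({
  url: "https://rpc.mainnet.near.org",
});

export async function fetchProposal(id: number): Promise<unknown> {
  const res = (await provider.query({
    request_type: "call_function",
    account_id: "hos-governance.near", // hypothetical contract account
    method_name: "get_proposal",       // hypothetical view method
    args_base64: Buffer.from(JSON.stringify({ id })).toString("base64"),
    finality: "final",
  })) as any;
  // call_function results come back as raw bytes; decode the JSON body.
  return JSON.parse(Buffer.from(res.result).toString());
}
```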
While AI co-pilots show promise, one must address their limitations and the behavioral responses they induce. A known concern is that users might over-rely on AI summaries and stop doing their own critical analysis.
Studies of human-AI interaction have noted that if given an “easy button,” people can become cognitively lazy, trusting AI outputs without verification. In governance, this could mean voters blindly follow a co-pilot’s recommendation without understanding the proposal, potentially dangerous if the AI is biased or flawed.
Co-pilots of this nature will require guardrails or a standardized audit process to ensure impartiality and community benefit. Additionally, they should facilitate verification, pointing users to third-party information or supporting the user’s own analysis and knowledge.
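What might such a guardrail look like in practice? One simple option, sketched below (and not part of the actual MVP), is to reject any co-pilot brief whose claims do not cite a post that really exists in the thread, so users can always trace a statement back to its source.

```ts
interface CitedClaim {
  claim: string;
  sourcePostId: number; // the forum post this claim is drawn from
}

// Returns the claims that cannot be traced back to a real post in the
// thread; a non-empty result means the brief should be rejected or flagged.
export function findOrphanClaims(
  claims: CitedClaim[],
  threadPostIds: Set<number>
): CitedClaim[] {
  return claims.filter((c) => !threadPostIds.has(c.sourcePostId));
}
```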
AI as Autonomous Agents
Moving beyond assistance, one can ask: What if we give an AI agent actual decision-making power in governance? An AI autonomous agent in this context would mean an AI that can vote on proposals or even propose and implement actions, following a set of goals defined by the community.
This concept essentially treats the AI as a delegate or even a DAO member in its own right. It’s a provocative idea and some early experiments hint at what this could look like. For example, Aragon’s research has discussed scenarios where token holders might delegate their voting power to AI agents that vote on their behalf according to predefined rules or strategies.
The idea is that busy humans could rely on always-available, data-driven AIs to keep governance processes moving (ensuring quorum, timely voting, etc.), which might make DAOs more efficient.
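To illustrate the mechanics of delegated AI voting, here is a hedged sketch of an agent’s voting step, assuming near-api-js and a Sputnik-style DAO contract with an `act_proposal` method. The account IDs and the toy policy are placeholders, not a recommended strategy.

```ts
import { connect, keyStores } from "near-api-js";

type Vote = "VoteApprove" | "VoteReject";

// Placeholder policy: a real strategy would be defined (and audited) by the
// delegating token holders, not hard-coded like this.
function decide(proposal: { amount: bigint }): Vote {
  return proposal.amount < 1_000n * 10n ** 24n // under 1,000 NEAR (in yocto)
    ? "VoteApprove"
    : "VoteReject";
}

export async function castVote(
  proposalId: number,
  proposal: { amount: bigint }
): Promise<void> {
  const near = await connect({
    networkId: "mainnet",
    nodeUrl: "https://rpc.mainnet.near.org",
    keyStore: new keyStores.InMemoryKeyStore(), // the agent's key goes here
  });
  const account = await near.account("agent.example.near"); // hypothetical agent account
  await account.functionCall({
    contractId: "dao.sputnik-dao.near", // hypothetical DAO contract
    methodName: "act_proposal",         // Sputnik-style voting method
    args: { id: proposalId, action: decide(proposal) },
    gas: 100_000_000_000_000n,          // 100 TGas (bigint gas in recent near-api-js)
  });
}
```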
However, entrusting governance to autonomous AI raises many challenges and open questions. First and foremost is the issue of trust and accountability: How do we trust that an AI will make decisions in the best interest of the community? If the AI’s decisions lead to bad outcomes, who is responsible?
In human governance, delegates can be voted out or held accountable for poor performance; with an AI agent, the lines of accountability blur. One approach to building trust is to maintain human oversight – e.g., the community could set narrow parameters within which the AI can operate, or there could be an automatic review process for AI-made decisions.
Sandboxing the AI’s authority is advisable: for example, letting it vote only on low-stakes proposals at first, or requiring that any AI-initiated action be ratified by humans in a secondary vote.
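A minimal sketch of such a sandbox gate is shown below; the threshold and field names are illustrative, not HoS policy.

```ts
interface PendingVote {
  proposalId: number;
  vote: "VoteApprove" | "VoteReject";
}

const LOW_STAKES_LIMIT = 500n * 10n ** 24n; // 500 NEAR in yocto, illustrative

const ratificationQueue: PendingVote[] = [];

export async function submitAgentVote(
  proposal: { id: number; treasuryImpact: bigint },
  vote: PendingVote["vote"],
  execute: (v: PendingVote) => Promise<void>
): Promise<void> {
  const pending: PendingVote = { proposalId: proposal.id, vote };
  if (proposal.treasuryImpact <= LOW_STAKES_LIMIT) {
    // Low stakes: the agent may act on its own.
    await execute(pending);
    return;
  }
  // High stakes: queue for a human secondary vote before anything executes.
  ratificationQueue.push(pending);
}
```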
Another challenge is bias and data quality. An AI agent’s choices are only as good as the data and objectives it is given.
If the training data has hidden biases or if adversaries feed the AI misleading information, the agent could systematically favor certain outcomes, for example always favoring proposals by a certain group or neglecting minority voices.
Indeed, bias in AI models is a well-documented issue in AI research, and in governance the stakes are high because biased decisions could erode community trust quickly.
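One lightweight way to surface this kind of bias is to replay historical proposals through the agent and compare approval rates across proposer cohorts; a large gap is not proof of bias, but it is a flag worth auditing. A sketch, with illustrative field names:

```ts
interface HistoricalProposal {
  proposerCohort: string; // e.g. "core-team", "new-contributor"
  agentApproved: boolean; // the agent's decision when replaying the proposal
}

// Approval rate per cohort; large disparities warrant a closer audit.
export function approvalRateByCohort(
  history: HistoricalProposal[]
): Map<string, number> {
  const totals = new Map<string, { approved: number; total: number }>();
  for (const p of history) {
    const t = totals.get(p.proposerCohort) ?? { approved: 0, total: 0 };
    t.total += 1;
    if (p.agentApproved) t.approved += 1;
    totals.set(p.proposerCohort, t);
  }
  return new Map(
    [...totals].map(([cohort, t]) => [cohort, t.approved / t.total])
  );
}
```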
Furthermore, there’s the question of responsibility: if an AI agent misbehaves or causes harm (e.g., votes to fund a fraudulent project), how can the community correct course? The legal and ethical frameworks for AI decision-makers are underdeveloped.
In summary, while an AI autonomous agent could increase efficiency and perhaps make governance decisions at machine speed, it requires solving or mitigating these trust, bias, and accountability issues.
An open question is how to mathematically prove or formally verify that an AI agent’s decision policy aligns with a DAO’s charter or values.
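Full formal verification of an LLM-based policy is out of reach today, but property-based testing can at least check that a decision policy never violates hard charter rules. Below is a sketch using the fast-check library, with an illustrative rule-based `decide` function standing in for the real policy.

```ts
import fc from "fast-check";

const BUDGET_CAP = 10_000; // charter-level spending cap, illustrative units

type Decision = "approve" | "reject";

// Stand-in policy; the real agent's policy would be plugged in here.
function decide(p: { amount: number; quorumMet: boolean }): Decision {
  return p.quorumMet && p.amount <= BUDGET_CAP ? "approve" : "reject";
}

// Invariant: the agent never approves a proposal that exceeds the cap or
// lacks quorum, no matter what inputs it sees.
fc.assert(
  fc.property(
    fc.record({
      amount: fc.integer({ min: 0, max: 1_000_000 }),
      quorumMet: fc.boolean(),
    }),
    (p) => {
      const d = decide(p);
      return d !== "approve" || (p.quorumMet && p.amount <= BUDGET_CAP);
    }
  )
);
```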
The Outlook: AGI for Web3 Governance
Looking further ahead, one can imagine the role of Artificial General Intelligence (AGI) in Web3 governance. AGI refers to AI systems that possess human-level (or beyond) cognitive abilities and can generalize across many tasks. If an AGI were to participate in a DAO’s governance, either as a supercharged co-pilot or an autonomous delegate, it could drastically change the paradigm of how decisions are made.
Science fiction scenarios aside, researchers and thought leaders have started contemplating this.
For instance, Trent McConaghy (Ocean Protocol) speculated about DAOs eventually being managed by advanced AI agents that token holders cannot turn off, creating an “unstoppable” organization.
In his thought experiment, if all token holders of a DAO delegated to AI, you’d have a treasury managed entirely by algorithms.
A critical concern with AGI in governance is the loss of the human element. Governance in any community isn’t purely about logic or optimization; it’s also about values, empathy, and context.
An AGI might make decisions that are efficient or logically sound but not aligned with human values or social norms. How do we ensure a powerful AI’s goals remain aligned with the community’s goals over time?
Another point to address is identity and legitimacy. If AGI agents become so advanced that they can pass for human participants in a DAO, it undermines the premise of human-centric governance and raises hard questions about verifying who, or what, is actually voting.
Future Work
AI offers compelling opportunities to enhance Web3 governance, from co-pilots that alleviate the burden on human participants to the provocative idea of fully autonomous governance agents.
The Meta Pool and NEAR House of Stake experience, even at this early experimental stage, exemplifies how these ideas are being tested in practice, providing early evidence of both the benefits (e.g., improved participation and efficiency) and the pitfalls (e.g., over-reliance and trust issues) of integrating AI into governance.
To truly adopt AI in governance scientifically and responsibly, further research and experimentation are needed.
Open questions include:
- How effective are AI co-pilots at improving decision quality over the long term?
- Can we design autonomous agents whose objectives are transparently aligned with a community’s values?
- How do we safeguard against the risks of advanced AI, ensuring that human agency and fairness remain at the core of governance?
Web3 governance itself is an ongoing social experiment. Introducing AI into the mix should be done incrementally, with empirical evaluation at each step.
As we gather more data, for example from NEAR’s HoS AI tools or other DAOs’ AI integrations, we will be better positioned to refine these systems.
I’ll stop here, but I would love to hear diverse perspectives and have broader conversations about AI.
At Meta Pool we are all in on advancing AI in governance, and we are super keen to collaborate with other teams.