Blog
Participation and Power in AI Governance: Five Takeaways from PAIRS 2026
Posted on 9th of March 2026 by Adam Zable
The second edition of the Participatory AI Research and Practice Symposium (PAIRS) brought together researchers, policymakers, technologists, and civil society groups in New Delhi and online to discuss how the public can take part in shaping AI systems.
Across nearly 60 presentations and 10 hours of online sessions, speakers shared research and real-world examples of participatory AI projects from around the world. While the topics varied, five clear themes emerged from the conversations The GovLab followed:
Participation lacks pathways to power;
Participation must move from civic input to infrastructure;
Legitimacy depends on procedural justice;
Inclusion alone does not guarantee good outcomes for low-resourced language AI; and
Resistance is a form of civic participation.
Below we explain each of these themes and highlight some of the presentations that shaped them.
1. Participation Lacks Pathways to Power
One message came up again and again during PAIRS 2026: participation is widely discussed in AI governance, but it rarely influences the decisions that matter most.
There are many citizen assemblies, community forums, co-design workshops, and other participatory experiments. But few of these efforts are connected to institutions that actually shape AI systems. Most lack the authority to influence agenda-setting, standards, procurement decisions, or deployment rules.
Omer Bilgin of the University of Oxford described this as a “broken pipeline.” Promising pilots rarely lead to more than fragmented participatory initiatives. Participation tends to affect later stages of AI development, such as reviewing or refining systems, rather than earlier decisions like defining the problem or choosing the data used to build models. In practice, many projects end up validating work that has already been done.
Pierre Noro suggested that this pattern is partly shaped by political and economic pressures. Governments increasingly treat AI as a strategic asset tied to economic growth and geopolitical competition. In this context, public engagement often becomes secondary to rapid development and deployment. The scope of debate narrows to implementation details, while foundational questions about whether to build and under what conditions fade from view.
Timing also matters. Stephanie Camarena from Source Transitions noted that participation has the most influence before key decisions are made. Once policies, procurement plans, or technical standards are in place, it becomes much harder to change course.
Her work on the AuDIITA model (the Australian branch of the IEEE Dignity, Inclusion, Identity, Trust and Agency initiative) illustrates this point. The model brings community workshops into the early stages of AI design and connects the results directly to policymakers and standards bodies. This link to formal decision-making processes helps ensure that community concerns do not remain just discussion points but feed into governance processes.
Another challenge is translation. As Denisa Reshef Kera from Bar-Ilan University pointed out, community discussions often produce values, concerns, and contextual insights. Institutions, however, usually require clearly defined rules, responsibilities, and enforcement mechanisms. This mismatch can make it difficult for participatory outputs to translate into policies or regulations.
2. Participation Must Go from Civic Input to Infrastructure
If participation routinely fails to influence decisions, what would make it work better?
Many speakers at PAIRS argued that participation cannot be treated as a one-time event like a workshop, consultation, or public meeting. Instead, it needs to be built into the systems and institutions that develop and govern AI.
In other words, participation should not only gather opinions. It should shape how AI systems are designed, evaluated, and managed over time.
One example is the Community-in-the-Loop (CITL) framework, presented by Ye Ha Kim and Oscar Yeung from UCL. CITL treats participation as an ongoing governance structure as opposed to a single consultation. It proposes three ways to embed communities in AI systems:
Participatory design processes that involve communities when defining problems and building models;
Community-led data stewardship, such as data trusts, cooperatives, or data commons; and
Public auditing tools that make AI systems easier for communities to understand and evaluate.
These mechanisms allow communities to influence AI projects throughout their lifecycle, from design to deployment to ongoing evaluation.
Rashid Mushkani from the University of Montreal/Mila introduced a concept called a Participation Ledger. One challenge in participatory AI is that ideas gathered during workshops or consultations often disappear as systems evolve. Models are updated and retrained, and vendors change.
The participation ledger aims to solve this by recording how community input affects the system. It links contributions—such as prompts, annotations, or incident reports—to specific system changes. If later versions of the model ignore these commitments, the change becomes visible. In this way, participation becomes part of the system’s history rather than something that only exists in meeting notes.
Other work focused on embedding participation in evaluation and oversight.
Abhishikta Mallick and Avanti Durani from the proposed a participatory auditing framework for AI tools used in legal services. Their model introduces checkpoints throughout the lifecycle of an AI system where experts and community stakeholders can review the system.
Similarly, Matthew Kennedy of the Oxford Internet Institute suggested using public red-teaming exercises as a form of participatory governance. In this approach, public institutions, developers, and independent testers work together to stress-test AI systems. This can help identify harms or risks that internal testing may miss.
Across these examples, a common message emerged: participation is most effective when it is tied to the places where decisions are made.
The GovLab has explored similar ideas in its own work. One example is AI Localism, which argues that participation is most meaningful when decisions are made closer to the communities affected by AI systems. Local and regional institutions often have clearer authority over deployment decisions, making it easier for community input to influence outcomes.
3. Legitimacy Depends on Procedural Justice
Even if participation is built into AI systems and institutions, another question remains: why should people trust these systems in the first place? What makes AI development, use, and governance legitimate in the eyes of the communities it affects?
The Community-in-the-Loop framework addresses this question through the idea of a social contract. Drawing on political theory, the framework treats participation as a shared agreement between communities, developers, and institutions about how AI systems should operate. Under this model, communities are not just consulted. They help define the rules that guide how AI systems are designed, used, and evaluated.
These agreements focus on three things:
Procedural fairness: who gets to participate and how decisions are made
Substantive fairness: whether outcomes align with shared values like justice and equity
Recognition: whether affected communities are acknowledged as legitimate stakeholders
These commitments can then shape design decisions, data governance practices, and auditing processes throughout the lifecycle of an AI system.
A related idea is the concept of a social license. While both approaches aim to address power imbalances, they operate differently. The idea of social license comes from industries such as mining and energy, where (at least in theory) companies must earn ongoing acceptance from the communities affected by their activities. In AI governance, The GovLab has adapted this idea to focus on data reuse. In this framework, communities help define the conditions under which data about them can be collected, shared, and (re)used. The process follows three steps:
Communities express their expectations and concerns in structured participatory processes.
These expectations are documented in enforceable ways, such as through data sharing agreements.
The conditions are built into oversight and evaluation mechanisms to ensure compliance over time.
Where the CITL framework asks why AI governance should be legitimate, the social license approach focuses on how communities can enforce their expectations in practice.
Research presented at PAIRS also showed how these ideas play out in real life.
Aleks Berditchevskaia from Nesta presented the AI Social Readiness Assessment, which examined how people in the United Kingdom view AI in public services. The study combined national polling with small group discussions about specific AI tools.
The results showed low baseline trust. Only 23% of respondents said they trusted public institutions to use AI responsibly. However, attitudes were more nuanced when people looked at concrete use cases. Many participants recognized potential benefits and sometimes felt that those benefits could outweigh risks.
Still, concerns remained. Participants raised issues about environmental impacts, manipulation, overreliance on automated systems, unequal outcomes, and institutional accountability, even after the researchers explained safeguards such as privacy protections and bias mitigation.
What participants emphasized instead were the need for human oversight, transparent records of how systems operate, long-term monitoring, meaningful consent in sensitive situations, and public engagement before deployment decisions are finalized. When asked to weigh cost savings against public support, respondents prioritized public support by a substantial margin.
The implication is that technical improvements alone do not create legitimacy. Even well-designed participatory systems must be experienced as fair, transparent, and accountable. Without that, participation may exist in theory but fail to build public trust.
4. Inclusion is Not Enough to Ensure Good Outcomes for Low-Resourced Language AI
Several discussions at PAIRS focused on Indigenous and low-resourced languages. Many current initiatives aim to include these languages in AI systems by expanding datasets or improving models. But some speakers argued that inclusion alone does not guarantee good outcomes for the communities involved.
Matthew Kennedy examined the history behind many efforts to “save” endangered languages. He noted that similar claims were made during earlier linguistic projects under colonial rule. In both cases, outside experts framed communities as vulnerable, assumed technological modernization was inevitable, and positioned technical intervention as a form of rescue.
Today this pattern can appear in AI language projects. Efforts may involve better benchmarks, more participatory data collection, or improved cultural context in models. While these changes can be valuable, decisions about what counts as correct language use, what goals the systems pursue, and how results are used or commercialized often remain in the hands of outside experts.
Kennedy argued that the key questions are not only technical but also deeply political. What exactly is being modeled? Who determines “correct” language use? Would these projects look the same if they could not be monetized? These questions shift the focus from inclusion to authority.
This issue also affects proposals for data commons that support Indigenous or low-resourced languages. Some presentations suggested that locally governed data commons could reduce dependence on large technology companies and give communities more control over how their language data is used. But Kennedy’s critique highlights an important risk. If communities do not control the goals of modeling or how the results are used, a data commons could normalize the continued funneling of linguistic data into external AI development pipelines under the flag of preservation.
Low-resourced languages also create technical challenges. Large language models often perform less reliably in underrepresented languages. Safety guardrails may fail more often, and switching between languages can allow users to bypass moderation systems. These weaknesses increase the risk of harm in some contexts, including misinformation or security risks.
Expanding language coverage can therefore reduce exclusion while also creating new problems. The outcome depends on how systems are evaluated, governed, and deployed.
For this reason, a responsible language data commons must be more than a repository of digitized text. It needs governance structures that allow communities to influence how their language is modeled, validated, and used.
When communities have real authority, data commons can strengthen their position in the AI ecosystem. Without that authority, they risk reproducing older patterns of extraction.
5. Resistance Is a Form of Civic Participation
Another theme at PAIRS challenged the common assumption that resistance to AI is simply opposition to innovation.
Renée Sieber of McGill University argued that resistance is often dismissed as fear of technology or lack of understanding. In some cases, efforts to increase “AI literacy” or build public trust are designed less to address concerns and more to smooth the path for deployment.
Sieber and coauthor Roberta Du instead treated resistance as a form of civic participation. Their research maps the many ways people push back against AI systems. These strategies include protests, petitions, lawsuits, online activism, boycotts, labor organizing, citizen science projects, and even satire and art. People also build tools of their own to evade surveillance or challenge automated systems.
These actions target many kinds of AI systems, from military technologies and generative models to workplace monitoring tools and automated decision systems. The concerns behind them vary as well, ranging from privacy and labor rights to environmental impacts and democratic accountability.
Seen this way, citizens are not passive recipients of AI systems. They are political actors who can challenge how those systems are designed and deployed.
This broader understanding of resistance appeared in other presentations as well. Sanjay Sharma and Siddharth de Souza from the University of Warwick argued that participatory AI is often framed as inclusion within existing technological systems. This framing can limit debate by treating participation as consultation rather than as a challenge to existing power structures. Drawing on decolonial and data justice scholarship, they described participation as an ongoing struggle over knowledge, authority, and technological futures. Activism, litigation, and community-led data governance can therefore act as counterbalances to dominant technology development models.
Maria Lungu from the University of Virginia presented a concrete example in her research on predictive policing in Kenya. Civil society groups, journalists, and technologists have pushed back against surveillance systems such as Nairobi’s “Safe City” program. Their efforts focus on demanding transparency, accountability, and a voice in procurement and deployment decisions.
These cases show that resistance is not necessarily a sign that participation or social license have failed. In many cases, it can be one of the only ways communities can influence powerful institutions by imposing costs and reshaping incentives. Meanwhile, ignoring resistance means overlooking how legitimacy is actually negotiated in practice.
Conclusion:
PAIRS 2026 made clear that participation succeeds or fails at the points where it meets power: in agenda-setting, in institutional settings such as standards bodies and procurement systems, and in the technical points where AI systems are evaluated, updated, and deployed. Yet much of the field still treats progress as a matter of expanding inclusion: more workshops, more stakeholders, more voices at the table. Inclusion alone does not create impact. Without authority, clear pathways into decision-making, and institutional embedding, participatory processes risk remaining consultations that validate decisions already made.
For civil society to meaningfully shape how AI is built and governed, participation must be connected to structures that carry community priorities into real decisions over time. That means engaging communities before policies and system designs are locked in; linking deliberation to standards, procurement, and regulatory processes; embedding participation into evaluation, auditing, and red-teaming practices; designing data governance models that address authority and commercialization; grounding deployment in fair and transparent decision-making; and recognizing resistance itself as a form of civic participation. Approaches such as the Govlab’s social license for data reuse point in this direction by translating community expectations into enforceable conditions for how data and AI systems can be developed and used.