Agents Who Argue: Advancing the Use of LLM Multi-Agent Systems for Public Decision-Making
The Centre for AI in Government (CAIG), with the Institute for Data and AI (IDAI) at the University of Birmingham, hosted the Agents Who Argue workshop.
This one-day event, held on Tuesday 1 July 2025, brought together researchers, technologists, and public sector practitioners to explore how Large Language Model (LLM) Multi-Agent Systems (MAS) could support deliberation, negotiation, and complex decision-making in public governance.
LLM MAS is a technology which combines large language models, such as those powering today’s most advanced chatbots, with systems made up of multiple interacting agents. Each agent can represent a different viewpoint, stakeholder, or goal, and is capable of reasoning, negotiating, or debating with others. When used together, these systems can simulate complex social dynamics, offering a new way to explore decision-making processes, public deliberation, and policy outcomes before they happen in the real world.
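To make the idea more concrete, the sketch below shows, in simplified Python, how such a system might be structured: each agent carries a distinct viewpoint, takes turns contributing to a shared debate transcript, and would in practice call out to an LLM. The `query_llm` helper, the agent roles, and the example topic are illustrative assumptions, not part of any system built or discussed at the workshop.

```python
# Minimal sketch of a turn-based multi-agent debate.
# `query_llm` is a hypothetical stand-in for a real LLM API call.

from dataclasses import dataclass, field


def query_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"[model response to: {prompt[:60]}...]"


@dataclass
class Agent:
    name: str
    viewpoint: str  # the stakeholder perspective this agent argues from
    transcript: list[str] = field(default_factory=list)

    def respond(self, topic: str, history: list[str]) -> str:
        # Build a prompt from the agent's viewpoint and the debate so far,
        # then record and return the model's reply.
        prompt = (
            f"You are {self.name}, arguing from this viewpoint: {self.viewpoint}.\n"
            f"Topic under deliberation: {topic}\n"
            f"Debate so far:\n" + "\n".join(history) + "\n"
            "Give your next argument in one or two sentences."
        )
        reply = query_llm(prompt)
        self.transcript.append(reply)
        return reply


def run_debate(topic: str, agents: list[Agent], rounds: int = 3) -> list[str]:
    """Let each agent speak in turn for a fixed number of rounds."""
    history: list[str] = []
    for _ in range(rounds):
        for agent in agents:
            history.append(f"{agent.name}: {agent.respond(topic, history)}")
    return history


if __name__ == "__main__":
    agents = [
        Agent("Resident", "affordable housing should take priority over green-belt protection"),
        Agent("Planner", "long-term environmental constraints must shape any development"),
    ]
    for line in run_debate("Where should the council build 500 new homes?", agents):
        print(line)
```

Even in this stripped-down form, the structure illustrates the core idea: the interesting behaviour comes not from any single agent but from the interaction between perspectives accumulated in the shared transcript.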
The workshop was specifically designed to unite different academic disciplines and stakeholder communities around this emerging area of research. LLM MAS is an inherently interdisciplinary field that draws on computer science, social science, public administration, law, and ethics. Developing this method in isolation risks producing ungrounded or unworkable tools. The workshop aimed to bridge those gaps by fostering a space for shared language, experimentation, and early-stage collaboration.
The event took place at Elm House on the University of Birmingham campus and was expertly facilitated by Becky Evans and Gill Bates of Treehouse. Despite the summer heat, the energy in the room remained high throughout the day as participants worked through a carefully designed process of reflection, ideation, and collaborative prototyping.
Participants began with a series of discussions exploring where LLM MAS might offer value, particularly in public sector contexts such as service delivery, long-term planning, and justice. Teams then developed and refined example use cases, asking not just what could be built, but what should be built, and under what conditions.
The resulting ideas included AI companions for public service workers, simulation tools for anticipating future social scenarios, models for mapping stakeholder dynamics in political systems, and frameworks for assessing when AI reasoning is safe and appropriate in legal decision-making. These concepts were not just imaginative but rooted in the real-world complexities and constraints that any applied AI system must navigate.
Throughout the day, a consistent theme emerged: meaningful progress in this space will require collaboration across technical and domain expertise. LLM MAS must be stress-tested in context, with deliberate attention to how they interact with social norms, institutional priorities, and human judgment.
The Centre for AI in Government is now exploring opportunities to carry this momentum forward and support future collaborations between participants. The ideas generated at the workshop provide a strong foundation for further research and development, and a compelling case for continued interdisciplinary work in this space.
To stay up to date on CAIG’s work, future events, and funding opportunities, follow us on LinkedIn or visit our website.