Governing Artificial Intelligence — Groundbreaking New Book on Ethical AI

Artificial intelligence (AI) now shapes many aspects of dispute resolution, from the systems that schedule mediations to the tools that help parties upload documents, communicate asynchronously, or navigate negotiation platforms. Mediators also use AI regularly. As these technologies become embedded in the daily work of mediators, they raise pressing questions about fairness, transparency, autonomy, and the safeguarding of confidential information. These concerns come sharply into focus in Governing Artificial Intelligence (Brill | Nijhoff, 2025, https://brill.com/abstract/title/72670), a timely and deeply considered book by Leah Wing, Chris Draper, Scott Cooper, and Daniel Rainey. The volume – the third in a series focused on issues in online dispute resolution – offers a sophisticated yet accessible exploration of how AI should be understood, controlled, and guided within systems that bear directly on justice and conflict engagement.

Why This Book Matters for Mediators, Technologists, and Policymakers

For mediators and scholars alike, the book’s most practical contribution is its detailed examination of international online dispute resolution (ODR) standards, particularly the 2022 ODR Standards issued jointly by the International Council for Online Dispute Resolution (ICODR) and the National Center for Technology and Dispute Resolution (NCTDR). These standards have already become foundational to ethical digital practice, and they offer a model for thinking about AI governance more broadly. The standards are publicly available at https://icodr.org/Standards/.

At the heart of Governing Artificial Intelligence is an argument that resonates strongly with the dispute resolution community: no single mechanism can effectively govern AI. Legislation provides essential boundaries and democratic legitimacy, yet statutory frameworks move slowly, often lack technical depth, and are restricted to particular legal jurisdictions. Professional standards offer rich, implementable requirements, but cannot stand alone without legal scaffolding. International technical standards bring precision and cross-border consistency, although they cannot be enforced across national boundaries. The authors advocate for a braided governance model in which regulation, international standards, and professional ethics operate in concert rather than in isolation. The discussion of the ODR Standards illustrates a rare example of a framework that functions as both an international and a professional standard.

This perspective aligns naturally with mediation philosophy. Mediators understand complex systems not as binary structures, but as dynamic webs of relationships, interests, expectations, and procedural commitments. Good mediation practice recognizes that ethical action is rarely dictated by one rule or requirement; it emerges instead from the thoughtful integration of principles, context, and professional judgment. The book invites mediators to extend that mindset into the digital and AI-infused environments in which they increasingly work.

One of the book’s early contributions is its careful explanation of what we mean when we talk about AI. Rather than treating AI as a monolithic entity, the authors describe it as a spectrum of technologies—ranging from automated sorting and document recognition tools to machine‑learning systems capable of predicting behaviors or generating text. These systems, while powerful, are not neutral. They are shaped by data, by design choices, and by the institutional cultures into which they are integrated. Mediators who understand this socio-technical framing are better prepared to recognize where AI may support procedural fairness and where it might inadvertently undermine party voice or reproduce existing inequities.

The book also engages directly with how the AI industry conceptualizes “safety.” While companies often focus on catastrophic or headline-grabbing harms, mediators are trained to attend to subtler disruptions—misunderstandings, implicit bias, diminished autonomy, and uneven access to process. These everyday harms can be as significant as large-scale failures.

One of the most compelling aspects of the volume is its treatment of international and professional standards, which the authors position as indispensable complements to regulation. Standards translate broad principles—fairness, accessibility, accountability—into operational requirements. They can articulate technical requirements (such as auditability or explainability), procedural safeguards (like human-in-the-loop review), and design expectations that help ensure technologies serve the needs of real users. For mediators, whose work depends on trust, neutrality, and informed participation, standards offer a means of evaluating whether digital platforms genuinely support these values or merely claim to.

The book’s case study on ODR standards is particularly salient because ODR has become a common working environment for mediators, especially since the pandemic. Many mediators now conduct sessions online, use digital case‑management systems, or work within portals developed by courts or agencies. AI may already be embedded in these platforms in subtle ways—organizing data, generating summaries, or suggesting negotiation options. Understanding the standards that shape these tools is essential for maintaining ethical practice.

The NCTDR-ICODR ODR Standards as a Model for Ethical AI-Supported Dispute Resolution

The 2022 NCTDR-ICODR ODR Standards provide exactly the kind of ethical requirements mediators and developers need. Developed collaboratively and grounded in earlier ethical principles, they articulate nine interdependent commitments essential to ethical dispute resolution in digital environments: accessible; accountable; competent; confidential; equality; fairness and impartiality; legal; secure; and transparent.

For mediators, accessibility means far more than simply offering an online option. It involves supporting participants with varying technological abilities, linguistic backgrounds, and bandwidth constraints. Accessibility in this sense aligns with the mediator’s commitment to procedural justice, ensuring that every participant can meaningfully engage with the process.

The emphasis on accountability is especially relevant in an era of AI-augmented platforms. Accountability requires that systems be auditable, that the role of AI be transparent, and that human oversight remain central. For practitioners, this raises important questions: How is the platform generating its recommendations? What data are being used? Who can review, correct, or override automated suggestions? These questions mirror the concerns mediators already address when managing expert input, procedural design, and power dynamics.

Competence within the NCTDR-ICODR ODR framework extends beyond substantive knowledge of conflict processes. It calls for technological fluency and cultural and linguistic sensitivity—recognizing that digital environments can magnify misunderstandings or amplify disparities if practitioners are not prepared to navigate them.

Confidentiality, long a pillar of mediation ethics, takes on more complex dimensions online. The standards require clear, transparent policies on data storage, destruction, access, and breach response. Mediators who understand these requirements can better protect parties and ensure that the tools they rely on uphold the standards the profession demands. Within the NCTDR-ICODR ODR framework, Security supplements Confidentiality, requiring ODR platforms to prevent unauthorized access to data, to disclose data breaches, and to prevent their recurrence.

Equality, coupled with Fairness and Impartiality, captures a principle that sits at the core of ethical dispute resolution: no participant should be disadvantaged by the process itself. Digital environments risk reproducing or even exacerbating offline inequalities. The NCTDR-ICODR ODR Standards remind practitioners to remain vigilant about bias, power disparities, conflicts of interest, and structural disadvantages that may manifest through technology. Further, no participant in a process should have a technological or informational disadvantage because of their use of ODR.

A Legal ODR process conforms to, upholds, and shares with users the applicable rules and regulations. Transparency asks ODR providers to commit to disclosing how process outcomes are enforced, to clarifying the risks, costs, and benefits of participation, and to sharing metrics and data from platform assessments and audits. Further, the NCTDR-ICODR ODR Standards require clear disclosure of whether and how ODR systems and platforms use and share data with AI systems, particularly when generating options or making decisions.

A Governance Ecosystem Already Taking Shape

The NCTDR’s standards hub documents that the International Organization for Standardization (ISO), whose membership comprises the national standards bodies of 175 countries, issued ISO 32122 (2025), an international standard governing ODR in e-commerce, which adopted the 2022 NCTDR-ICODR ODR Standards in full. This represents a rare and encouraging example of professional dispute resolution standards informing formal international technical requirements. It also illustrates the braided governance model the book champions: professional ethics informing international standards, which in turn guide courts, platforms, and governments.

For mediators and policymakers alike, the message is clear: AI will increasingly shape the environments in which conflict is resolved, and understanding its governance is now part of maintaining professional competence. Mediators do not need to become AI engineers, but they do need to understand how AI tools operate within their platforms, how to explain these tools to parties, and how to identify when AI may be influencing the fairness or clarity of the process.

For scholars, the book and standards offer a fertile direction for inquiry. They invite deeper exploration into the ways formal law, technical standards, and professional ethics converge across different domains. They also highlight the potential for fields like mediation and ODR to shape AI governance more broadly, not just respond to it.

Here is our interview with two of the authors of the book and leaders in ICODR: Leah Wing and Daniel Rainey.

Also, the Cyberweek 2025 presentation provides an engaging window into the authors’ thinking and expands on their vision for an AI governance ecosystem grounded in democratic values and ethical practice: Cyberweek 2025 Panel Introducing the Book:
https://www.youtube.com/watch?v=8mpPGcwcRDM.

Conclusion

Governing Artificial Intelligence invites scholars and mediators to see AI governance not as an abstract, distant policy challenge but as a set of practical, interconnected considerations that already influence daily practice. Coupled with the 2022 NCTDR-ICODR Standards’ detailed requirements for ethical digital dispute resolution, the book offers a powerful framework for shaping fair, inclusive, and transparent AI-supported systems. For a profession grounded in human dignity and procedural justice, these resources provide both reassurance and direction—a reminder that mediation’s core values remain essential guides as our tools evolve.
