Fresh Ideas on AI

Seasoned mediators might imagine that the next generation of advocates and dispute resolution practitioners – most of whom have grown up with the internet and devices at their fingertips – would unabashedly advocate for embedding AI into their dispute resolution practice. We sat down with two third-year law students at the Ohio State University Moritz College of Law to discuss their caution about using AI in mediation. Both anticipate graduating in May 2026; both completed the college’s Mediation Clinic in the fall 2025 semester, where they mediated live cases, and each completed a seminar paper at the intersection of mediation and AI.

Ethan Bowers has focused on election law, criminal law, and alternative dispute resolution while in law school. In January, he and his teammate placed fourth at the ABA’s national negotiation competition. Jon Michael Gaudin is the current Editor-in-Chief of the Ohio State Journal on Dispute Resolution. Our conversation with Jon Michael focuses on challenges that might emerge for mediators when they rely on AI. With Ethan, we discuss how AI could compound challenges for self-represented litigants. To conclude, we briefly discuss AI and the future of legal practice.

Atrophy

As Jon Michael explains in his seminar paper and in our conversation (see the video below), reliance on AI can diminish human skills necessary for a high-quality mediation practice. Even well-intentioned use of AI can quietly and subtly erode the competencies of mediators. Fundamental mediator traits like critical thinking, deep understanding of party narratives, neutrality, and problem-solving are at risk.

Consider a construction mediator who begins shifting their “evaluation” of party positions and perspectives to a generative AI platform. They further ask the AI platform to suggest strategies for generating movement. Perhaps the mediator takes steps to “train” the AI platform and has addressed confidentiality concerns. The mediator uses the platform to prepare for sessions and consults it between sessions, during caucuses, and on breaks, particularly when crunched for time.

Jon Michael points out that this use of AI might begin as well-intentioned – the mediator appears to be trying to leverage AI to improve their own tools for generating movement. However, the mediator becomes increasingly reliant, offloading the cognitive work to the AI platform. He suggests that humans are inclined to trust technology: “you don’t double-check your calculator.” AI platform output should be tested for accuracy, and the underlying assumptions of the dataset and algorithms should be questioned. Those who become reliant on AI without pausing to check output and question assumptions risk diminished core mediation competencies. Indeed, this is a form of cognitive atrophy – skills like critical thinking and problem-solving decline when they go unused.

In his seminar paper, Jon Michael urges mediators to consider an ethical duty of cognitive integrity. Mediators should supplement, not substitute, core cognitive tasks. Much as a mediator questions the assumptions of the parties, mediators should question and test AI’s assumptions and outputs.

What About the Parties, Particularly Pro Se?

Self-represented litigants (SRLs) are extremely common in civil actions, with most choosing to go pro se because of the cost of hiring counsel. SRLs who rely on AI when preparing for litigation and mediation are uniquely susceptible to the risks of generative AI. Ethan highlights key concerns with the use of AI and some of the specific challenges it poses for SRLs. Here we briefly discuss hallucination and sycophancy.

AI can “hallucinate,” conveying information that looks like it could be real but simply is not. Even sophisticated mediators and counsel may not be equipped to spot hallucination – SRLs are further disadvantaged because they often lack access to, or knowledge of, the resources needed to sift what is real from what is not. Indeed, even attorneys regularly rely on hallucinated case law (and courts often admonish them for it). Compounding the challenge of hallucination, Ethan further points out that AI’s outputs often rely on data that is biased against minority or disadvantaged populations.

Sycophancy occurs when AI reinforces a viewpoint regardless of truth or accuracy. AI has been shown to reinforce the perspective of the individual with whom it is communicating. In the context of mediation, AI may encourage parties to take unreasonable or unsubstantiated positions because it is trained to curry favor with the human it is working with, leading to (potentially unfounded) positional entrenchment for SRLs.

Just Because It Looks Good . . .

Both Ethan and Jon Michael express concern that AI’s work product looks good but may not be good. Because the “looks good” work product is easy and efficient, they fear many will increasingly rely on AI to make complex decisions without considering the underlying challenges with AI algorithms, datasets, and output. They are particularly concerned about the mediator who is pressed for time or the young associate who is trying to keep up in a fast-paced environment. Ethan and Jon Michael hope mediators and the dispute resolution community will maintain a healthy skepticism when they work with and rely on AI.