The use of artificial intelligence (AI) in mediation raises a host of ethical questions. What do we mean by AI? Should a mediator use AI at all, and does the answer depend on which kind of AI we mean? How should a mediator use it? How transparent should a mediator be with the parties when using AI? How does a mediator identify, test, or correct AI’s biases? Amy and Bill spoke with Tina Patterson, an international arbitrator and mediator with more than two decades of dispute resolution experience specializing in FinTech, technology, intellectual property, and domain disputes. Tina shared four ethical challenges and four practical tips for mediators.
Challenge #1 – Confidentiality
Whenever mediators use AI, they should carefully consider whether the AI tool will maintain confidentiality with respect to the information shared.
Mediators should avoid sharing confidential party information, such as mediation briefs and party proposals, with AI platforms unless the platform is a closed environment and the mediator understands the tool’s privacy limitations. Most public AI platforms use the data they receive for training, whereas closed systems do not.
Consider where the platform’s servers are located. Perhaps U.S. servers are secure, but what about servers located in foreign territories? Do mediators know the data protection laws of the territories where those servers sit? What if the government in a territory claims a right to the information stored on the server?
Challenge #2 – Informed Consent
“When in doubt, spell it out.” Tina urges mediators to be transparent about whether and how they are using AI. She urges mediators to disclose their use of AI if it influences the mediator’s process or affects any “party-facing document.” Disclosure is critical when mediators use ChatGPT, Gemini, or Claude for analysis or decision-making, but it need not extend to scheduling tools, spell check, or Grammarly, because those products do not analyze party information, positions, or proposals. Patterson’s perspective is centered on trust: “Better to over-communicate than lose trust.” We agree that disclosure is essential, and the ABA Opinion on the use of GenAI says the same.
Challenge #3 – Bias Amplification
Patterson points out that AI leverages existing data. If a data set is riddled with bias, or if court judgments are consistently lower for women and minority parties, the algorithm will compound that bias. Language-processing AI may interpret assertiveness differently by gender, race, or ethnicity, and Patterson notes that risk-assessment tools may be biased against pro se parties. Because the people who build these systems have their own biases, those biases can be reflected in the code underlying the AI platform; algorithms are value-laden. Further, how can a mediator account for the power imbalance between parties who have access to AI assessment tools and those who do not? The question only gets trickier as AI becomes more capable and plays a larger role in the practice of law.
To combat some of AI’s potential biases, Patterson urges mediators to test AI’s assumptions – just like mediators test party assumptions throughout the process.
Challenge #4 – Mediator Judgment
“Over-reliance on AI erodes what makes mediation work, and it’s that human connection.” Patterson is concerned that the next generation of mediators may never learn to read a room and will instead rely exclusively on AI for decision-making. She points out that human connection is core to mediation practice and that we mediators should develop foundational in-person mediation skills before embedding AI into our toolbox. We agree!
Tip #1 – Create Your Framework Now
Determine how you will use AI in your practice and test it out. Also consider your personal limits: some mediators will not use AI at all, others will use it for scheduling and research, and still others will feed it confidential information and urge parties to consider AI-suggested solutions.
Tip #2 – Disclosure Is the Default
Tell parties how you are using AI, and let them know when you are not using it in your practice.
Tip #3 – Keep Up with Technology
“What is acceptable today may not be tomorrow.” Patterson urges mediators to join discussion groups and bar associations to stay up to date with advancing technology.
Tip #4 – AI Is Your Assistant, NOT Your Co-mediator
Patterson explains that mediators are responsible for the decisions they make – not AI. Arguing “the AI told me to do it” is not a defense.






