
Sarah O’Connor (“Can a machine be just?”, Opinion, September 2) characterises AI-powered dispute resolution as an “embryonic idea for the future” in the UK’s public sector. However, such systems are already being deployed in the US to facilitate dispute resolution and more effective government regulation.
In Michigan, the state’s supreme court recently struck an agreement with Learned Hand, a platform that assists judicial clerks with legal research and drafting.
In private arbitration, a new start-up called Arbitrus.ai now offers an AI arbitrator that parties can specify as their chosen arbitration method within commercial contracts.
Within the US government, regulatory bodies are harnessing AI to augment human capacity. The US Patent Office runs AI-assisted “prior art” searches to establish whether an invention is already publicly known or available, in whole or in part, before the effective filing date of a patent.
The US Food and Drug Administration’s Elsa platform, launched in June, has already cut multi-day review tasks to minutes.
O’Connor also asks whether “computer says no” could ever feel fair, suggesting people won’t accept AI resolution. But empirical evidence suggests otherwise.
A recent field experiment involving 70,000 job applicants, conducted by researchers from the University of Chicago and Erasmus University Rotterdam, found that, when given the choice, 78 per cent opted for AI-led interviews over human ones, and those AI interviews led to more job offers and better retention. Research consistently shows that users value consistency and speed in routine disputes; perceived fairness increases dramatically when AI systems provide explanations, maintain human oversight and allow appeals.
The choice isn’t whether to use AI in dispute resolution, or government administration more broadly, but how to ensure AI serves justice while managing an increasingly automated world.

