Primitive human chatbots: How growth in technology promises to push lawyers out of business


The hype cycle for chatbots – software that can generate convincing strings of words from a simple prompt – is in full swing. Few professions are more panicked than the law, whose practitioners have been investing in tools to generate and process legal documents for years.

After all, you might joke, what are lawyers but primitive human chatbots, generating convincing strings of words from simple prompts?

For America’s state and local courts, this joke is about to get a lot less funny, fast. Debt collection agencies are already flooding courts and ambushing ordinary people with thousands of low-quality, small-dollar cases.

Courts are woefully unprepared for a future where anyone with a chatbot can become a high-volume filer, or where ordinary people might rely on chatbots for desperately needed legal advice.

When you imagine a court, you might picture two opposing lawyers arguing before a judge, and perhaps a jury. That picture is mostly an illusion. Americans have the right to an attorney only when they’re accused of a crime – for everything else, you’re on your own.

As a result, the vast majority of civil cases in state and local courts have at least one party who does not have a lawyer, often because they have no other option. And because court processes are designed for lawyers, every case with a self-represented litigant requires more resources from courts, assuming the person without a lawyer shows up at all.

Add enough cases like this to a court’s docket, and the results are ugly. In the aftermath of the 2008 financial crisis, thousands of foreclosure cases hit court dockets all at once. Many of the cases were rife with defects: false affidavits, bad notarisations, backdated paperwork, inadequate documentation and so on. But foreclosures were pushed through anyway, and people lost their homes.

This wasn’t a one-off. It’s a warning of what happens when the world changes and courts don’t adapt. To see that future for robot lawyers, take today’s high-volume filers: debt collection agencies. Small-dollar ($5,000 or less) debt cases, filed en masse by collection agencies, increasingly dominate local court dockets.

While nationwide data is hard to find (more on that later), in 2013, the Pew Charitable Trusts found that small-dollar debt cases made up a quarter of all civil (non-criminal) cases filed in the United States. In 1993, it was just over 10 percent. And cases are on the rise, in red and blue states.

The goal of debt collection cases is simple: Turn hard-to-collect debt into easy-to-collect wage garnishments. In most states, when someone loses a debt case, a court can order their employer to redirect their wages toward a creditor instead. The easiest way for that to happen?

A default, when the defendant doesn’t show up. The majority of debt cases end in default: The defendant chooses not to appear, is confused about what they need to do, or, just as often, never receives notice of the case at all.

“Sewer service,” where plaintiffs deliberately avoid notifying defendants of a legal case (for example, by sending a case to an old address), has been a festering problem in debt and eviction cases for decades, and continues to this day.

In some cases, people find out they’ve been sued only after noticing that their paycheque has been garnished.

When a case does default, many courts will simply grant whatever judgment the plaintiff has requested, without checking whether the plaintiff has provided adequate (or any) documentation that it owns the debt, that the defendant still owes it, or that the defendant was properly notified of the case.

Sometimes, even the math is wrong: One study of Utah’s courts found that 9.3 per cent of debt cases miscalculated the interest plaintiffs were entitled to after a judgment. In other words: garbage in, garnishments out.
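To make the arithmetic concrete: post-judgment interest is typically simple interest accruing on the judgment amount, though rates and rules vary by state. A minimal sketch, using a purely illustrative 10 per cent annual rate:

```python
# Hedged sketch of post-judgment simple interest. The 10% annual
# rate below is illustrative only; actual rates are set by state law.

def post_judgment_interest(principal: float, annual_rate: float, days: int) -> float:
    """Simple interest accrued on a judgment principal over `days` days."""
    return round(principal * annual_rate * days / 365, 2)

# A $3,000 judgment at an illustrative 10% annual rate, 90 days later:
print(post_judgment_interest(3000.00, 0.10, 90))  # → 73.97
```

The formula is one line; the Utah finding suggests that nearly one in ten filings still got it wrong.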

Defenders of courts might argue that courts are built on the assumption that defendants will show up, and that courts simply don’t have the time or the resources to check every filing when defendants default. But the end result is a toxic recipe – defective cases, inadequate service, overworked courts – that is undeniably lucrative, even as it corrodes people’s faith in the legal system.

Think about this state of affairs for too long, and nearly every application of large language models in courts becomes a volume problem that courts aren’t equipped to handle.

Right now, ChatGPT can generate a half-decent eviction letter, or debt collection demand, which might be all someone needs to force a default. Why should a plaintiff care if a large language model generates a defective filing, if courts won’t check and defendants don’t show?

From there, it’s easy to see how large language models can help the powerful use the legal system as a cudgel. Today, small claims debt cases. Tomorrow, aggressive and deceptive eviction tactics from corporate landlords.

The next day, crowdsourced legal harassment of support networks for women who seek abortions, egged on by state bounty laws.

But take the optimistic scenario for a moment, where ordinary people who can’t find help from lawyers get help from chatbots instead. Every day, in every state, courts are visited by people who can’t access or afford lawyers, who don’t feel that the legal system is built for them, who feel their problems are intractable and their rights unobtainable.

Is it any wonder that they’re ready to turn to “robot lawyers,” even if the outputs have mistakes or user data isn’t kept confidential? The reality is that people will use tools like ChatGPT for legal help because they can’t get help anywhere else.

Maybe it works, and chatbots help people feel more empowered and confident about coming to court. Maybe the right tool, deployed the right way, helps people without lawyers overcome all of the procedural hurdles and avoid all of the potential pitfalls that arise when filing and defending court cases.

Maybe high-volume small claims or arbitration filings become a community organising tactic, a distributed alternative to class actions. And maybe, just maybe, large language models can be successfully deployed to help people defend themselves from predatory court cases.

But even if all those maybes come to pass, here’s the dirty secret about those debt case defaults: If each and every case were vigorously defended, not only would more defendants win, but courts everywhere would crack under the workload. Courts are incentivised to maintain a system that hurts defendants because they’re unable to manage the alternative. The real risk from AI in law isn’t putting lawyers out of work; it’s overloading courts with work, and sticking lawyerless defendants with the bill.

Preparing for the future doesn’t require expensive investments in cutting-edge technology. Even basic changes in how courts operate can have a big impact.

To head off defective cases, courts should incorporate design friction into high-volume filing processes. State and local courts overwhelmingly rely on basic PDFs, which put the burden of finding and correcting errors on overworked court staff.

Meanwhile, nearly every web form or API on the internet includes simple validation checks – you simply can’t submit a form that is incomplete or riddled with errors. Prompting high-volume filers to file cases as structured data instead of document dumps could head off the most egregiously defective and incomplete filings before they ever reach a court docket.
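As a minimal sketch of what such a check could look like – the field names here are hypothetical, not any real court’s e-filing schema – a structured intake could reject obviously incomplete filings automatically:

```python
# Hypothetical completeness check for a structured court filing.
# Field names are illustrative, not any real court's e-filing schema.

REQUIRED_FIELDS = [
    "defendant_name",      # who is being sued
    "service_address",     # where notice will be served
    "amount_claimed",      # the dollar amount sought
    "debt_ownership_doc",  # proof the plaintiff owns the debt
]

def validate_filing(filing: dict) -> list[str]:
    """Return a list of problems; an empty list means the filing passes."""
    problems = [f"missing field: {name}"
                for name in REQUIRED_FIELDS if not filing.get(name)]
    amount = filing.get("amount_claimed")
    if isinstance(amount, (int, float)) and amount <= 0:
        problems.append("amount_claimed must be positive")
    return problems

# An incomplete filing is bounced before it reaches a clerk's desk:
print(validate_filing({"defendant_name": "Jane Doe", "amount_claimed": 1200}))
# → ['missing field: service_address', 'missing field: debt_ownership_doc']
```

Rejecting at submission time, as ordinary web forms do, shifts the cost of finding and correcting errors back onto the high-volume filer rather than overworked court staff.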

Simple court rule changes or document design choices like this can be the difference between people getting a fair hearing and accidentally defaulting.

More generally, embracing data can facilitate learning opportunities that courts have historically underinvested in: Courts could better understand parties’ legal needs, triage cases, and be more responsive to changes in filing trends; researchers could build a more complete picture of what legal interventions actually work, and where help is and isn’t effective.

Sometimes it’s just a simple policy question: For example, why do 46 states allow consumer debt to be turned into wage garnishments in the first place?

Decades past due, it is time to reform service – the process of notifying someone that they face a legal case. Plaintiffs and process servers alike should bear more responsibility for ensuring and proving that a defendant actually receives notice of a case, especially when that person is unlikely to have legal representation.

If any data broker off the street can buy and sell your address and location, courts should be able to verify a defendant’s current address and prove that process servers actually visited them.

Most of all, courts, policymakers and the legal profession they oversee must view the rise of software-powered advice for what it is: a cry for systemic reform. If software will continue to be a source of legal help and expertise, policymakers can help articulate the duties that makers of legal assistance software owe their users, from helping minimise errors and mistakes to protecting data about someone’s search for help.

It’s always tempting to imagine the wild futures a new technology could unlock. But in the present, it’s the mundane problems that grind people down, and erode their confidence that a better future is possible.

For most people, the future of law doesn’t need to be an endless stream of AI-generated legal threats, of apps and wizards and bots helping to navigate needlessly complicated legal mazes. It just needs to be a source of help for the human problems people encounter every day.

  • A Wired report