The Right to a Human Regulator?
With the Supreme Court considering SEC v. Jarkesy last Term, much of the administrative law field debated whether federal agencies should be able to adjudicate disputes in-house or whether some disputes must instead be tried before a jury in an Article III federal court. The Supreme Court ultimately held that the Seventh Amendment entitles a defendant to a jury trial when the SEC seeks civil penalties for securities fraud. Since then, we have debated the scope of that holding for other agency adjudication schemes. That debate will no doubt continue.
In the regulatory trenches, however, a different yet related debate is ongoing: when it comes to regulatory activities, how much can agencies rely on machines to make decisions? Over at The Regulatory Review, Cary Coglianese has an engaging essay, entitled A Right to a Better Decision, which reviews Aziz Huq's important law review article A Right to a Human Decision.
Here’s a snippet from the essay:
In power centers around the world, policymakers, judges, and lawyers are grappling with the question of what role humans versus machines should play in making governmental decisions. This moment of collective reflection makes as timely as ever an important law review article written by legal scholar Aziz Huq entitled, A Right to a Human Decision. Huq analyzes possible normative justifications for such a right and finds each wanting. He suggests that, instead of insisting on a right to a human decision, we should insist on a right to better decisions—whether by humans or by machines.
Over the last year, calls for a right to human decisions have grown strikingly palpable, as ChatGPT and other large language models have demonstrated remarkable proficiency at tasks long performed only by humans. Lawyers have especially taken note because the version of ChatGPT released in March 2023 passed the uniform bar exam all on its own—and at the 90th percentile.
. . .
Although Huq ultimately finds no intrinsic moral objection to government’s reliance on automated systems, he recognizes that limits to their adoption can and should be based on “technical constraints” and “practical grounds.” In other words, we should always ask whether an automated system is actually a better one. And by “better,” Huq means a system that leads to “a well-calibrated machine decision that folds in due process, privacy, and equality values.”
This means, in the end, that the challenge with governmental use of AI tools is to make sure that these tools have been thoughtfully designed, adequately tested and validated, and repeatedly subjected to audits to ensure that any problems that arise can be addressed early. These are, as it happens, some of the basic parameters of the emerging normative structures governing AI in the public sector. Around the world, governments are establishing rules and standards that call for AI tools to be tested and audited thoroughly, especially before they are put into use in consequential ways.
With his analysis of claims to a right to a human decision, Huq has produced a major work of legal scholarship that, even in the face of rapid technological change, is likely to remain relevant and important for years to come. Eventually, though, society may reach a point where calls for a right to a human decision not only fade away but actually are replaced with calls, at least in some instances, for a moral or legal right to machine decisions—ones that are better than those made by humans.
Definitely go read Coglianese’s essay here.