Notice & Comment

The Transparency Machine, by Talia Gillis

*This is the sixth post in a symposium on Orly Lobel’s The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.*

Orly Lobel’s rich and insightful book provides a clear-eyed view of the use of artificial intelligence in personal and societal domains, and of its distributional implications. The book sets out to evaluate how technology can promote equality, which is far from the natural perspective of us law professors, who tend to feel more comfortable as technology alarmists. Lobel challenges us to take the perspective that “digitization and automation are here to stay” and to find ways in which technology can “do better than our current systems” even when it is not perfect.

Lobel’s book raises many important questions about how technology can promote equality. I would like to focus on two ways, highlighted by the book, in which fairer outcomes depend on AI’s capacity to increase transparency. The first way in which AI and automation can increase transparency is by explicitly stating the basis for decision-making. Machine-learning decision-making is often described as opaque, a black box, primarily because of the complex, non-linear ways in which algorithms use features to classify and make predictions, but, as Lobel points out, “what about the black box of the human mind?” When a mortgage broker offers one borrower a higher interest rate than another, is it because that borrower is more likely to default, because they face fewer outside options, or because of some other bias? It may be impossible to answer this question definitively in a world of human decision-making. As I have discussed in my own work, in a world of mortgage pricing algorithms there is likely greater transparency as to the basis of an offer or recommendation. When decisions are automated and based on empirical relationships rather than human intuition, we may be better able to understand, analyze, and criticize them.
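To make the contrast concrete, here is a minimal sketch, not drawn from the book, of a rate-setting rule whose basis is fully inspectable. The feature names and weights are invented for illustration; the point is only that, unlike a broker’s intuition, every adjustment to the offered rate can be traced and contested.

```python
# Hypothetical, illustrative pricing rule: every input's effect on the
# offered rate is stated explicitly, so the basis of the offer is auditable.

BASE_RATE = 0.045  # assumed baseline mortgage rate (illustrative)

# Each (feature, weight) pair states exactly how an input moves the rate.
RATE_ADJUSTMENTS = {
    "debt_to_income": 0.020,   # higher DTI -> higher rate
    "prior_defaults": 0.015,   # each prior default raises the rate
    "loan_to_value": 0.010,    # higher LTV -> higher rate
}

def offered_rate(borrower: dict) -> float:
    """Return the offered rate, printing each traceable adjustment."""
    rate = BASE_RATE
    for feature, weight in RATE_ADJUSTMENTS.items():
        contribution = weight * borrower[feature]
        print(f"{feature}: +{contribution:.4f}")
        rate += contribution
    return rate

borrower = {"debt_to_income": 0.4, "prior_defaults": 1, "loan_to_value": 0.8}
print(f"offered rate: {offered_rate(borrower):.4f}")
```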

The second way in which, as Lobel shows, technology can promote transparency is by requiring an explicit articulation of the tradeoffs among different policy goals. As Lobel writes, “There are times when we face difficult choices between competing values.” In a world of opaque human decision-making, we are not always required to be explicit about those competing values. Take, for example, fair lending’s disparate impact doctrine. Scholars have long debated whether disparate impact is meant to address discriminatory intent masked by facially neutral policies, or whether it is meant to address policies that create impermissible disparities, regardless of intent, in the promotion of equal financial opportunity. In reality, this distinction may not be meaningful when humans with hidden intentions make credit decisions. But in the algorithmic context the distinction is pivotal: an algorithm built to predict creditworthiness may not intend to discriminate, yet could still produce impermissible disparities. If so, how do we trade off the increased accuracy of default predictions against their distributional impact? The increased transparency of automation relative to human decision-making requires us to confront this question head-on.
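To see why the tradeoff becomes unavoidable once decisions are automated, consider a hedged, self-contained sketch on fabricated data: each choice of approval threshold yields both an accuracy figure and an approval-rate gap between two hypothetical groups, so whoever sets the threshold must pick a point on the tradeoff explicitly.

```python
# Fabricated records for illustration only:
# (group, predicted default probability, whether the applicant actually defaulted)
applicants = [
    ("A", 0.10, False), ("A", 0.30, False), ("A", 0.60, True), ("A", 0.80, True),
    ("B", 0.20, False), ("B", 0.45, False), ("B", 0.55, True), ("B", 0.90, True),
]

def evaluate(threshold: float):
    """Return (accuracy, approval-rate gap) for a given approval threshold."""
    approved = {"A": 0, "B": 0}
    total = {"A": 0, "B": 0}
    correct = 0
    for group, p_default, defaulted in applicants:
        approve = p_default < threshold  # approve applicants below the risk cutoff
        total[group] += 1
        approved[group] += approve
        # A decision counts as accurate if non-defaulters are approved
        # and defaulters are denied.
        correct += (approve and not defaulted) or (not approve and defaulted)
    accuracy = correct / len(applicants)
    gap = approved["A"] / total["A"] - approved["B"] / total["B"]
    return accuracy, gap

# Each threshold is a different, explicit resolution of the tradeoff.
for threshold in (0.35, 0.50, 0.65):
    accuracy, gap = evaluate(threshold)
    print(f"threshold={threshold:.2f}  accuracy={accuracy:.2f}  approval gap={gap:+.2f}")
```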

It remains unclear whether the transparency opportunities brought about by AI will be embraced. As Lobel notes, “We measure what we care about. We collect data about what is important to us,” and there is little ability to measure and analyze distributional impacts if we choose not to collect demographic information. This has long been a problem in fair lending, where non-mortgage lenders are limited in their ability to collect information on protected characteristics for the purpose of monitoring disparate impact. To avoid creating an algorithmic myth of colorblindness, we must collect the information necessary to measure equality.
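The measurement point can be made concrete with a short sketch: a four-fifths-style adverse impact ratio (the records and the use of the 0.8 benchmark below are illustrative assumptions) is only computable when group labels are collected alongside outcomes; delete the group field and no disparity metric exists at all.

```python
# Hypothetical decision records -- without the group field,
# no disparity metric can be computed.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]

def adverse_impact_ratio(records):
    """Return (min/max approval-rate ratio, per-group approval rates)."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = adverse_impact_ratio(decisions)
print(f"approval rates: {rates}")
print(f"adverse impact ratio: {ratio:.2f} (values below 0.8 often flag concern)")
```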

Lobel’s book contributes to the discussion of AI regulation at a critical moment, as countries and organizations consider and adopt rules to ensure fairness in algorithmic decision-making. The EU’s draft Artificial Intelligence Act, circulated in April 2021, for example, avoids defining algorithmic discrimination and often falls back on opacity by requiring humans-in-the-loop as a would-be panacea for distributional and fairness concerns. Effective regulation needs instead to focus on a clearer articulation of the goals of AI regulation and careful consideration of how data collection and automation can actually promote fairness. Lobel’s book is a major step in steering us in the right direction.

Talia Gillis is an Associate Professor and the Milton Handler Fellow at Columbia Law School, and an Affiliate of the Columbia Data Science Institute.