Discrimination and the Human Algorithm, by Mark Lemley
*This is the third post in a symposium on Orly Lobel’s The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.*
Legal scholarship around artificial intelligence (AI) has focused enormous attention on the problem of discrimination by AI. And justly so. Scholars have pointed out that algorithms make categorical judgments about people based on classifications, including race, gender, and other attributes that the law treats as problematic. Indeed, they may have no alternative but to do so. And even when AIs try to avoid racial or gender profiling, they often end up using proxies that are correlated with race, like geography, education, class, or other attributes. The conclusion of this scholarship is relatively straightforward: AI doesn’t solve, and may even worsen, the problem of discrimination, perpetuating it and ensconcing it in a numerical result that may be impossible to dislodge.
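To see how a proxy can do the work of a forbidden classification, consider a minimal sketch. The data, feature names, and numbers below are entirely hypothetical, invented for illustration; the point is only the mechanism: a model that is never shown the protected attribute can reconstruct it from a correlated stand-in like geography.

```python
# A minimal sketch of proxy discrimination on synthetic data.
# "group", "zip_code_score", and "hired" are invented names, not from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute, deliberately withheld from the model.
group = rng.integers(0, 2, n)

# A proxy feature correlated with group membership, the way neighborhood
# often correlates with race.
zip_code_score = group + rng.normal(0, 0.5, n)

# Historical outcomes encode past human bias: group 1 was hired far less often.
hired = (rng.random(n) < np.where(group == 1, 0.2, 0.6)).astype(int)

# Train only on the proxy; the protected attribute has been "removed."
model = LogisticRegression().fit(zip_code_score.reshape(-1, 1), hired)
scores = model.predict_proba(zip_code_score.reshape(-1, 1))[:, 1]

print("mean predicted hire rate, group 0:", round(scores[group == 0].mean(), 2))
print("mean predicted hire rate, group 1:", round(scores[group == 1].mean(), 2))
# The disparity persists even though the model never saw "group":
# the correlation carries the bias in through the side door.
```

Nothing in the sketch is malicious; the disparity rides in entirely on the correlation between the proxy and the protected attribute.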
But all advantages are comparative. Before we reject the use of AI, we should be careful to avoid a human alternative that is worse. And it may well be. AIs don’t set out to discriminate. They might be programmed to do so, but that is likely to be rare. Rather, if an AI discriminates, it is generally because it is trained on existing data or modeled on behavior or goals set in the human world, and that data or behavior turns out to be discriminatory. When Amazon’s hiring AI found that the two best predictors of success at Amazon were being named Jared and playing lacrosse, it wasn’t discriminating; it was reflecting back a history of human hiring practices that had advantaged rich white men. That doesn’t mean we shouldn’t worry about AI discrimination; we should. But discrimination by AIs is almost always a reflection of discrimination by people.
There is one important difference between AIs and people: AIs have to show their work. No judge sentences a criminal defendant unaware of whether that defendant is a man or a woman. And they undoubtedly take that fact into account. But they don’t talk about gender in explaining why they sentence a man to a longer prison term than they would a woman. The difference is there, but it’s hidden. Nor does the cop who disproportionately stops minority drivers have to explain that he is doing so, perhaps not even to himself. AIs have no such luxury. So it’s not just that AI discrimination tends to reflect human discrimination. It is that the human discrimination is so often hidden. Amazon doesn’t have discriminatory hiring policies, and the people doing the interviewing likely don’t think they are discriminating. But they end up hiring people who look like them, people who are named Jared and play lacrosse. Much of the backlash against discrimination by AIs reflects, I suspect, not a worse record by AIs than humans but the fact that we can actually see what is going on behind the scenes.
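The “showing your work” point can be made concrete in the same hypothetical setup. A fitted model’s reliance on a biased proxy is not locked inside anyone’s head; it can be read directly off the learned weights. Again, this is a sketch on synthetic data with invented feature names, not a depiction of any real hiring system.

```python
# A sketch of how a model "shows its work": inspect the learned weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)                    # hypothetical protected attribute
zip_code_score = group + rng.normal(0, 0.5, n)   # biased proxy (invented)
skill_score = rng.normal(0, 1, n)                # genuinely job-related signal (invented)
X = np.column_stack([zip_code_score, skill_score])

# Labels mix a real signal with a historical penalty against group 1.
hired = (skill_score - 1.5 * group + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression().fit(X, hired)
for name, coef in zip(["zip_code_score", "skill_score"], model.coef_[0]):
    print(f"{name}: weight = {coef:+.2f}")
# The substantial negative weight on zip_code_score is the audit trail a human
# interviewer never leaves: the bias is explicit, measurable, and correctable.
```

Coefficient inspection is only the simplest form of this auditability; more opaque models require dedicated explanation tools, but even their outputs can be tested and measured in ways a hiring partner’s instincts cannot.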
Orly Lobel’s book The Equality Machine provides a welcome and much-needed counterweight to the literature demanding that we restrict AIs because they discriminate. She does not deny the problem of discrimination by AIs. Quite the contrary. But she correctly notes that AI offers something human decision-making doesn’t: the chance to be deliberate in designing systems that recognize and confront the imperfections too often hidden in our human society. As she notes, “[t]o embrace digitization as a force for societal good, we don’t need to find it perfect. We only need to be convinced of its potential and ability to do better than our current systems.” Lobel at 5. As Lobel writes, “human decision-making is inherently limited, flawed, and biased.” Lobel at 5. AI offers the potential to design systems that do better than we ourselves have proven able to. AI won’t be perfect. But by making explicit the biases that are too often implicit, we can recognize, confront, and hopefully reduce them. Lobel’s book offers a blueprint for building that brighter future.
Mark Lemley is the William H. Neukom Professor at Stanford Law School and a Partner at Durie Tangri LLP.